Uncommon Descent Serving The Intelligent Design Community

Reinstating the Explanatory Filter


In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.

P.S. Congrats to Denyse O’Leary, whose Post-Darwinist blog tied for third in the science and technology category from the Canadian Blog Awards.

Comments
I suggest people here consider what is meant by "specified" or "specifies." DNA specifies many other things independent of itself that have a function, a very organized function, just as the letters of the alphabet, when properly arranged, specify meaning in a language, or a computer code specifies operations in the hardware of a computer. The only place this appears in nature is in life. Nowhere else does such a phenomenon happen, where one configuration specifies another configuration that has an organized function. Nor do there appear to be any instances where new specificity in life has appeared that is more than a trivial addition or subtraction to the current specificity. Something like the EF may be an attempt to cover every possible non-chance, non-law situation, but that attempt may be too ambitious. As far as the evolution debate is concerned, this universality is not needed. So argue over the universality of the EF, not over whether it applies to life.

As an aside, chance is an intrinsic part of modern evolutionary theory and always has been, whatever the name. It operates mainly on the variation side of the theory: how does one explain the origin of new variation in a population's gene pool? The answer is the so-called engines of variation that add so-called new genetic information to the gene pool. On the genetic side, which includes natural selection, chance also operates, though to a lesser extent. If sexual reproduction does not produce the right combinations of genes or genetic elements, natural selection may not have a chance to work in the way it is suggested to work. The environment is very chance-oriented. Theoretically, selection and environment will lead to one gene pool in the future, but chance elements could modify or even thwart this from happening. So there are chance elements on the genetic side as well.

Then there is the discussion of whether there is anything called chance at all, or whether chance is just our inability to describe the deterministic forces at play, forcing us to use probability distributions to cover the array of situations even when each instance may be determined.jerry
December 13, 2008, 06:35 AM PDT
Patrick [61]: Snowflakes are crystals. Crystals are just the same simple pattern repeated. Simple, repeated patterns are not complex. ... The problem is to explain something like the genetic code, which is both complex and specified.

Patrick, I feel fairly certain your understanding of CSI is flawed here. The snowflake would be both complex and specified. [Although I could be incorrect and am still trying to clarify this for myself -- ALSO see Para. 7 below.] In the Dembskian scheme, a binary number's complexity is determined strictly by the number of bits it contains. Considering that the UPB is stated in reference to the number of possible particle interactions or some such, I think we would have to look at the total number of atoms in the snowflake to determine its complexity. That's what it's composed of: atoms. So a snowflake is complex in the design inference on that basis alone. Also note that the only type of patterns that can be referenced in the design inference are simple patterns. If you consider some biological functionality comprised of a great number of interworking parts, that's not something the design inference can handle. So the pattern of the snowflake is right in line with what the design inference typically references. Consider the bacterial flagellum: the pattern identified has only four components. All the design inference does is rule out chance, and then it's up to you to decide whether its cause is mechanism or not (i.e. either necessity or design). The thing with the snowflake is that we presumably already know about a mechanism to explain it. The way the design inference is usually employed is to assume that if a mechanism is not known, then we are justified in saying it's design. It almost seems a pointless exercise in the context of science to rule out chance, as no one (Darwinians or whomever) would consider chance an explanation for anything (or at least they wouldn't admit it). The goal of science is to explicate, i.e. to propose a mechanism. Chance doesn't explain anything. Neither does design, for that matter.

Para 7: So basically any object in nature that has an identifiable pattern could not have happened by chance. OTOH, there does seem to be a way that some treat CSI wherein the complexity (probability) of an object is determined not by the number of atoms it contains, for example, but by our knowledge of mechanisms that could have caused it. All non-life phenomena are assumed to be in that category; that is, it is assumed that we know about mechanisms to cause them, so they are automatically labeled probable and non-complex. That leaves life. It is obvious to everyone that life is not explained by the typical physical forces we see operating on earth (e.g. wind, erosion, and so on); thus the hand-waving and appeals to great lengths of time in most naturalistic explanations. So life would be considered highly improbable with respect to the physical laws that we DO know about. But it is an appeal to ignorance to say that because we don't know about any other laws (i.e. mechanisms) to account for life, no such laws exist. Contingency/laws can explain complexity but not specification. ... On the other hand, laws can explain specification but not complexity. (?) A mechanism can explain anything if the mechanism is known to exist. There is certainly no law that says laws (i.e. mechanism, necessity) cannot be complex.JT
December 13, 2008, 05:39 AM PDT
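JT's bit-count reading of complexity can be made concrete. This is a minimal sketch of that simplification only, not Dembski's full apparatus; the 10^150 figure is the published universal probability bound, which works out to roughly 500 bits.

```python
import math

# Dembski's universal probability bound (UPB): 10^150 probabilistic
# resources, i.e. about 500 bits.
UPB_BITS = math.log2(10**150)  # ~498.3 bits

def bits_of_complexity(n_binary_digits: int) -> float:
    """Complexity of a binary string measured simply as its length in bits,
    the simplification JT describes: each extra bit doubles the state space."""
    return float(n_binary_digits)

# A pattern shorter than ~500 bits can never exceed the UPB on its own.
print(bits_of_complexity(128) > UPB_BITS)   # False: within the bound
print(bits_of_complexity(1000) > UPB_BITS)  # True: beyond the bound
```

On this crude measure, what matters is only the size of the configuration space, which is why JT argues a snowflake's atom count alone would make it "complex."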
Mark, Trib, Patrick: Passed by while doing long wait downloads. Orgel, 1973:
Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
That should help. Maybe my own comment here will help as well. [Notice how different aspects make an appearance in discussing dendritic snowflakes.] Hey, the 43 Mbyte download just finished; back to work. GEM of TKIkairosfocus
December 13, 2008, 05:27 AM PDT
Patrick #61. Thanks. However, repetitive structures, such as crystals, do constitute specificity. I was responding to Tribune7 #51: "Dembski addresses crystals and patterns & such. If they aren't specified they aren't designed." So with luck Tribune7 will now accept that you are right and that they are specified.Mark Frank
December 12, 2008, 10:39 PM PDT
PS: And law-of-averages expectations implicitly, intuitively and naturally -- I daresay inescapably -- bring in the issues underlying Fisherian elimination, even in a context where an officially "Bayesian" approach is being used that does not explicitly refer to targets and rejection regions, or islands of function and seas of non-function, etc.kairosfocus
December 12, 2008, 09:29 PM PDT
Patrick, at 55: Thanks for the thought. H'mm, I thought the definition of aspects would bring out the point that one is separating out for analysis what is in fact inseparable practically? Namely:
as·pect . . . . 3. A way in which something can be viewed by the mind: looked at all aspects of the situation. [AmHDict] . . . . 1. a distinct feature or element in a problem or situation [Collins] . . . . a distinct feature or element in a problem; "he studied every facet of the question" [WordNet 3.0] . . . . Synonyms: phase, aspect, facet, angle2, side These nouns refer to a particular or possible way of viewing something, such as an object or a process: Phase refers to a stage or period of change or development: "A phase of my life was closing tonight, a new one opening tomorrow" Charlotte Brontë. Aspect is the way something appears at a specific vantage point: considered all aspects of the project. A facet is one of numerous aspects: studying the many facets of the intricate problem. Angle suggests a limitation of perspective, frequently with emphasis on the observer's own point of view: the reporter's angle on the story. Side refers to something having two or more parts or aspects: "Much might be said on both sides" Joseph Addison. [Synonyms at Phase, AmHDict]
Is there a better term out there? I am also trying not to make the diagram into a spaghetti-monster with all sorts of loops and iteration-loop counts, etc. I do believe one may take a law pass a chance pass -- i.e. "further investigation" -- and a design pass on the same situation, looking at different aspects. E.g. consider a torsional pendulum expt [a wire with a disc weight on the end, oscillating by twisting back and forth] where one isolates the law-like regularities, the experimental scatter that tends to hide that, and the problems due to bias due to experiment design. [We used to use some fairly sophisticated graphing tricks to isolate these features, e.g. log-log and log-lin plots to linearise and then take out scatter through plotting best straight lines etc.] Maybe what is needed is to give an illustrative instance or two in explaining the general utility of the EF, then bring up case studies that show how it relates to the cases of real interest? Thanks. __________ PO and Mark: Re law vs design . . . As you will see from my discussion at 46, law ties to reliable natural regularity, so the similar set-up will repeatedly get very similar results. So, there is low contingency. In case where similar setups give divergent results, then we look for the factors associated with high contingency: chance (undirected) and design (directed). Think about the die example, and how the House in Vegas looks at it: it wants undirected contingency and takes steps to ensure that that is what happens. ______________ PO: Re flat distributions. In a great many relevant cases of practical interest, flat or near-flat distributions are either what is supposed to have been there, or is a very reasonable assumption in light of credible models of the dynamics at work. Cf my reproduction of GP's very relevant discussion on elimination in biology here. 
In the case of Caputo, he was supposed to preside over a fair system, so the result should have been close enough to what we expect from a fair coin-toss exercise as makes no difference. As you know, per basic microstate and cluster considerations of thermodynamics, the predominant cluster of outcomes for such will strongly dominate, i.e. a near 50-50 pattern is heavily to be expected. C saw a strong run, on the assumption of innocence. That implies that he had an obvious duty to fix the problem, and simply calling in D and R scrutineers and flipping a coin would have done nicely. [He didn't even need to think of using an ideal "lens-shaped 2-sided die" to eliminate the coin-edge problem.] So, even the inadvertent unfair-coin idea comes down to design by willful negligence, once we factor in the statistical process control implications of a strong run. And the issue of runs emerges as a natural issue given the valid form of the layman's intuitive law of averages and related sampling theory. __________ G'day all GEM of TKIkairosfocus
December 12, 2008, 09:19 PM PDT
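The Caputo arithmetic above is easy to check. Assuming each of the 41 ballot drawings is an independent fair coin toss (the null model the comment describes), the chance of a run at least as extreme as the observed 40-of-41 Democrats-first orderings is:

```python
from math import comb

# Caputo: 40 of 41 ballot orderings favored Democrats. Under a fair
# coin, the probability of a result at least that extreme (40 or 41
# "heads" out of 41 tosses) is the upper tail of Binomial(41, 1/2).
n = 41
p_extreme = (comb(n, 40) + comb(n, 41)) / 2**n
print(p_extreme)  # ~1.91e-11, roughly 1 in 50 billion
```

That tail probability is far above the UPB, which is why the Caputo discussion turns on the unexpectedness of the strong run rather than on the universal bound itself.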
Patrick[64], As I said on Olofsson I, I bet there is an assumption of a uniform distribution in the paper. We shall see. Does anybody know where it will appear?Prof_P.Olofsson
December 12, 2008, 04:35 PM PDT
#60
Your post is quite hard to understand, but I think that what you are saying is that it is OK to deduce design by elimination of other causes but not OK to deduce to necessity by elimination of other causes. If so, why?
Come on; my English is certainly quite bad, but I don't think it is hard to understand the basic points (provided that one wants to understand them). Anyway, let us see the basic points. Your argument fails because necessity and design are asymmetric explanations and cannot be exchanged in the first step of the EF. Why does it simply make no sense to put design detection as the first step? Because there are only two ways to detect design for a given entity: 1. I already know that the entity was designed; in this case the EF is useless. 2. I don't know anything about it; in this case design inference has to be done by looking at the entity and finding in it overwhelming proof that it was designed and not the mere output of necessity and chance. But this is just the output of an EF that has previously excluded that the entity could have arisen by means of natural, non-driven forces. It is for this reason that design and necessity are not exchangeable in the EF, and it makes no sense to require it. At the end I provided an example to show how this kind of asymmetry is typical of many different problems. Let us consider the task of deciding whether a given natural number is prime. To the best of our knowledge there isn't any direct algorithm or formula that allows us to say that a given number N is or is not prime. In fact, to solve the problem one needs to verify that the number N cannot be decomposed as a product of other prime numbers. Now, isn't this task similar to design detection, in which the exclusion of the other explanations is required? And wouldn't it be silly nonsense to ask that the decision about N be put as the first step?kairos
December 12, 2008, 01:17 PM PDT
Bill had a more interesting comment in that other thread:
There’s a paper Bob Marks and I just got accepted which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).
If I may interpret what I think he's saying: even if an Indirect Stepwise Pathway were found to be capable, ID would not be falsified completely, as the problem would then shift to the active information injected at OOL.Patrick
December 12, 2008, 01:14 PM PDT
vjtorley, Your comment actually makes my point for me, which is that the WMAP data are relevant in deciding whether the universe is infinite. Therefore Dembski's objection in The Chance of the Gaps does not apply:
Nevertheless, even though the four inflatons considered here each possesses explanatory power, none of them possesses independent evidence for its existence.
You write:
Third, even if it could be established on inferential grounds that the universe is infinite, nevertheless, when making design inferences, it might still make perfectly good sense to confine ourselves to the event horizon (i.e the observable universe), which is finite:
That would make even less sense than it would have made for Eratosthenes, having measured the diameter of the Earth, to assume that the rest of the world must resemble the Mediterranean.ribczynski
December 12, 2008, 12:49 PM PDT
Patrick, that seems to be the phrasing :-)tribune7
December 12, 2008, 12:44 PM PDT
The paper does not explicitly talk about crystals but it defines specificity in terms of a pattern that can be expressed in a small number of symbols. Crystals clearly fall into that category.
To save time I'll just quote myself:
Snowflakes are crystals. Crystals are just the same simple pattern repeated. Simple, repeated patterns are not complex. Repetitive structures, with all the info already in H2O, whose hexagonal structure/symmetry is determined by directional forces (i.e. wind, gravity), are by no means complex. However, repetitive structures, such as crystals, do constitute specificity. Snowflakes, although specified, are also low in information, because their specification is in the laws, which of course means that node 1 in the Explanatory Filter (Does a law explain it?) would reject snowflakes as being designed. Contingency/chance can explain complexity but not specification. For instance, the exact time sequence of radioactive emissions from a chunk of uranium will be contingent, complex, but not specified. On the other hand, laws can explain specification but not complexity. The formation of a salt crystal follows well-defined laws, produces an independently known repetitive pattern, and is therefore specified; but like the snowflake, that pattern will also be simple, not complex. The problem is to explain something like the genetic code, which is both complex and specified.
Patrick
December 12, 2008, 12:39 PM PDT
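One way to see the repetitive-pattern point in the quote above is via compression: a crystal-like repeated string has a very short description, while a contingent string does not. The strings below are illustrative stand-ins, and zlib is only a rough proxy for descriptive complexity, but the contrast is the one the comment draws.

```python
import random
import zlib

# A "crystal": the same simple unit repeated -- specified but not complex.
crystal = b"AB" * 500  # 1000 bytes, period-2 pattern

# A "contingent" string: pseudorandom bytes -- complex but not specified.
random.seed(0)  # fixed seed so the example is reproducible
noise = bytes(random.randrange(256) for _ in range(1000))

print(len(zlib.compress(crystal)))  # tiny: the pattern has a short description
print(len(zlib.compress(noise)))    # ~1000+: no shorter description exists
```

The repeated pattern compresses to a handful of bytes; the pseudorandom one is essentially incompressible, matching the "specification as compressibility" reading discussed in this thread.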
#58 Kairos Your post is quite hard to understand, but I think that what you are saying is that it is OK to deduce design by elimination of other causes but not OK to deduce to necessity by elimination of other causes. If so, why?Mark Frank
December 12, 2008, 12:31 PM PDT
#53 He talks about it in his 1998 book Mere Creation: Science, Faith and Intelligent Design. The phrasing is not what I remember but the idea is the same. That is seven years prior to the paper Specification: The Pattern That Signifies Intelligence, which he said on this site just a few days ago was definitive. The paper does not explicitly talk about crystals, but it defines specificity in terms of a pattern that can be expressed in a small number of symbols. Crystals clearly fall into that category.Mark Frank
December 12, 2008, 12:28 PM PDT
#40 Mark Frank
What interests me is the parallel between this first step and the first and second steps in ID version. Why do we need to start with necessity and chance? Why not start with eliminating design and thus concluding necessity and/or chance?
Now I've understood your point, but it seems to me that your argument fails at the beginning, where design and necessity are basically treated as symmetric explanations for the production of a given artifact/natural entity. This doesn't seem to be the case. Necessity and design are asymmetric explanations because, in the absence of preliminary information, design recognition is performed precisely by excluding that the entity could have arisen by means of non-driven activities. In this sense it simply makes no sense to put design detection as the first step, because there are two possible cases: 1. I already know that the entity was designed; in this case the EF is useless; OR (aut) 2. I don't know anything about it; in this case design inference is the possible output of a filter that has previously excluded the other two possibilities. To explain the difference I would propose a mathematical example (please take it only for what it is: a convincing analogy). Let us consider the task of deciding whether a given natural number is prime. To the best of our knowledge there isn't any direct algorithm or formula that allows us to say that a given number N is or is not prime without verifying that N cannot be decomposed as a product of other prime numbers. Now, isn't the task of recognizing whether N is prime in a certain sense similar to design detection, which also requires the exclusion of the other possibilities? Wouldn't it be silly nonsense to ask for the reverse of the verification steps?kairos
December 12, 2008, 12:07 PM PDT
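kairos's primality analogy can be sketched directly. Trial division declares N prime only after excluding every candidate factorization, mirroring the eliminative structure he attributes to the EF. (Direct deterministic tests do exist; this sketch reproduces the naive method the comment has in mind, not the state of the art.)

```python
def is_prime(n: int) -> bool:
    """Trial division: N is declared prime only after every candidate
    factor up to sqrt(N) has been excluded -- the eliminative structure
    the comment compares to the Explanatory Filter."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:   # a factorization exists: eliminate "prime"
            return False
        d += 1
    return True          # nothing eliminated it: infer primality

print([p for p in range(2, 30) if is_prime(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The positive verdict is reached only as the residue of failed eliminations, which is exactly the asymmetry the analogy is meant to illustrate.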
Your attempts to use empirical evidence and logic to refute a bias are utterly illogical. Sal Gal, there is a logic to it. Our society and science are based on a philosophy that isn't reasonable -- i.e. that only answers provided by methodological naturalism are acceptable -- and Dembski successfully confronts that philosophy on its own terms. We can either have a society based on the view that maybe there is a God (and behave accordingly) or one based on the view that maybe there isn't a God (and behave accordingly). Right now we are the latter, and most seem to be making legal, economic, political and cultural decisions based on such. One of the Fox cartoons -- I think it was Family Guy -- had as a recurring punchline "Laura Bush killed a guy," an apparent reference to her accident as a teenager. If those scriptwriters had just an inkling that they might one day have to explain why they did that, in circumstances that would have extreme consequences for themselves, I don't think they would have written it.tribune7
December 12, 2008, 11:27 AM PDT
William A. Dembski, Any universe we know is finite. Let's stick to that universe. The universe is. It is not an instance of anything. There is no sample space. The universe is not an event. There is no probability that the universe is the way it is. There is no information in the universe as a whole. The universe is what it is. We cannot learn what the universe is by induction. There is no bias-free learning. Your attempts to use empirical evidence and logic to refute a bias are utterly illogical. You have indicated that you regard methodological naturalism as spiritually pernicious because it turns inexorably into philosophical naturalism (materialism). I'm actually with you there. But I will always hold that empirical science is a workhorse, not a racehorse. It's not going to get us to the Truth before we die. I believe that you haplessly align yourself with the atheists when you make too much of science.Sal Gal
December 12, 2008, 11:01 AM PDT
kf,
I wonder if my recently updated EF flowchart and discussion here may be of further help as well.
I like your flowchart better, but it still does not make it explicitly clear that Design can incorporate Necessity and Chance. I especially like how you have a "Further Inquiries" node at the end. I think you could improve that by listing conditions which would warrant a re-evaluation.Patrick
December 12, 2008, 10:37 AM PDT
Mark, I can't remember where I read it first either. I think it was online somewhere. He talks about it in his 1998 book Mere Creation: Science, Faith and Intelligent Design. The phrasing is not what I remember but the idea is the same.tribune7
December 12, 2008, 09:48 AM PDT
"Dembski addresses crystals and patterns & such. If they aren't specified they aren't designed." I get muddled with all the different books. Was this before or after he defined specified in terms of compressibility?Mark Frank
December 12, 2008, 08:56 AM PDT
ribczynski, you write: "As Monton points out, patterns in the cosmic microwave background provide independent evidence for an infinite universe." Not so fast. I'd like to refer you to an article at http://arxiv.org/PS_cache/arxiv/pdf/0801/0801.0006v2.pdf which makes a strong case on observational grounds that we live in a finite dodecahedral universe. Monton seems to base his case for an infinite universe upon recent WMAP observations indicating that the universe is flat. As I am not a scientist, I will keep my comments brief.

First, according to the NASA Website http://map.gsfc.nasa.gov/universe/uni_shape.html , "We now know that the universe is flat with only a 2% margin of error." However, according to the same Web page, the universe is flat only if the density of the universe EXACTLY equals the critical density. I respectfully submit that a 2% margin of error is, by itself, woefully insufficient evidence for the existence of an infinite universe, particularly when other, finite-universe hypotheses are compatible with the WMAP observational data.

Second, the Poincare dodecahedral universe (which is finite) is perfectly consistent with the WMAP data, as far as I am aware.

Third, even if it could be established on inferential grounds that the universe is infinite, nevertheless, when making design inferences, it might still make perfectly good sense to confine ourselves to the event horizon (i.e. the observable universe), which is finite: "The observable universe is the space around us bounded by the event horizon - the distance to which light can have traveled since the universe originated. This space is huge but finite with a radius of 10^28 cm. There are definite total numbers of everything: about 10^11 galaxies, 10^21 stars, 10^78 atoms, 10^88 photons" ( http://universe-review.ca/F02-cosmicbg.htm ).vjtorley
December 12, 2008, 08:43 AM PDT
No function. They are extraordinary in their detailed symmetry. Well, they would be filtered out. Dembski addresses crystals and patterns & such. If they aren't specified they aren't designed. However, I have since thought of a better example - courtesy of one of the many Daniel Bernoullis in 1734. As you probably know, all the planets (with the possible exception of Pluto) have orbits that are closely aligned. Ahhh, now that's a different subject and you're getting into Privileged Planet territory :-)tribune7
December 12, 2008, 08:37 AM PDT
I think the last node on the EF needs to be split into two nodes. Lumping the specification together with the small probability is confusing. Why not separate them? Would this not make your case for CSI stronger?the wonderer
December 12, 2008, 08:34 AM PDT
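The suggested split can be pictured as a schematic. Every predicate below is a placeholder the analyst would have to supply; this is a sketch of the comment's proposal with the final node separated into a probability check and a specification check, not Dembski's actual formalism.

```python
# Schematic Explanatory Filter with the last node split in two, as the
# comment suggests. The predicates (explained_by_law, probability,
# is_specified) are hypothetical stand-ins the analyst must provide.
def explanatory_filter(event, explained_by_law, probability, is_specified,
                       upb=1e-150):
    if explained_by_law(event):
        return "necessity"
    if probability(event) >= upb:   # split node 1: small probability alone
        return "chance"
    if not is_specified(event):     # split node 2: specification alone
        return "chance"             # improbable but unspecified
    return "design"                 # small probability AND specified

verdict = explanatory_filter(
    "event",
    explained_by_law=lambda e: False,
    probability=lambda e: 1e-200,
    is_specified=lambda e: True,
)
print(verdict)  # design
```

Separating the two checks makes the CSI criterion explicit: an event reaches "design" only by passing both gates, which is the clarity the comment is asking for.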
#46 Kairosfocus Law and design are not simply interchangeable, once we see the sharp contrast in degree of contingency for the two. Can you expand on this? I don't understand what "degree of contingency" means in this sentence. ThanksMark Frank
December 12, 2008, 08:15 AM PDT
# 43 Tribune Mark, what specification do these extraordinary shapes have? What are their functions? No function. They are extraordinary in their detailed symmetry. However, I have since thought of a better example, courtesy of one of the many Daniel Bernoullis in 1734. As you probably know, all the planets (with the possible exception of Pluto) have orbits that are closely aligned. If the orbits were randomly oriented (uniform pdf over angle), then the chance of them being so closely aligned is less than 1 in 3 million. I don't think the concept of specification stands up to detailed scrutiny, but this is close to the idea of "specification as simplicity" in the 2005 paper. So now imagine you are Bernoulli or any contemporary. You have done the probability calculation and now you wonder: how come? So you apply my filter. Were they designed to be so closely aligned? There is no hypothesis about a designer who has the power and inclination (no pun intended) to align them. So we eliminate design. Did they end up this way by chance? We just did this calculation. Not the UPB, but pretty far-fetched. Therefore, it must have been necessity from some as-yet-unknown natural cause. Of course we could use the ID filter, in which case the roles of design and necessity are reversed: no known natural cause of necessity, therefore as-yet-unknown design.Mark Frank
December 12, 2008, 08:11 AM PDT
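A figure of the order Mark Frank quotes can be reproduced under one hedged set of assumptions: six orbits, inclination uniform over [0°, 90°], and an alignment band of about 7.5°. Both numbers are illustrative choices that happen to land near 1 in 3 million, not Bernoulli's actual parameters.

```python
# Back-of-envelope version of the planetary-alignment calculation.
# Assumptions (illustrative, not historical): 6 independently oriented
# orbits, inclination uniform over [0, 90] degrees, all falling within
# a 7.5-degree band of the common plane.
band = 7.5 / 90.0          # chance one randomly tilted orbit lands in band
n_planets = 6
p_all = band ** n_planets  # joint probability under independence
print(f"1 in {1 / p_all:,.0f}")  # 1 in 2,985,984
```

Since 90/7.5 = 12, the joint probability is exactly 12^-6, about 1 in 3 million, which is the order of magnitude the comment cites.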
kf[46], I was sloppy; I meant that the order in which they are investigated ought to be interchangeable. Why not start with design?Prof_P.Olofsson
December 12, 2008, 08:11 AM PDT
Prof PO A brief note: Cf points 5 - 8, no 41. Law and design are not simply interchangeable, once we see the sharp contrast in degree of contingency for the two. GEM of TKIkairosfocus
December 12, 2008, 07:52 AM PDT
PPS: Trib is right to point to the issue of absence of FSCI. The Carlsbad caves can be accounted for on laws plus random chance circumstances, e.g. formation of stalactites and stalagmites, and how they occasionally meet and fuse. So, that is the best current explanation. (BTW Trib, bouncebacks . . . )kairosfocus
December 12, 2008, 07:49 AM PDT
Mark[40], A very good point. Chance is the odd one out because it is eliminated by computing probabilities, but the other two are interchangeable. Unless, of course, one has already decided to infer design -- but that couldn't be the case, could it?Prof_P.Olofsson
December 12, 2008, 07:11 AM PDT
apply it to the extraordinary shapes in the Carlsbad caves. Mark, what specification do these extraordinary shapes have? What are their functions?tribune7
December 12, 2008, 07:09 AM PDT