# A design inference from tennis: Is the fix in?

June 24, 2011 | Posted by News under Intelligent Design, Design inference |

Here:

The conspiracy theorists were busy last month when the Cleveland Cavaliers — spurned by Lebron, desperate for some good fortune, represented by an endearing teenager afflicted with a rare disease — landed the top pick in the NBA Draft. It seemed too perfect for some (not least, Minnesota Timberwolves executive David Kahn) but the odds of that happening were 2.8 percent, almost a lock compared to the odds of Isner-Mahut II.

Question: How come it’s legitimate to reason this way in tennis but not in biology? Oh wait, if we start asking those kinds of questions, we’ll be right back in the Middle Ages when they were so ignorant that …

### 189 Responses to *A design inference from tennis: Is the fix in?*


Dunno if this is right … aren’t the Cleveland Cavaliers basketball?

I think you may have a dose of the Wimbledons?

Correct me if I’m wrong

Ah! Read the link!

Of course you can reason that way in biology. We do it all the time.

Good question? If we could reason our way to the conclusion that a sports event has been fixed, why couldn’t we reason our way to the conclusion that god designed living things? Oh yeah, because we aren’t morons.

The methodology is exactly the same as is used countless times daily in all the sciences, namely, the probability of the observed under the null hypothesis.

It isn’t the math that is at issue here, it’s the operationalisation of the hypotheses.

In the above scenario, the null hypothesis (H0) was that the draw was done by the official method. The alternative hypothesis (H1) was that the draw was fixed to favour a particular team.

The probability of the observed data was, apparently, very low under the null, so the null could be rejected.

Now apply this to biology:

Normally, again, we have a null hypothesis and an alternative hypothesis, and we test the alternative hypothesis by seeing how likely our observed data are under the null. To do this we have to operationalise an appropriate null. And again, we do this all the time. To demonstrate that some alleles are advantageous, for instance, we need to show that the proportion of new alleles that go to fixation is higher than we’d expect under the null of every allele being neutral. Or to demonstrate that a bacterium has acquired a new antibiotic resistance, we need to show that the proportion of bacteria that reproduce successfully is greater than the proportion expected under the null of no new resistance.
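The allele example can be made concrete with a toy calculation. Every number here is invented for illustration (population size, mutations tracked, fixations observed); the only substantive ingredient is the standard population-genetics result that a new neutral allele fixes with probability 1/(2N):

```python
from math import comb

# Invented numbers, purely for illustration.
N = 1000                  # diploid population size
trials = 500              # new mutations tracked
observed_fixations = 5    # how many actually went to fixation

# Under the null (all alleles neutral) each mutation fixes with prob 1/(2N).
p_neutral = 1 / (2 * N)

# Tail probability of seeing at least this many fixations under the null.
p_value = sum(
    comb(trials, k) * p_neutral**k * (1 - p_neutral)**(trials - k)
    for k in range(observed_fixations, trials + 1)
)
print(p_value)  # far below conventional alpha -> reject "all alleles neutral"
```

With these made-up figures the expected number of fixations under the null is 0.25, so observing 5 is strong evidence against pure neutrality — exactly the shape of argument described above.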

And this is where ID goes (interestingly) wrong, IMO. Unlike most hypotheses regarding evidence for the supernatural, in which the miraculous is cast as the null (“science can’t explain it, therefore a miracle”), in ID the miraculous – or at least Design – is cast as the alternative hypothesis (H1).

This means that formulating the null correctly is extremely important: what is required is to characterise what we would expect to see under any other non-Design hypothesis. So unlike the scenario in the OP, instead of starting with a clearly stated null (whatever the official draw formula is), we have a null that begs the very question at issue: what non-design process might produce the observed data?

An example might be a gene that is shared across multiple species.

The Darwinist reasons, this is just too improbable to have happened by chance.

Therefore, they reason, it must be due to a shared common ancestor.

What is missed in this line of reasoning is that not only is the “chance” hypothesis rejected, but so is the “chance + natural selection” hypothesis. You know, that mechanism that is otherwise supposed to be so all powerful and capable of explaining anything.

And the Darwinian shell game continues.

So Darwinians use the chance + selection “explanation” when it is convenient, but even they have to admit of cases where it’s just too improbable as an explanation, even for them, so they switch to yet another “explanation.” An incredibly flexible theory.

Elizabeth Liddle:

This is false.

OK, can you explain?

There are about 5 billion people in the world. Picking two at random gives a probability of about one chance in 10^19 of picking that pairing. It is obviously a fake.

</parody>

When I look at the cited SI link, I see that they are using conditional probability, as is appropriate (and which rules out my crude calculation). If ID arguments were a little more careful about how they estimate probabilities, they might be taken more seriously.
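To make the parody’s point explicit: the 10^19 figure is the probability of one prespecified pairing, computed before the fact. A quick check (sticking with the comment’s 5 billion figure, which is the parody’s assumption, not a census fact) shows the arithmetic, and why it proves nothing on its own, since some pairing always occurs:

```python
from math import comb

N = 5_000_000_000  # the commenter's figure for world population

# Probability that one particular, prespecified pair is the pair observed.
p_prespecified = 1 / comb(N, 2)
print(p_prespecified)  # about 8e-20, i.e. roughly 1 in 10^19

# But *some* pair is always observed, so this tiny number is not by
# itself evidence of a fix; what matters is the conditional probability
# the SI article actually uses.
```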

Of course evolutionists use this type of reasoning themselves when defending common descent. The probability of the same mutation happening in separate species is so low they must have a common ancestor.

http://www.amazon.com/gp/product/B0037QGYFY

Mung, could you explain why you think my assertion is false?

Are you saying that Design, in ID arguments, is not cast as the Alternative Hypothesis to the null? In which case, how do you account for the Explanatory Filter?

It is, in effect, expressing the claim that if something cannot be explained, to a very high bar of probability, by Chance and Necessity, we must infer Design?

What is the Filter, if not the claim that if observed data are highly improbable under the null we can infer Design?

Just as, in the OP, if the observed data are highly improbable under the null of the official draw protocol, the authorities must infer cheating?

oops missed a close italic tag. Sorry.

Elizabeth,

The design inference is based on our knowledge of cause-and-effect relationships, in accordance with uniformitarianism. Meaning it is not the null, and we do not say “science can’t explain it, therefore design.” We say science can explain it and it is designed, using standard scientific methodology.

And the EF requires two things to happen: chance and necessity must be eliminated, and there must be some specification. Otherwise it goes into the heap of “we don’t know (yet)”.

I would say if the lottery were the best two out of three and Cleveland won the first two, there may be something else going on.

Joseph: I agree that in ID, Design is not cast as the null.

Mung appears to disagree with both of us.

I’d like to know why.

I think you should find out who he really is. The answer may surprise you…

No more outing, please!!!!

Elizabeth Liddle:

1. It’s not where design goes wrong. (Which, admittedly, is just your opinion.) You gave no argument to support it.

2. “It must be a miracle” is not the default choice. The default choice, or null hypothesis, is that it’s not a miracle.

3. Design is not the alternate hypothesis. Answer this question, if design is the alternative hypothesis, what is the null hypothesis?

See here for an example of the EF:

http://conservapedia.com/Explanatory_filter

There is no “not design” to which “design” is the alternative hypothesis.

I know I’ve explained that before, perhaps you missed it. I’m pretty sure I explained it to you, I could be mistaken.

ID theory cannot tell us if something is not designed. According to you, “not design” would be the null or default and “design” would be the alternative.

That’s not the way ID theory works. We can rule design in, but we cannot rule design out. Do you know why?

I think that pretty much covers everything in your statement. Did I miss anything?

Please, you’re killing my blood pressure. Stop already.

😉

It’s like every other thing you say about ID is mistaken. Why not start over here and at least try to get ID theory right?

I’m sure someone, lots of folks, would be glad to run through an example with you.

The Complete IDiot’s Guide to Design Theory

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

Q1: Design or Not Design?

What fix?

next paragraph following quote in OP:

“Yet truth is, events with a likelihood of one percent happen all the time in sports. We just don’t always appreciate the randomness. Now if the Isner-Mahut sequel manages to outstrip the original? Then we’d be within our right to suspect that the fix is in.”

P:

I see your onward clip: “truth is, events with a likelihood of one percent happen all the time in sports.”

Nope.

Truth is that which accurately reports reality, as unlikely or unknown as what actually happens is.

G

Mung, you aren’t making sense to me at all. I think it’s your (occasionally endearing, but mostly frustrating) sarcastic style. It’s obscuring your meaning.

I try to avoid sarcasm on the internet because it usually muddies the waters in a medium in which tone is inaudible. So I’m going to try and put this as clearly as I can, and ask you as clear a question as I can. I’d be grateful for a really straightforward answer.

In a simple experiment, perhaps to test the hypothesis that a coin is unfairly weighted, we would toss it, say, 1000 times.

Our hypothesis, often called the “Alternative Hypothesis” and denoted H1, is that the coin is unfair.

Our null hypothesis, denoted H0, is the opposite: that the coin is fair. In other words, our null, H0, is that our H1 is false. There is no excluded middle.

So we note the proportion of heads, p, in our 1000 tosses, and the proportion of tails, q. p will, of course, be equal to 1 − q.

The expected result, under H0, is that p/q will be near 1. The expected result under H1 is that p/q will be substantially different from one.

The binomial distribution tells us how likely a given observed ratio of p:q is under H0. If our observed ratio p/q is extremely unlikely under H0, we can reject the null, H0, that the coin is fair, and can infer that H1 is probably true – that the coin is unfair. However, if the ratio is quite likely under H0, even though p/q is not precisely 1, we “retain the null”. We do not consider the null “supported”, we merely consider it “unrejected”.
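The coin test just described can be run end to end. This is a minimal sketch, with invented toss counts; the two-sided p-value is computed the standard way for this symmetric case, by summing the probability of every outcome no more likely under H0 than the one observed:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with P(heads)=p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n):
    """Two-sided p-value under H0 (fair coin): total probability of all
    outcomes at least as 'extreme' (no more likely) than the observed k."""
    pk = binom_pmf(k, n)
    return sum(q for q in (binom_pmf(i, n) for i in range(n + 1)) if q <= pk)

# 530 heads in 1000 tosses: not improbable enough -> retain the null.
print(two_sided_p(530, 1000))
# 600 heads in 1000 tosses: wildly improbable under H0 -> reject the null.
print(two_sided_p(600, 1000))
```

Note the asymmetry Lizzie describes: a p-value around 0.06 for 530 heads does not show the coin is fair; it only means unfairness is undemonstrated.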

Now, if we take PaV’s example of the Blood of St Januarius, normally, the “miraculous” explanation is cast as the null, and a specific scientific hypothesis would be cast as H1. In other words, PaV tells us that if we can’t find a scientific hypothesis, we must retain the null.

For example, my scientific hypothesis might be that the church in which the vial is prayed for, and in which the liquefaction takes place, is cooler than the bank vault in which it is usually stored. As a result, since the vial is sealed, the temperature inside it will drop below the dew point a few hours after it is brought into the church, condensation will form in the vial and liquefy the blood. When it is replaced in the bank vault, the temperature will rise again, and the blood will dry.

That’s my H1. I can test this scientifically, by manipulating the ambient temperature of the vial, and observing whether liquefaction is temperature-dependent.

My null hypothesis is there is no relationship between ambient temperature and liquefaction.

I do the experiment: I subject the vial to an environment in which I adjust ambient temperature according to a randomised schedule, and I measure the liquefaction at a regular sampling rate.

I then plot liquefaction against temperature. If I observe a clear tendency for liquefaction to be observed at low temperatures, and solidity at high temperatures, with transition states at intermediate temperatures, then I can “reject the null”. However, if my observed data are fairly likely under the null (which we can estimate using an F test, or a t test, for example) then we can “retain the null”.
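That experiment can be sketched with made-up data. A permutation test stands in here for the F or t test mentioned (same logic: how probable is an effect this large under the null of no temperature relationship?); the measurements and group sizes are invented:

```python
import random

random.seed(1)  # reproducible illustration

# Invented liquefaction scores (0 = solid, 1 = fully liquid).
low_temp  = [0.80, 0.90, 0.70, 0.85, 0.95, 0.75]   # cool church
high_temp = [0.20, 0.10, 0.30, 0.25, 0.15, 0.35]   # warm vault

def perm_test(a, b, n_iter=10_000):
    """One-sided permutation test: fraction of random relabellings whose
    mean difference is at least the observed one. Small -> reject H0."""
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        diff = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if diff >= observed:
            hits += 1
    return hits / n_iter

p = perm_test(low_temp, high_temp)
print(p)  # tiny: reject the null of no temperature effect
```

And if the two groups had overlapped heavily, p would come out large and the null would simply be retained — not “supported”, as the surrounding discussion stresses.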

And PaV can consider his null not “proven”, or “supported”, but retained, in the sense that we are still without a non-miraculous explanation.

However, ID doesn’t work like this, which is its potential strength. In an ID hypothesis, Design (let’s leave out miracles for now) is cast as H1, and non-Design as the null (remember we must not exclude a middle).

And if a complex pattern is observed, the CSI computation allows us to determine how likely it is that a pattern of that degree of complexity, as one of a subset of patterns with that degree of compressibility, would be observed under the null hypothesis (H0) of non-Design.

And if that probability is sufficiently low (less than evens, IIRC, to have happened at least once in the entire number of events in the universe), then we can reject the null, H0, of non-Design, and consider Design supported.
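Written out schematically, the rule just stated looks like this. The cutoff is Dembski’s published “universal probability bound” of 1 in 10^150; the function and its inputs are illustrative inventions, not anyone’s published code:

```python
ALPHA = 1e-150  # Dembski's universal probability bound, assumed as the cutoff

def csi_verdict(p_under_null):
    """Schematic of the inference: Design is H1, non-Design is the null H0.
    Rejecting H0 supports Design; retaining H0 rules nothing out."""
    if p_under_null < ALPHA:
        return "reject H0: infer design"
    return "retain H0: design undemonstrated (not 'no design')"

print(csi_verdict(1e-200))  # improbable enough under the null
print(csi_verdict(0.028))   # e.g. the 2.8% draft odds from the OP
```

The second verdict illustrates the Fisherian asymmetry that dominates the rest of the thread: retaining the null is not a finding of “no design”.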

That is what I was trying to say.

I hope that it is now clear.

Could you tell me whether or not you consider the above false, and, if so, why?

Thanks

Lizzie

Hi Lizzie,

I asked a couple very simple questions. Could I have answers please?

You wrote:

My first question is in post #19:

I stated in my post what I thought the null would be, and from you post @24 I have no reason to think I was mistaken, but I would like to know what your response is to my question.

My second very simple question was posed in my post @20.

Please view the pattern and answer, design or not design.

If you want to apply what you think is the null and what you think is the alternative to that scenario feel free to do so, but it’s not necessary.

Now, since this is where you think ID goes wrong, I think it’s important to address it, and I am trying to provide a very simple explanation for why you are wrong.

It doesn’t help if you don’t play along. 😉

Thank you

p.s.

You are not required to answer one or the other if you can think of a different option.

Mung:

I’m not sure where in 19 you stated what the null would be. At one point you said that the “null hypothesis, is that it’s not a miracle”.

You then said:

“Design is not the alternate hypothesis. Answer this question, if design is the alternative hypothesis, what is the null hypothesis?”

My answer to that question is that the null is “non-design”.

My answer depends on what you want me to regard as admissible evidence.

Obviously, taking all the evidence I have into account, I can infer design, because it is sitting there in a post by Mung, and I have good evidence that Mung is an intelligent intentional designer.

If I found it, however, as it were, on a heath, and the letters are just ways of representing a repeating pattern with four elements, I would have no way of knowing. I’d probably guess non-design, if by that, you meant, not produced by an intentional designer. It could be a bit of geology, produced by some kind of cyclical process. Lake varves, for instance.

No, it isn’t necessary. The best way to tackle a problem like this is not to compare the predictions of a design hypothesis with the predictions under a null, because the null is huge. Better to compare it with a second alternative hypothesis (H2), e.g. some kind of geological process.

Right.

Now, I’ve had a go at addressing your questions – can you now address the question I posed to you in 24?

Thanks.

Elizabeth Liddle:

And that’s why you are mistaken about ID. In ID there is no “non-design” hypothesis. In ID “non-design” is not the null, and therefore “design” is not the alternative hypothesis.

Therefore, I am correct when I write that you are mistaken when you say:

And this is where ID goes (interestingly) wrong, IMO. … in ID … Design – is cast as the alternative hypothesis (H1).

Elizabeth Liddle:

My comment in #19 could not have been more clear, imo:

If “design” is the alternative hypothesis, “not design” would therefore be the null hypothesis.

“Not design” is not the null hypothesis, therefore it follows that “design” is not the alternative hypothesis.

You are miscasting ID as an argument about a null hypothesis and its alternative hypothesis.

Elizabeth Liddle:

It was a trick question.

The question was stated in the terms you have been using, with “not design” as the null and “design” as the alternative. That’s not the way ID works.

The first step is to eliminate regularity or natural law as a possibility. The pattern is a regular repeating pattern. We would ascribe it to necessity rather than chance (contingency). Design doesn’t even enter the equation.

See again the linked EF.

But you are right, the pattern was designed. I chose the first 6 letters of the alphabet, I repeated those 6 letters 6 times on each row, I did so over 6 rows.

So by ascribing the pattern to necessity, we have not ruled out design. This again demonstrates that not design is not the null hypothesis.

We have not said, “not design.” We have said, “necessity.” We have at most said, wrt the design question, “we don’t know.”

Are we on the same page yet?

Is there anything so far that you don’t understand, or that you disagree with?

Cheers

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

abcdefabcdefabcdefabcdefabcdefabcdef

Q1′: Chance (Contingency) or Necessity?

Mung, would you address my question at 24?

Thanks.

Elizabeth Liddle:

It’s misguided.

It’s not a matter of design pitted against non design.

Think of it as degree of confidence in the design hypothesis. We become more or less confident, but lack of confidence does not give us the basis upon which to say “not designed.”

Do you understand what I’m saying?

Do you disagree?

Regards

Crossposted:

Patience, dear. lol

I was getting to it.

But you should have been able to figure it out after reading my previous posts.

You are casting ID as non-design v. design.

That’s not the way it works.

Let me quote Dembski again:

We can fail to detect design, but from that we cannot conclude that design is not present.

Clear?

I really must go now. Work beckons

Lizzie,

Before Mung gets to this I wanted to just make a small comment:

“And PaV can consider his null not “proven”, or “supported”, but retained, in the sense that we are still without a non-miraculous explanation.

However, ID doesn’t work like this, which is its potential strength. In an ID hypothesis, Design (let’s leave out miracles for now) is cast as H1, and non-Design as the null (remember we must not exclude a middle).”

The fact that you can “leave out the miracles for now” with ID is why there’s the peculiar phenomenon of atheists and agnostics who support ID. It’s rare, but an agnostic in particular can remain so. I believe, however, that really thinking about ID’s implications would eventually drive an atheist supporter more towards agnosticism. What I mean is that you couldn’t accept that kind of evidence and still maintain, as Dawkins does, that “there almost certainly is no god.” Well, you could, but to me it wouldn’t seem at all reasonable.

Mung:

I accept that you believe this, but when we actually turn to the definition of CSI, and the explanatory filter, it turns out, as far as I can see, not to be the case. This is the point I tried to make at 24. I’ll try again:

Yes. Exactly.

Except that as far as I can tell, this is not the case.

huh?

By seeking to eliminate a hypothesis you are casting it as the null.

You do not seek to eliminate “H1” in frequentist statistics. You seek to eliminate H0.

No, it demonstrates the opposite. It means we have “retained the null” and failed to support Design (H1).

Retaining the null does not mean you have falsified (ruled out) H1; it means you have not supported it.

Well, apart from the fact that you are holding it upside down, yes

Yes.

The null is what we (potentially) reject. The alternative is what we consider supported if we reject the null.

If we do not reject the null (i.e. if we “retain the null”) we do not reject the alternative hypothesis (i.e. we do not “rule it out”); we merely, as I said, “retain the null”.

You seem to have made an understandable error about statistical terminology. Still, I hope we can straighten that out. It’s not a big deal.

http://www.null-hypothesis.co......hypothesis

Cheers

Lizzie

Yes, good point CY – in fact see my recent post on the extraterrestrial thread!

The penny dropped half way as I was writing the post.

You seem to have made an error about ID.

I haven’t made an error about statistical terminology. I pointed out early on that you were casting ID as a null (not design) and an alternative (design).

I made it quite clear that you are wrong to do so. That that is not how ID proceeds.

And who here, other than you, is claiming that ID is an exercise in applied frequentist statistics?

Frequentist statistics isn’t the only game in town.

see here

See my post at #21:

Did you ever answer?

You can say it must be one or the other; neither is not an option.

Or you can admit the possibility that I am correct, that it does not have to boil down to design or non-design.

Since I am the ID supporter, why wouldn’t you consider that I have a valid argument?

ME:

You:

huh?

I didn’t stutter. It either is the case that you are casting ID that way, or it is not the case.

If it is the case that you are casting ID that way, it either is the case or it is not the case that you are miscasting ID.

You are claiming that the ID argument takes the following form, true or false:

Either not design or design must be the case.

You wrote:

And this is where ID goes (interestingly) wrong, IMO. … in ID … Design – is cast as the alternative hypothesis (H1).

To which the null, according to you, is “not design.” Correct?

So there can be no doubt that this is how you are casting the ID argument.

I say you are wrong. ID is not cast as an argument of the form:

(H0 aka null hyp): NOT DESIGNED

(H1 aka alternative hyp) DESIGNED.

If I am correct, why is it not true that you are miscasting the ID argument?

Mung:

Without making undue references to simple statistics, we may look at two hyps to be rejected in succession in a context where three are reasonable, and backed up by direct observations.

If something has not got low contingency, whereby similar start conditions lead to similar outcomes (as driven by mechanical necessity), we have to deal with high contingency.

If the thing is highly contingent but not complex and specific, we would accept chance. But complex, specific, highly contingent events are not plausible chance outcomes and are, on massive support, explained by design.

A lot of this stuff is now going in circles with Dr Liddle.

I think we should just note for the record.

GEM of TKI

Mung:

Well, no, you have not made that clear. Actually, the way you describe how ID proceeds is, in fact, by casting it as H1.

No, I know it isn’t, and I think it is inappropriate for an ID hypothesis. Some kind of Bayesian format would be much better. However, the CSI calculation is a frequentist calculation, as is the EF. And in both, Design is cast as the null. At least, you haven’t persuaded me that it isn’t, merely asserted it, and you haven’t addressed my careful post in which I demonstrated that it is.

If you want to set up a hypothesis in which there is a third option, then feel free to do so. But I’m not seeing one in either the EF, or in the definition of CSI.

I’m sorry, I can’t parse your statement. What’s the antecedent of “it’s” and who is saying that “it’s [an] alternative hypothesis”?

No, I am not. I am saying that the ID hypothesis takes the following form:

If the probability of X under the null hypothesis of no-design exceeds alpha, we must retain the null, i.e. reserve the possibility of design, but consider it undemonstrated in this case.

If it is below alpha, we must reject the null and infer design.

Alpha seems normally to be set as a function of the estimated number of events in the universe.

That is how every formulation of CSI or the EF that I have seen is cast, and that is casting Design as H1, and no-design as the null.

That’s why the bar is set for Design to climb, not for no-Design to climb.

Yes.

There is, I hope, no doubt that this is the way that ID is cast. I’m not doing the casting. It’s not my hypothesis. I’m just saying that’s how it’s expressed in the CSI formulation, and that’s how it’s expressed in the EF.

I say you are wrong. ID is not cast as an argument of the form:

(H0 aka null hyp): NOT DESIGNED

(H1 aka alternative hyp) DESIGNED.

Well, I think you’ve mis-parsed it.

If you were correct, I would be miscasting it. However, as I am correct, it is you who are miscasting it.

Sorry, Mung

Game Lizzie.

kf:

Exactly.

Chance is the null (if X doesn’t reach the bar for improbability under the Chance hypothesis), and if we reject Chance (X too improbable under the null) we must infer Design.

See how it’s done, Mung?

Heh.

Having just gone off to bed with The Signature in the Cell, I just had to log back in again….

Mung, check out pages 178–193, on Dembski, Fisher (Fisher!) and Chance Elimination.

Meyer goes through the basics of Fisherian statistical testing (i.e. frequentist stats) and rejection regions and all, making it quite clear that “design” is in the rejection region, then, on page 188, comes right out and says:

Then he goes on a bit more about Dembski’s refinement of pattern recognition, but always defining it as the pattern that falls in the “rejection region” after deciding “how improbable is too improbable?” for the chance hypothesis to explain.

In other words, always casting Chance as the null, and Pattern (or Design) as H1.

So it seems that Meyer agrees with me

Elizabeth Liddle:

For now I’ll skip over your previous posts and concentrate on SitC.

Chapter 8:

Chance Elimination and Pattern Recognition.

The first thing that comes to mind is Chance Elimination.

Are you now claiming that CHANCE is the null hypothesis? Because ALL ALONG you have been asserting that NOT DESIGN is the null hypothesis. So you’re starting to come around? NOT DESIGN is NOT the null hypothesis?

Meyer:

Alternatives, not alternative.

Since you’re just starting Chapter 8, Chapter 7, where Meyer discusses “inference to the best explanation,” should be fresh in your mind.

Elizabeth Liddle:

I haven’t even tried to persuade you that it’s not. Why would I try to persuade you that design is not the null when you’ve been claiming that not design is the null and that design is the alternative hypothesis?

Elizabeth Liddle:

Now I do have to say you are making no sense. In the same post you have asserted that both A and NOT A are true.

You:

And in both [CSI and the EF], Design is cast as the null.

You:

That is how every formulation of CSI or the EF that I have seen is cast, and that is casting Design as H1, and no-design as the null.

So leaving that aside for now as an irreconcilable difference in your stated positions, let’s return to Meyer in Ch. 8 of SitC:

Multiple alternative hypotheses. It’s not no design as the null with the alternative being design.

At some point we have to eliminate lawlike processes. One might call them patterns of high probability.

Lizzie, it occurs to me that we may in some sense be talking past each other.

When I hear you say no design or not design, I am taking that literally, as a statement that the null hypothesis is that there is no design present. If, as you claim, design is the alternative, that would be the logical null hypothesis.

Is that what you mean, or do you mean by no design and not design that design may be there, but we just can’t tell? That seems illogical to me, but hey, stuff like that happens.

Mung:

Yes indeed, Mung

Right, now let’s try to get ourselves face to face….

Yes, but that is how you have to cast a null. It doesn’t mean that if you “retain the null” you have concluded that there is no design present, merely that you have failed to show that design is present.

It’s a subtle but important point. To take Meyer’s example of the roulette wheel: if the statistician employed at the casino shows that the pattern observed is not particularly improbable under the null of “nothing is going on” (cheating, wonky table, whatever), then that does not rule out “something going on”, but neither does it allow the casino owner to infer it.

It’s just one of the weirdnesses of Fisherian statistics.

Well, I’m saying that the way ID tests are usually cast is with Design as the null. As Meyer explains.

No, I mean what I said above. The null hypothesis is “no design”. However, “retaining the null” doesn’t mean “no design” it just means that design hasn’t been demonstrated.

Yeah, it’s weird, and it’s why Bayesian statistics often makes more sense, but Dembski goes with Fisher, so, hey

It’s interesting, and makes it different from, for example, PaV’s Blood of St Januarius argument.

However, it’s also its biggest flaw, IMO.

But first, let’s agree that that is the way it is

I have one vote from Meyer. I seem to have a vote from the OP. KF seems on board. Just waiting for you, Mung

Elizabeth Liddle:

Well, in that case we haven’t been talking past each other, lol.

Because I took that to be precisely what you were asserting.

If design is the alternative hypothesis (H1), as you claim, then the null hypothesis (H0) is “there is no design present here.” The null is the logical negation of the alternative.

Are we agreed so far?

If so, what then do you make of Dembski’s statement:

Which is not to say that Dembski goes with Neyman and Pearson’s extension of Fisher’s ideas.

Elizabeth Liddle:

Sorry to burst your bubble, but you should have read on.

Meyer, p. 189:

Mung, you are still missing the point I have stated clearly several times:

The whole Fisherian convention is that if we fail to support H1, we merely “retain the null”, we do not reject H1.

So Dembski is absolutely right to say that “When the Explanatory Filter fails to detect design in a thing, can we be sure no intelligent cause underlies it? The answer to this question is No. ”

That is absolutely standard Fisherian inference from failure to support H1 – you merely “retain the null”. You cannot be sure that H1 is not true, you merely remain without evidence that it is.

In fact, everything you say, including your further quote from Meyer, makes it clear: in ID methodology, Design is cast as H1.

Design is what falls in the “rejection region” under the null of “nothing going on here” to use Meyer’s phrase.

The passage from Meyer that you thought might “burst my bubble” does no such thing – it merely raises the bar for inclusion in the rejection region.

And I agree with your quote in 46 – I think Fisherian hypothesis testing is unsuitable for the task, but it is nonetheless the one Dembski uses, and therefore runs a very large risk of a Type II error, especially given his tiny alpha (which, according to one source, which I am not equipped to critique, is still too large, and if the right value was used, would render Type II errors inevitable and the Filter/CSI useless).

However, my own criticism of it is that because Design is cast as H1, it is absolutely vital to calculate the null correctly. And the filter gives us no way of calculating the null.

Mung?

Do you see what I’m saying here?

Hi Lizzie,

I think you missed my point about Neyman and Pearson.

So would you say that it is “chance” that is the null hypothesis, or something else?

Does rejecting the chance hypothesis get us to design?

You agree that Meyer relies heavily on Dembski, correct?

How does “inference to the best explanation” fit in with your beliefs about how the design argument functions?

From your linked source:

Now I understood you to say that design does not work like this. I guess I need to go back and re-read.

Mung:

“something else”.

i.e. “not design”.

Rejecting the null hypothesis “not design” gets us to design.

Yes.

I don’t have “beliefs about how the design argument functions”. I am simply pointing out that the EF and CSI both cast Design as H1.

Therefore they cast non-Design as H0.

That is why “Design” is in the “rejection region” of Meyer’s plot, and it is why the EF, in two stages, first rejects “Chance” (i.e. Necessity or Design are H1) then rejects “Necessity and Chance” (i.e. Design is H1).

Design is, in other words, in the rejection region of the distribution of patterns. It is H1.

This isn’t my “belief”, Mung, you can read it straight off the page. It shouldn’t even be controversial!

No, I’m saying that the EF and the CSI formula work exactly like this. They are set up to “refute” the null. And if you manage to “refute” the null (of no-design) you can consider your H1 (the “alternate hypothesis”) supported.

That’s why the EF is a “Filter” – it filters out the null junk and leaves you with Design.

Do we now agree that in the EF and the CSI formula, Design is cast as H1?

In which case, obviously, “no-design” is the null.

EL:

Then whose beliefs are you posting?

Dembski’s.

Elizabeth Liddle:

As I have been saying all along, that is not correct.

First, there are three stages to the EF.

Dembski:

Design:

Explanatory Filter:

http://www.uncommondescent.com/glossary/

for now I’m just going to post links. Perhaps summarize later:

http://www.arn.org/docs/dembski/wd_explfilter.htm

http://conservapedia.com/Explanatory_filter

http://conservapedia.com/image.....filter.jpg

Then why are you quoting Meyer?

Because he agrees with Dembski.

Mung, you keep asserting that I am wrong, then you quote stuff that demonstrates my point!

It’s actually a pretty trivial point, and I didn’t think it was going to be worth even making. I assumed that everyone agreed with it. I can’t believe I’m explaining how the EF works on UD!

OK, The Annotated Dembski, by Lizzie.

LizzieNotes: Our filter lets Designed things through, and keeps back non-Designed things.

LizzieNotes: If our observed pattern gets through all the filtering stages, we can infer it was designed. (Well, didn’t really need that, Dembski is admirably clear.)

LizzieNotes: is our observed pattern probable under the distribution of patterns expected of the laws of physics and chemistry? Note that this may not sound like a null at first glance but the key word is “probable”. Because it asks whether the pattern is “probable”, not “improbable”, we know it is a null. To support an H1 hypothesis (alternative) we need to show that it is “improbable” under some null.

LizzieNotes: This is the classic null: is our observed pattern probable under the null of “Chance”. See Meyer, as quoted above.

LizzieNotes: As this is where we end, if the observed pattern makes it through the filters, we answer “yes”. This is because Design has fallen into the “rejection region”, where neither “Law” nor “Chance” is a probable explanation.

And that’s absolutely fine. Dembski poses Design as the “alternate hypothesis” (H1) to the “null hypothesis” of no-Design, which he partitions into two sub-nulls: “Law” (at which point the rejection region is fairly large, and encompasses both “Chance” and “Design”) and “Chance”, which is conventional Fisherian stuff.

Are we in agreement yet?

The slightly misleading part is “law”.

To give an example that might help make sense of this:

Let’s say someone tells us they saw someone toss 100 consecutive heads. There are three possible explanations: The coin had two heads; it was an incredibly lucky series of throws; the guy had a special tossing technique and managed to consistently ensure that the coin always landed heads.

First of all we examine the coin.

Scenario 1: it has two heads.

What is the probability of 100 heads for a coin with two heads?

Answer: 1. We can retain the null, and consider that the hypotheses that a lucky chance meant that the coin always landed the same side up, or that the guy had a special throwing technique, are unsupported. Although both these things could still be true.

Scenario 2: it has a head on one side and tails on the other.

Is there a law that governs how a coin with heads on one side and tails on the other will fall? No, there isn’t. So we can actually skip that bit of the filter.

What is the probability of a 100 heads for a coin with heads on one side and tails on the other?

Well, I make it .5^100. Very improbable. So we can reject the null of Chance.

So we’ve made it through the filter. Design Did It. The man is a genius.

And Design was always H1. A bit weird for the first stage, but nonetheless, that’s how it works.
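The arithmetic of the two coin scenarios above can be sketched in a few lines; the alpha value here is my own illustrative rejection criterion, not a number from the thread:

```python
# Scenario 2: fair coin, 100 consecutive heads under the "Chance" null.
p_100_heads = 0.5 ** 100   # about 7.9e-31

# Scenario 1: a two-headed coin makes 100 heads certain.
p_two_headed = 1.0

alpha = 1e-6                 # illustrative rejection criterion
print(p_100_heads < alpha)   # True: the Chance null is rejected
print(p_two_headed < alpha)  # False: retain the null, no inference made
```

The asymmetry the thread keeps returning to is visible here: a probability of 1 under the null does not refute the other hypotheses, it merely leaves them unsupported.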

It’s easier for the CSI formula, because that lumps the two (Chance and Necessity) together, in effect, making them add up to “non-design”.

Dr Liddle:

I must confess my disappointment with the just above.

First, in neither the 1998 or so simple presentation (a slightly different form is here — nb under given circumstances if LAW is the driver, the outcome will be highly probable, if chance, it will be intermediate, if choice beyond a threshold of confident ruling, it will be highly improbable if it were assumed to be by chance) nor my more complex per aspect presentation is chance the first node of the filter, but a test for mechanical necessity.

That is precisely because high contingency is the hallmark that allows one to reject necessity as a credible explanation. If something has highly consistent outcomes under similar start points one looks for a more or less deterministic law of nature to explain it, not to chance or choice.

Once something is highly contingent, then the real decision must be made, and in that context the default is chance. Once something is within reasonable reach of a random walk on the accessible resources, it is held that one cannot safely conclude on the signs in hand, that it is not a chance occurrence.

Only if a highly contingent outcome [thus, not necessity] is both complex and specific beyond a threshold [i.e. it must exhibit CSI, often in the form FSCI] will there be an inference to design.

Or, using the log reduced form of the Chi metric:

Chi_500 = I*S – 500, bits that are specific and complex beyond a 500 bit threshold. Only if this value is positive would choice be inferred as the best explanation; essentially on the gamut of the solar system.

Here, raising the complexity threshold to 1,000 bits would put us beyond the credible reach of the observed cosmos.
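The log-reduced Chi metric as stated above can be written out directly; the function names and the example bit-counts are my own illustrative choices:

```python
def chi_500(info_bits: float, specific: bool) -> float:
    """Log-reduced Chi metric from the text: Chi_500 = I*S - 500 bits.
    I is the information measure; S is the 0/1 specificity dummy."""
    return info_bits * (1 if specific else 0) - 500

def chi_1000(info_bits: float, specific: bool) -> float:
    """The same metric with the 1,000-bit observed-cosmos threshold."""
    return info_bits * (1 if specific else 0) - 1000

# Choice is inferred as best explanation only when the value is positive:
print(chi_500(750, True))    # 250: positive, beyond the solar-system threshold
print(chi_500(750, False))   # -500: no specificity, no design inference
print(chi_1000(750, True))   # -250: below the cosmos-level threshold
```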

Why I am disappointed is that you have been presented with flowchart diagrams of the EF especially in the more complex per aspect form [cf Fig A as linked], over and over and this is the second significant error of basic interpretation we are seeing from you on it.

As you can also see, for an item to pass the Chi_500 type threshold, it would have to pass the nodes of the filter, so this is an equivalent way to make the decision.

This is why Dr Dembski’s remark some years back on “dispensing with” the explicit EF had a point. (Cf the discussion in the UD correctives. By now you should know that the objectors, as a rule, cannot be counted on to give a fair or accurate summary of any matter of consequence related to ID.)

Please, make sure you have the structure and logic of the EF right for next time.

GEM of TKI

I am quite familiar with the structure and logic of the EF. Indeed, I pointed it out.

All I am saying is that, in terms of Fisherian nomenclature (and it is a frequentist filter), Design is cast as H1, the alternative hypothesis.

That is an entirely neutral statement. One could regard it as a strength (although I personally think it leads to a flaw).

But the fact is that if you set up a hypothesis so that you make your inference by rejecting some other hypothesis you are casting that other hypothesis (or hypotheses) as the null, and your own as the “alternative hypothesis” aka H1.

Clearly, the filter is set up to allow us to REJECT Chance and Necessity, if the observed pattern passes through the stages, and infer Design.

That means, in other words, that if the pattern falls in the REJECTION zone, we infer that our Hypothesis is supported.

What is REJECTED in the rejection zone is the NULL. Ergo, Design is the Alternate Hypothesis.

Honestly, this really is Stats 101!

I’m astonished that it’s controversial. And in fact, kf, you agreed with it upthread!

You wrote (#37):

(my bold) In other words, if the observed pattern falls in the “rejection region” we consider Design supported.

No?

Or have you changed your mind?

What I tell my students:

We never “reject” our H1 in frequentist stats – we merely “retain the null”. In other words, even if we “retain the null” H1 remains possible, just not positively supported by the data.

However, we may “reject the null”. In that case we can consider our H1 supported.

So if a “filter” is set up to “reject” a hypothesis, the hypothesis set up to be “rejected” is, technically, called “the null”.

Dr Liddle:

Kindly, look at the diagrams as linked.

Compare your statements, and I trust you will see why we find your descriptions in gross error. And gross error relative to easily ascertained facts.

You may choose to disagree with the point in the EF, but surely where a start node in a flowchart lies, the flow through branch points [decision nodes], and the terminus in a stop point, — as the very use of the relevant shapes alone should tell — is plain.

GEM of TKI

Well, obviously I have not managed to convey my point, because I’m sure if I had, you would agree with it!

Let’s try another tack:

Do you agree that, if an observed pattern succeeds in making it right through the EF, that the answer to each of the first two questions must have been “no”?

(i.e. No to Law; No to Chance)

PS: If you look at the per aspect EF chart, you will see that there are two decision nodes in succession in the chart as shown. The first default is that there is a mechanical necessity at work, rejected on finding high contingency. Thereafter, as just stated above, the second is that the result is chance-driven, stochastic contingency. This is rejected on finding CSI. There is no inconsistency in my remarks, and that you think you see one shows that you are misreading what the flowchart and the remarks have been saying. Notice, again, there are two defaults: first necessity, then, if that fails, chance; and only if this fails by a threshold of specified complexity where chance is maximally unlikely to be able to account for an outcome will the inference be to design. Indeed, the filter cheerfully accepts a high possibility of missing cases of design in order to be pretty sure when it does rule design. That is, it is an inference to best explanation, with a high degree of confidence demanded for ruling design. Once there is high contingency but you cannot surmount that threshold, it will default to chance as best explanation of a highly contingent outcome. If the outcome is pretty consistent once similar initial conditions obtain, the default is mechanical necessity.

–> I do not know how to make this any clearer, so if someone out there can help I would be grateful.

No, I don’t think you are being inconsistent, kairosfocus, and what you have said here is exactly in accordance with what you said earlier.

If a pattern makes it through the filter it means we reject, in turn, necessity (because of high contingency), then chance (because of CSI). That allows the pattern to make it through to Design.

Correct?

Dr Liddle:

Yes, once the relevant criteria of empirically tested and reliable signs as the means of making those two rejections are also acknowledged.

However, it must then also be faced that there are many direct positive confirming instances — and a distinct absence of credible failures [e.g. a whole internet full of cases] — where we can see whether CSI and/or FSCI serves as a reliable positive sign of choice contingency. Which it does.

This is inference to best explanation on cumulative empirical evidence in light of positive induction, not mere elimination on what “must” be the “only” alternative.

GEM of TKI

William A. Dembski:

http://www.designinference.com....._Bayes.pdf

William A. Dembski:

http://www.designinference.com.....bility.pdf

#64 and #65 Mung

Do you understand that null hypothesis significance testing is a conceptual nightmare and only hangs on in statistics because of tradition? It is one of Dembski’s biggest mistakes to hitch the design inference to this. There are many papers on the internet describing this; here is one. To quote:

The null hypothesis significance test (NHST) should not even exist, much less thrive as the dominant method for presenting statistical evidence in the social sciences. It is intellectually bankrupt and deeply flawed on logical and practical grounds.

Mung, @ #64:

Thank you for posting that. Yes, it is possible to reject a null without having specified Alternative Hypothesis in detail. That is where the terminology becomes confusing, and perhaps that is where the communication difficulty has arisen.

An Alternative Hypothesis(H1) can be expressed as the negation of the null, just as H0 can be expressed as the negation of H1. The important thing is that there is no Excluded Middle. That’s why one is always expressed as Not The Other.

So we could express the Design Hypothesis as either

H0: Not-Design; H1: Design

Or we could express it as:

H0: Chance; H1: Not Chance.

Or even:

H0: Chance or Necessity; H1: Neither Chance nor Necessity.

It doesn’t matter. A null hypothesis isn’t called “null” because it has a “not” (or a “neither”) in it! And the Alternative Hypothesis can be as vague as “not the null”.

So let’s just call them A and B to avoid terminology problems for now:

In Fisherian hypothesis testing, you set up your two hypotheses (A and B) so that you can infer support for A if the probability of observing the observed data, if B is true, is very low.

However, if the probability of observing the observed data is quite high under B, we “retain B”. We do not rule out A.

So it is an asymmetrical test. We plot the distribution of possible data under B, and if the observed data is in one of the extreme tails of B, we conclude that “p < alpha” (where alpha is your rejection criterion) and A is supported.

So the way you tell which is H0 and which is H1 when reading a report of Fisherian hypothesis testing is to ask yourself:

Which hypothesis is supported if the observed data are improbable (i.e. have low probability)? That is your H1.

The other is your H0.

That is how we can tell that in Dembski’s EF, and indeed his CSI filter, Design (or, if you will, Not Chance Or Necessity) is cast as the Alternative (H1): because if the data are highly improbable under the null, Design (or Not Chance Or Necessity) is considered supported.

It’s all in Meyer.

Does that make sense now?

If so, we can go on to discuss why it might be problematic for Dembski’s method, but first let us be clear on how the method is parsed in Fisherian terminology!
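The asymmetric test described above can be made concrete with a worked example; the data (63 heads in 100 tosses) and the alpha are my own illustrative choices, not numbers from the thread:

```python
from math import comb

def binom_tail_p(n: int, k: int, p: float = 0.5) -> float:
    """One-tailed p-value: P(X >= k) under H0 = Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# H0 ("B"): the coin is fair. H1 ("A"): it is biased toward heads.
# We plot the distribution of data under H0 only, never under H1.
p_value = binom_tail_p(100, 63)
alpha = 0.05

if p_value < alpha:
    print("data in rejection region: reject H0, H1 supported")
else:
    print("retain H0; H1 is neither supported nor refuted")
```

Note the asymmetry: only H0 ever gets a probability distribution, so only H0 can be rejected; H1 is supported or left unsupported, never refuted.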

Dr Liddle:

This is a reversion to the already corrected, and is disappointing.

Let’s go over this one more time: the first default — notice I am NOT using the term null, as it seems to be a source of confusion — is NECESSITY. This is rejected if we have highly contingent outcomes, leading to a situation where, on background knowledge, chance or choice are the relevant causal factors.

We have whole fields of science in direct empirical support: necessity expressed in a law is the best explanation for natural regularities, but not for that which will not have substantially the same outcome on more or less the same initial conditions. A dropped heavy object falls at g. (The classic example used over and over and over again on this topic.)

The second default, if that fails, is chance. If we drop one fair (non-loaded) die, the value on tumbling — per various processes boiling down to classes of uncorrelated chains of cause and effect giving rise to scatter — will be more or less flat random across the set {1, 2, . . . 6}, as again has been cited as a classic over and over again.

If we move up to two dice, the sum will however show a peak at 7, i.e. a statistical outcome based on chance may show a peak.
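The two-dice point can be checked by simple enumeration (a minimal stdlib sketch):

```python
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair dice: each die is
# flat random, but the sum of the pair peaks at 7, as noted above.
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))

print(sums[7])            # 6 ways out of 36: the peak
print(sums[2], sums[12])  # 1 way each: the rare extremes
```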

Where we have a sufficiently complex set of possibilities [i.e. a config space of 10^150 – 10^300 or more] and we have results that come from narrow zones that are independently specifiable, especially on particular desirable function, we have good reason to infer to choice. For, on overwhelming experience and analysis, sufficiently unusual outcomes will be unobservable on the scope of our solar system or the observed cosmos, due to being swamped out by the statistics. The classic example from statistical thermodynamics is that if you see a room where the O2 molecules are all clumped at one end, that is not a likely chance outcome, as the scattered-at-random diffused possibilities have such overwhelming statistical weight.

But, on equally notoriously huge bases of observation, deliberate action by choice can put configurations into zones of interest that are otherwise utterly improbable.

Using yet another repeatedly used example, ASCII character strings of length equivalent to this post have in them more possibilities than the observable cosmos could scan more than an effectively zero fraction of, so there is no reason to infer that this post would be hit on by noise on the Internet in the lifespan of the observed cosmos. But I have typed it out by design in a few minutes.

(And, making yet another repeatedly used comparison, DNA is similarly digitally coded complex information in zones of interest inexplicable on chance, the only other known source of contingency. And, on yet another long since stale-dated objection, natural selection as a culler-out of the less successful REMOVES variation, it does not add it; it is chance variation that is the claimed information source for body plan level macroevolution.)

So, Dr Liddle, why is it that on being corrected several times, you so rapidly revert to the errors again corrected?

Do you not see that this looks extraordinarily like insistence on a strawman caricature of an objected to view?

GEM of TKI

PS: Re MF (who uncivilly insists on ignoring anything I have to say, even while hosting a blog in which I am routinely subjected to the nastiest personal attacks that face to face would be well worth a punch in the nose . . . if you picked the wrong sort of person to play that nastiness with), what I will say is that the many clever objections to reasoning by elimination too often overlook the issue of the match between opportunities to observe samples from an underlying population and the likelihood of samples catching very special zones that are small fractions of low relative statistical weight.

My usual example is to do a thought experiment based on a darts and charts exercise. Draw a normal curve on a large sheet of paper, breaking it up into stripes of equal width, and carrying out the tails to the point where they get really thin. Mount a step ladder and drop darts from a height where they will be more or less evenly distributed across the sheet with the chart on it.

After a suitable number of drops, count holes in the stripes, which will be more or less proportional to the relative areas.

One or two drops could be anywhere, but if they are inside the curve they will overwhelmingly likely be in the bulk of it, not the far tails.

But as you drop more and more darts, up to about 30, you will get a pattern that begins to pick up the relative area of the stripes, and the tails will therefore be represented by relatively few hits.

The far tails, which are tiny relatively speaking, and are special independently specifiable zones, will receive very few or no hits, within any reasonable number of drops.
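The darts-and-charts thought experiment can be run as a quick Monte Carlo simulation; the sheet width, dart count, and two-sigma cutoff here are my own illustrative parameters:

```python
import random
from math import exp, sqrt, pi

random.seed(1)

def curve(x):
    """Standard normal density: the curve drawn on the sheet."""
    return exp(-x * x / 2) / sqrt(2 * pi)

hits = {"bulk": 0, "far tails": 0}
n_under = 0
for _ in range(100_000):
    x = random.uniform(-4, 4)        # dart position across the sheet
    y = random.uniform(0, curve(0))  # dart height, up to the curve's peak
    if y < curve(x):                 # the dart landed inside the curve
        n_under += 1
        hits["bulk" if abs(x) < 2 else "far tails"] += 1

# The far tails collect only a few percent of the inside-the-curve hits.
print(hits["far tails"] / n_under)
```

With these parameters the tail fraction comes out near the familiar ~5% mass beyond two sigma, which is the "very few or no hits" point the text is making.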

So, we see the root of the Fisherian reasoning, which is plainly sound:

with statistical distributions, the relative statistical weight dominates outcomes within reasonable resources to sample. So, if you are found in a suspicious zone, that is not likely to be a matter of chance but choice. The rest is dressing up a basic common sense insight in mathematics.

Or, better, yet, statistical thermodynamics.

The design inference type approach refines this common sense and gives a systematic way to address what it means to be found in special and rare zones in extremely large config spaces.

I am now drawing the conclusion that the torrent of objections and special pleadings and reversions to repeatedly corrected errors that we so often see are not because of the inherent difficulty of understanding this sort of reasoning, but because the implications cut sharply across worldview expectations and agendas.

Kairosfocus, this is becoming a little bizarre!

I am not disagreeing with you!

Let’s go through your post:

Exactly. If we have highly contingent outcomes we REJECT something, namely Necessity. To put it differently, if the observed data are highly improbable under the hypothesis that Necessity produced the observed pattern, we reject necessity.

If I have this wrong, please tell me, but it seems to me I am saying exactly what you are saying.

I have no disagreement with any of that.

Exactly. If what we observe is extremely improbable under the hypothesis of Chance, we reject chance as an explanation.

Again, this seems to be what you are saying, and I wholeheartedly agree!

Yes, indeed, natural selection removes variation, it is not responsible for it. Again, I agree.

Because I don’t see anywhere where I have said anything that does not agree with what you are saying here! If I have, it can only be because I have been unclear.

Well, no, because I am happy to completely accept your account, in your own words.

My only point, and it is such a little point I’m amazed that we are even discussing it (and you haven’t even said you disagree), is that the way the analysis is set up is as a series of stages under which we REJECT a series of hypotheses (first Necessity, then Chance) if the observed data are very improbable under those hypotheses.

(Please can you tell me whether or not you disagree with this, because it is all I am seeing, and seems to me exactly what you are saying above.)

And my tiny (but essential for progress) point is that in Fisherian statistics, which is what Dembski and yourself are using, the hypotheses that are rejected if the pattern is improbable under them are called “null hypotheses”. A silly term, perhaps, but that’s what we use.

That’s all I’m saying – that the technical term for the Chance and Necessity hypotheses is “Null” aka H0, and the technical term for Design is the “Alternate Hypothesis” aka H1, in other words, what you are left with if you have excluded everything else.

Absolutely. And that “suspicious zone” is called the “rejection region”. We agree. And what is “rejected” is the null hypothesis. What is considered supported is the Alternative Hypothesis.

Therefore, in the ID version, Design is the Alternative Hypothesis, and Chance and/or Necessity is the Null.

I think you have just misread the conversation I was having with Mung. I assume you agree with the above, as you seem to be familiar with the quirks of Fisherian statistical nomenclature.

Yes indeed. The only point at issue is the name we give the hypothesis we regard as supported if a pattern falls into one of those “special and rare” zones.

Now that you know what this spat is about, I’m sure you will agree that what we call it is the “Alternate Hypothesis” aka H1, and the hypothesis that would then be rejected is the “Null” aka H0.

Yes?

Well, luckily for me, it will be clear to you by now that your premise is mistaken, due to some kind of communication error that I assume is now sorted out.

Cheers

Lizzie

Elizabeth,

You are the most patient person I’ve ever seen on the internet.

Thanks Lizzie!

Dr Liddle:

Kindly look back at your remarks above that excited my comment:

Do you see my concern?

There is no grand null there. There is a first default, necessity. On high contingency, it is rejected.

On seeing high contingency, the two remaining candidates, chance and choice, are compared on signs. On the strength of the signs, we infer to the default, chance, if the outcome is not sufficiently complex AND specific, and we infer to choice where there is CSI; not just on an arbitrary criterion but on an analysis [zone of interest in search space in context of accessible resources to search] backed up by empirical warrant. But what the chance default is really saying is the old Scotch verdict: case not proven. The BEST EXPLANATION is chance, but the possibility of design has not been actually eliminated.

Which is one of Mung’s points.

None of these causal factor inferences is working off a sample of a population and a simple rejection region as such, though the analysis is related. That is, high contingency vs natural regularity is not directly and simply comparable to where does your sample fall on the distribution relative to tails.

Similarly, while there is a zone in common, the issue on the zone is presence in a specifically describable and narrow zone in a config space large enough to swamp accessible resources [on the gamut of solar system or observed cosmos etc], AND an analysis with a base on positive, direct induction from sign and known test cases of the phenomenon, such as FSCI.

If anything, Fisherian type testing under certain circumstances is a special case of the design inference, where the alternative hyp for the circumstances is in effect choice, not chance.

For good reason, I am distinctly uncomfortable with any simplistic conflation of two or more of the three factors; that is why there are three decision nodes in the flowchart.

To make it worse, the chart is looking at aspects of a phenomenon, i.e. chance, necessity and choice can all be at work, on different aspects, and will leave different signs that we can identify and trace on a per aspect basis. As simple a case as a swinging pendulum will show scatter, and that is then analysed as additional effects that are not specifically accounted for and are addressed under the concept of noise.

GEM of TKI

markf:

Hi mark. Thanks for weighing in.

Please note, for the record, that I have been arguing against this characterization of Dembski and the EF.

Elizabeth (and now you as well) has yet to comment on Neyman and Pearson, even though I quoted you:

Which approach is Lizzie following, Neyman and Pearson, or Fisher? Shall we pretend it’s not relevant to the current debate?

Which approach does Dembski follow? Does Dembski follow Neyman-Pearson?

I have yet to find anything he has written framed the way that Lizzie claims. That’s why I’ve started citing him directly.

I’ve found no mention of “not design” as the null hypothesis. I’ve found no mention of “design” as the alternate hypothesis to the null hypothesis.

Feel free to quote Dembski if you think otherwise.

From your linked source:

On Neyman-Pearson:

Regards

F/N: best explanation for our purposes . . .

http://www.informationphilosop.....ation.html

So according to Dembski we need a pattern, but not just any kind of pattern. If there were a “null” hypothesis, wouldn’t it be “no detectable specification”?

Mung – I apologise, I should not have got involved with this particular discussion. Right now I don’t have anything like the time to do this subject justice – even my reference was a poor one.

F/N: Perhaps we could put it this way — why do lotteries have to be DESIGNED to be winnable?

(Hint, if the acceptable target strings are too isolated in the space of possibilities, the available search resources, predictably, would be fruitlessly exhausted; i.e. the random walk search carried out would not sufficiently sample the field of possibilities to have a good enough chance to hit the zone of interest. That is, despite many dismissals, we are back to the good old infinite monkeys challenge.)

OK, we are slowly reducing the blue water between us I guess.

Kairosfocus: yes, I understand that there is a two stage rejection process.

As long as we agree on that, it is fine.

The normal terminology is to call the hypothesis that gets rejected “the null”. But if you don’t like the term, fine. We can call it something else. Let’s use the term H0 for what I would call “the null” and H1 for “the alternative”.

The important thing is to recognise, and I think we all do, that in frequentist hypothesis testing (in other words, where you plot a probability distribution function based on a frequency histogram) you test one hypothesis against a second, but asymmetrically.

The test is asymmetrical because you plot the pdf of the probability of data under the first hypothesis, and note where the “rejection region” is for that hypothesis. And if your data fall in the rejection region, you consider your second hypothesis “supported”, and your first “rejected”.

However, if it does NOT fall in the rejection region, you do not “reject” the second hypothesis; you merely “retain” the first hypothesis as viable.

The first hypothesis – the one you plot the pdf of – is usually denoted H0, and the second as H1.

So in any frequentist hypothesis test there has to be a strategic decision as to which hypothesis is going to be H1 and which H0. It’s usually obvious which way round it should go.

And in none of the ID tests described in this thread is H0 Design.

Do we all agree on this?

Well, I could see why you might. I did quote your site, lol.

William A. Dembski:

Dr Liddle:

The design inference in general is a lot more complex than the sort of “it’s in the far-skirt zone” inference that is common in at least simple statistical testing cases.

The first stage decision on whether the population of samples shows a natural regularity that can then be traced on a suspected mechanical necessity driving it, is itself often a complex process.

The observation that instead we have high contingency on similar start points, raises issues on what explains contingency. And, the note on chance or choice, leads to the exploration of searches in configuration spaces and when we are dealing with inadequate resources to catch a zone of interest if relative statistical weights of clusters of configs are driving the outcome.

The default to chance is really a way of saying that there is no good reason to infer to choice, given the available resources and credible capacities of chance. That’s an inference to best explanation on warrant, with plausibility tipping this way or that, pivoting on complexity and specificity.

The decision that if this is a lottery it is unwinnable on clean chance, so the outcome is rigged, is bringing to bear factors like the quantum state resources of our solar system or even our whole cosmos, and its credible thermodynamic lifespan. These, to define an upper scope of search to compare to the gamut of a config space.

Notice, this is moving AWAY from a probability type estimate, to a search space scope issue. If there is not a credible scope of search, and you are in a narrow zone of interest, the analysis points to being there by choice as now the more plausible explanation.

And then, this is backed up by empirical testing on the credibility of particular signs of causes.

As in, use of symbolic codes with rules of meaning and vocabularies, the presence of multi-part functionality that requires well-matched parts, beyond an implied string length arrived at by identifying the chain of yes/no decisions to specify the object, etc.

In short, the sort of “did we arrive at the number of women in this firm by accident or by discrimination?” decision is at best a restricted and simple, though more familiar, case. (For one, there is a presumption of high contingency and lack of an explaining “law”; what if the problem is that women on average lack the upper body strength to succeed at this task, or have different career interests, etc.?)

The real deal has in it much more sophisticated considerations, and the natural home is actually the underlying principles of statistical thermodynamics as used to ground the second law; which is now increasingly accepted as bleeding over fairly directly into information theory.

Problem is, this jumps from the boiling water into the blazing fire below.

GEM of TKI

Mung:

Which, per the log reduction gets us to the simplified case:

Chi_500 = I*S – 500, in bits beyond a solar system threshold, where I is the information measure and S is the dummy variable for specificity.

1,000 coins tossed at random will have a high Shannon-Hartley info metric, but no specificity, so I*S = 0. 1,000 coins set out in an ASCII code pattern will have a lower I value due to redundancy, but S is 1, and Chi_500 will be exceeded.

You are maximally unlikely to see 1,000 coins spelling out a coherent message in English in ASCII, but that will not at all be unlikely to have been caused by an intelligence.

Or equivalently, sufficiently long code strings in this thread are on the SIGN of FSCI, most reasonably explained on design.

We can be highly confident that on the age of our solar system, such a coin string or the equivalent would never happen once, even if the solar system were converted into coins on tables being tossed, for its lifespan.
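For concreteness, the log-reduced metric just described can be sketched in a few lines of Python. This is only an illustration of the formula as stated in this thread (Chi_500 = I*S – 500); the function name and the idealised bit counts are mine:

```python
import math  # not strictly needed here; kept for extending with log-based I measures

def chi_500(info_bits, specific):
    """Chi_500 = I*S - 500, per the log-reduced formula in this thread.
    info_bits: Shannon-style information measure I, in bits.
    specific: dummy variable S (1 if independently specified, else 0)."""
    return info_bits * specific - 500

# 1,000 coins tossed at random: ~1 bit per coin, but no specification (S = 0).
print(chi_500(1000, 0))   # -500: threshold not exceeded, chance suffices

# 1,000 coins laid out as ASCII text: S = 1, and even allowing for the
# redundancy of English the I*S product stays above the 500-bit threshold.
print(chi_500(1000, 1))   # 500: threshold exceeded, design is inferred
```

The sign of the result is the whole filter: only a pattern that is both specified (S = 1) and beyond the 500-bit bound yields a positive value.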

I think a specific case like this is much more clear and specific on what we mean.

GEM of TKI

Mung:

Well, how you specify your null is critical to the validity of your hypothesis testing.

Much of Dembski’s paper is devoted to how best to specify the probability density function (of the expected data under the null), and how to decide on an appropriate rejection region (i.e. how to decide on “alpha”).

The interesting thing about the CSI concept is that it incorporates its own alpha. It isn’t the hypothesis as such, it’s what you call a pattern that falls in a special rejection region that is so improbable under the null of “no Design” that we can declare it not possible within the probability resources of the universe.

So, according to this paper the null is “no Design”.

Where specification comes in is to calculate the pdf of patterns under the null. It’s a 2D pdf though, because you have two axes – complexity along one, and specificity along the other. So we have a rejection “volume” rather than a rejection “region”. The “rejection volume” is the tiny corner of the “skirt”, as kairosfocus put it, where not only is complexity high (lots of bits), but specificity is high too (the patterns belong to a small subset of similarly compressible patterns).

So yes, I agree with kf, that it’s a lot more complicated than your common-or-garden 1D pdf with an alpha of .05, the workhorse of the lab. A thoroughbred, maybe

But it’s still Fisherian (as Dembski says) and it still involves defining a rejection region under the null hypothesis that your H1 is not true.

Fisher on steroids.
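For comparison, here is roughly what the “workhorse of the lab” looks like: a one-tailed test against a single 1D null distribution with an alpha of .05. A minimal sketch using only the standard library; the function names and the Normal(100, 15) example are mine, not from Dembski’s paper:

```python
import math

def normal_sf(z):
    """Upper-tail probability (survival function) of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def one_tailed_test(observed, mean_h0, sd_h0, alpha=0.05):
    """Classic Fisherian test: reject H0 if the observation falls in the
    upper-tail rejection region of the null distribution."""
    p = normal_sf((observed - mean_h0) / sd_h0)
    return ("reject H0" if p < alpha else "retain H0"), p

# Null: scores ~ Normal(100, 15). An observation of 135 lands in the far tail.
decision, p = one_tailed_test(135, 100, 15)
print(decision, round(p, 4))
```

The CSI construction described above replaces this single tail with a complexity-by-specificity corner, but the logical shape (null distribution, rejection region, alpha) is the same.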

heh.

Nice one

And, with two successive inferences made on different criteria. First necessity vs choice and/or chance, then chance vs choice; all, per aspect.

PS: I just saw some pretty nasty spam in my personal blogs, some of it trying to attack family members who have nothing to do with UD or online debates. That is a measure of what sort of amoral nihilism and ruthless factionism — as Plato warned against — we are dealing with.

–> MF, you need to tell those who hang around with you to back off.

Yes indeed.

Provide the quote, from the paper, please.

Hi kairosfocus,

I regret to hear that your family is being harassed. There may be laws in place to deal with the evildoers. Even more so if borders are involved.

Since I have my doubts that EL and I will come to a resolution aside from the direct intervention from the designer himself (aka William Dembski), I’m taking a tangential course.

If we had to say what the null was for the chi metric, what would it sound like? Surely it would not sound like “no design” or “not designed.”

So I’m trying to work my way towards putting Dembski’s mathematical CSI measure (or yours) into English and then stating the negation of it.

Something with a bit more meat on it than “no CSI” – lol. But maybe that’s enough. The null is not “no design” but rather “no complex specified information.”

What are the factors we need?

Semiotic agents, replicational resources, a relevant chance hypothesis (H), a target T, etc.

Want to help me deconstruct it?

I’m trying to come up with a way to move things along.

Elizabeth Liddle:

Whether a null is being specified at all is in doubt.

“This first step contrasts sharply with Fisher’s total neglect of an alternative hypothesis.”

Why would I think it’s any different for Dembski? Why would Dembski all of a sudden be trying to specify a null and an alternate? Did he turn his back on Fisher?

Why do you think Dembski would agree with you that “how you specify your null is critical to the validity of your hypothesis testing” with regard to his work?

Here’s how I see your argument framing up:

How you specify your null is critical to the validity of your hypothesis testing.

Dembski fails to properly specify the null.

Therefore the ID argument is not valid.

Well, I don’t accept your first premise. Or maybe your second. Who knows. We’ll work it out. Maybe that’s not even where you’re going.

But here’s what I say:

Dembski doesn’t even

tryto specify the null hypothesis.Dembski doesn’t even

tryto specify the alternate hypothesis.Or if he does it’s in ways much more subtle than you have yet to acknowledge.

Mung:

While there are detail debates over Fisher, Neyman, Pearson and Bayes, there is a general practice of hyp testing on distributions that essentially boils down to looking at whether results come up in the tails of distributions, whether normal, chi-square, etc.

I think it is fair comment to observe:

1: Since the presence of these random variable distributions implies high contingency, we are already applying one of the design filter criteria implicitly.

2: The sort of far-skirt rejection regions being used are an example of a separately specifiable, relatively rare zone in a distribution that, on the presumed search scope [think of my darts and charts example], you should not expect to land in with a given sample, to 95 or 99 or 99.9% confidence, etc.

3: So, there is a conceptual link from the hyp testing as is a commonplace practice in real world investigations, and the explanatory filter type inference.

4: Indeed, in the design by elimination paper of 2005, Dembski spoke of how the EF approach offered to help firm up some of the fuzzier, pre-theoretic concepts involved in hyp testing.

5: In short, once we move beyond the impressive algebraic apparatus to the practical applications of Bayesian type reasoning, we begin to see that it is not so neat, sweet and rigorous after all; indeed, the same sort of thinking that Bayesians like to criticise is slipping right back in through the back-door, unbeknownst.

6: In any case, the basic ideas shown in the darts and charts exercise, once we factor in the reasonableness of the independent specification of zones of interest, are plainly sufficiently well warranted not to be so easily brushed aside as some suggest.

7: And, moving beyond a probability calculation based approach, once we look at config spaces and the scope of search resources available in the solar system or observed cosmos — taking a leaf from our thermodynamics notes — we have a reasonable criterion for scope of search, and for identifying in principle what it would have to mean for something to come from a specific, complex and narrow enough zone of interest.

8: In particular, 500 bits specifies a space of 48 orders of magnitude more possibilities than the P-time Q-states of the 10^57 or so atoms in our solar system, where it takes 10^30 P-times to go through the fastest type of chemical reactions.

9: Similarly, 1,000 bits is 150 orders of magnitude more than the states for the 10^80 or so atoms of the observed cosmos.

10: In short on the available resources, the possibilities cannot be sufficiently explored on those ambits, to be sufficiently different from no search at all, to give any credible possibility of a random walk stumbling on any reasonably specific zone of interest.

11: And yet, 72 or 143 ASCII characters, respectively, are a very small amount of space indeed to specify a complex functional organisation or coded informational instructions for such a system. The Infinite Monkeys challenge returns with a vengeance. Indeed, the simplest credible C-chemistry cell based life forms will be of order 100 k bits up, and novel body plans for complex organisms with organs, etc. will be of order 10 – 100+ Mn bits.

12: With a challenge like that, the proper burden of warrant rests on those who claim that chance variations — whether in still warm ponds or in living organisms — culled out by trial and error, with success rewarded by survival and propagation to the future, suffice: they must empirically show that their models, theories and speculations not only could work on paper but do work on the ground. That burden has not been met.
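The arithmetic behind points 8 and 9 can be checked directly. A short Python sketch; note that the two state-count exponents (10^102 and roughly 10^150) are back-calculated from the comment's own "48" and "150 orders of magnitude" figures, so treat them as assumptions of this illustration rather than independently established values:

```python
import math

LOG10_2 = math.log10(2)

def config_space_orders(bits):
    """Orders of magnitude (log10) of the number of configurations
    of a bit string: log10(2^bits)."""
    return bits * LOG10_2

# Search-resource bounds implied by the comment's figures (assumptions):
SOLAR_STATES_LOG10 = 102    # Planck-time quantum states, ~10^57 atoms
COSMOS_STATES_LOG10 = 150   # Planck-time quantum states, ~10^80 atoms

# Excess of the 500-bit config space over the solar-system state count:
print(int(config_space_orders(500) - SOLAR_STATES_LOG10))    # 48 orders

# Excess of the 1,000-bit config space over the cosmos state count:
print(int(config_space_orders(1000) - COSMOS_STATES_LOG10))  # 151 orders
```

Note 2^500 is about 10^150.5, so the "48 orders of magnitude" claim follows only under the assumed 10^102 bound; the argument's force stands or falls with how that bound is justified.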

GEM of TKI

PS: I have alerted first level responders, and hope that onward MF et al will begin to realise what sort of bully-boy wolf-pack factions they are harbouring; and will police themselves. If I see signs of anything more serious than what has already gone on, or any further manifestations, I think that this will become first a web abuse then a police matter, perhaps an international police matter. (I suspect the cyber-bullies involved do not realise how stringent the applicable jurisdictions are. But already, such fascistic thought-police bully-boys have underscored the relevance of the observation that evolutionary materialism is amoral/nihilistic and the effective “morality” therefore comes down to you are “free” to do whatever you think you can get away with; we see here the way that the IS-OUGHT gap inherent in such atheism benumbs the conscience and blinds the mind to just how wrongful and destructive one’s behaviour is, after all you imagine yourself to be of the superior elites who by right of tooth and claw can do anything they can get away with. That’s just what led to the social darwinist and atheistical holocausts of the past century, and silence in the face of such first signs is enabling behaviour, so do not imagine that being genteel is enough when blatant evil is afoot.)

Mung, I was not quoting from the paper.

I read the paper, and the equations, and deduced the null.

It’s one of the things I do. You have to sometimes, when you read scientific papers, where the hypotheses are sometimes not expressed in so many words, and you have to figure it out from the math.

Dembski doesn’t talk about a two-stage process in that paper, and his null seems to be dubbed “Chance”. So I guess I could have called his Alternative Hypothesis “Not Chance”. But that would have been misleading, given that elsewhere he grants “Necessity” as an alternative to Chance, and he is not putting “Necessity” in the Rejection Region.

So let me be as neutral as I can, and say that Dembski’s H1 is “the hypothesis that Dembski considers supported when a pattern falls in the rejection region”, and Dembski’s H0 is “the hypothesis that Dembski considers a sufficient explanation for the pattern if the pattern falls outside the rejection region”.

However, as the hypothesis that Dembski considers supported if a pattern falls in the rejection region is Design, we can, by simple substitution, conclude that Design is H1 and no-Design is H0.

Mung @ 92:

I suggest you stop trying to anticipate “the way [my] argument framing up”. It’s causing you to see spooks behind every bush.

It’s also stopping you reading the actual words I write! These are simply not controversial (and you would, I’m sure, readily agree with them if you were not scared I was going to pull a “Gotcha!” with your agreement!)

No, it is not “in doubt” whether a null is being specified. Dembski talks at length about how to specify the distribution under the null. If that isn’t specifying the null, I don’t know what is. And if it isn’t, then he’d better go back and re-write his paper, because you can’t specify a “rejection region” if you don’t have a null to “reject”!

And it is true that, in null hypothesis testing, you actually only have to specify one hypothesis, because the other is, by default, its negation.

So Fisher is correct. But it remains worth specifying clearly both hypotheses, because you want to make sure that the inference you draw if you reject the null is the one you think you are drawing.

Of course he does.

Of course he does.

The entire paper is about specifying the null, and defining the rejection region.

Oh, it’s subtle. As kf says, it’s “Fisher on steroids” – not just any old piece of skirt (heh).

But in terms of expressing it in words, as Dembski is clear as to the inference he draws if a pattern falls in the rejection region (“Design”) then the word description of the null is also clear (“no-Design”).

What is subtle is not the names of the hypotheses, but the computation of the expected distribution under the null.

Hence all the fancy math.

However, I will give you a heads-up on my “argument” – my problem with Dembski’s paper is that, in fact, he is not entitled to draw the inference “Design” from a pattern that falls in the rejection region.

In other words, oddly, I agree with you: that all his paper does is set up a null distribution and a rejection region. Dembski does not, in fact, spell out very clearly what is rejected if we reject the null.

And I think he is wrong to reject “no-Design” because I don’t actually think the pdf he computes for his null is “no-Design”. I think it’s something else.

But let us first agree that Dembski computes a pdf of a null, and concludes that if a pattern falls in the rejection region, he considers his hypothesis supported.

Yes?

Mung,

It seems to me that much of the arguments about the EF can be bypassed if you were to simply give an example of it in use.

You obviously understand it in great detail.

If you were to give a worked example for, say, anything at all biological I imagine that would go a long way to clarifying the questions that Elizabeth has.

Darts and coin tosses are all very well, but I understood the claim that design has been identified in biological systems to be founded upon the EF, yet I’ve actually never seen an example of the EF as it relates to a biological system. Biological systems are a little bit messier than easily measured coin tosses, and I’m fascinated to see how it’s done.

And darts are not biological!

If such a biological example could be laid out on this thread that would be very illustrative.

Mung, up for it?

WilliamRoache, pardon but perhaps this;

Mathematically Defining Functional Information In Molecular Biology – Kirk Durston – short video

http://www.metacafe.com/watch/3995236

Measuring the functional sequence complexity of proteins – Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors – 2007

Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,,

http://www.tbiomed.com/content/4/1/47
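To give a flavour of the kind of calculation the excerpt describes (extending Shannon uncertainty with a functionality variable), here is a rough per-site sketch: functional bits as the drop from the null-state uncertainty (log2 20 for an unconstrained amino-acid site) to the uncertainty actually observed across an alignment of functional sequences. This is an illustration of the general idea only, not Durston et al.'s exact method, and the toy alignment is invented:

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon uncertainty (bits) of one alignment column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def functional_bits(alignment, ground_entropy=math.log2(20)):
    """Sum over sites of (null-state uncertainty - functional-state
    uncertainty), taking log2(20) bits per unconstrained amino-acid site."""
    columns = zip(*alignment)
    return sum(ground_entropy - site_entropy(col) for col in columns)

# Toy alignment of four 3-residue "sequences" (hypothetical data):
toy = ["ACD", "ACD", "ACE", "ACD"]
print(round(functional_bits(toy), 2))   # 12.15
```

Fully conserved sites contribute the full log2(20) ≈ 4.32 bits each; variable sites contribute less, so the measure rewards sequence constraint.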

f/n;

Stephen Meyer – Functional Proteins And Information For Body Plans – video

http://www.metacafe.com/watch/4050681

Intelligent Design: Required by Biological Life? K.D. Kalinsky – Pg. 11

Excerpt: It is estimated that the simplest life form would require at least 382 protein-coding genes. Using our estimate in Case Four of 700 bits of functional information required for the average protein, we obtain an estimate of about 267,000 bits for the simplest life form. Again, this is well above Inat and it is about 10^80,000 times more likely that ID (Intelligent Design) could produce the minimal genome than mindless natural processes.

http://www.newscholars.com/pap.....rticle.pdf

Book Review – Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.

Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren’t chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.

So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it’s a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.

http://www.fourmilab.ch/docume.....k_726.html

etc.. etc..

The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009

To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.

http://www.mdpi.com/1422-0067/10/1/247/pdf

Can We Falsify Any Of The Following Null Hypothesis (For Information Generation)

1) Mathematical Logic

2) Algorithmic Optimization

3) Cybernetic Programming

4) Computational Halting

5) Integrated Circuits

6) Organization (e.g. homeostatic optimization far from equilibrium)

7) Material Symbol Systems (e.g. genetics)

8) Any Goal Oriented bona fide system

9) Language

10) Formal function of any kind

11) Utilitarian work

http://mdpi.com/1422-0067/10/1/247/ag

================

Why the Quantum? It from Bit? A Participatory Universe?

Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: “In the beginning was the Word.” Anton Zeilinger – a leading expert in quantum teleportation:

http://www.metanexus.net/Magaz.....fault.aspx

bornagain77,

Thanks for the links etc. However I fail to see the relevance to my original question to Mung.

Would it be possible for you to clarify your point and how it relates to my question, preferably without links and in your own words?

Unless of course it is a link to a worked example of the Explanatory Filter for a biological entity!

I’d like just check we have agreement on a single point, regarding the Design Inference:

Design is inferred if an observed pattern is improbable under any other hypothesis.

Mung, kairosfocus? Can we agree on this?

If not, how would you amend what I have put in bold, above?

WilliamRoache,

The inference is automatic through the exclusion of chance and necessity of the entire material resources of the universe,,, another inference is available, but seeing your aversion for links,,,,

bornagain77

Ah, my apologies. From what I had read so far I understood the EF to be a tool that can be used to determine design/not design on an arbitrary object, including those of a biological nature.

I had not realized it was an “automatic” inference. I expect that’s because in a designed universe everything is designed? Or have I got that wrong?

Hi Lizzie.

At first glance I don’t think I can agree with this unless you can explain how it incorporates the idea of a specification.

It’s not sufficient that the pattern be improbable.

kf brings up an excellent point:

We should all know and accept that that is a very high bar indeed. Surely things that are in fact designed and events that actually do have an intelligent cause will be missed. In other terms, they will fall outside the rejection region.

Therefore, the null cannot logically be no design.

All issues of mathematics and hypothesis testing aside, does that help explain where I am coming from?

You are attempting to force Dembski into an illogical position.

So what you gave with one hand you took away with the other

I don’t see how we can do so and remain logically consistent.

I thought we agreed long ago that the null was the logical negation of the alternate. I’m just trying to understand the ground rules.

And can it be just any pattern, or does it need to qualify as a specification?

William,

We are discussing whether the EF and CSI are even valid forms of scientific inference. It would be a bit premature and somewhat an exercise in futility to apply them to an actual test case don’t you think?

How about we start it out like this:

“A design inference is warranted when …”

OK, so we have:

“A design inference is warranted when an observed pattern is improbable under any hypothesis under which a design inference is not warranted”.

OK? It’s a bit weird, but I guess it will do for now

Now, tell me how we calculate the probability of a pattern “under any hypothesis under which a design inference is not warranted”.

Feel free to C&P from Dembski if you like

H0: We do not have sufficient warrant to reach a design inference.

H1: We have sufficient warrant to reach a design inference.

William A. Dembski:

I’m sorry, but I don’t understand this request. I don’t see Dembski trying to calculate the probability of a given pattern. Does he?

Let me quote me (I love to hear myself talk):

oops, sorry Mung, hadn’t noticed that 103 was not your only post since mine.

OK:

Oh boy. gah.

Look, I’m not asking HOW you compute the improbability at this stage, I’m just asking whether you agree, in principle, that the shape of the analysis is: Infer design (or, if you prefer, “consider a Design Inference warranted”) if a pattern is improbable under some other hypothesis.

Surely you agree with this?

Then we can discuss (and we should) how we figure out whether the pattern is improbable. It’s the improbable part I want you to agree to.

And I can’t imagine you won’t, because it’s blindingly obvious!!!

So please, pretty please, just give me a yes?

Unless you really mean no, in which case, I think you might want to have a serious word with both Dembski and Meyer!

Oh, certainly. As I’ve said, approximately a gazillion times, just because a pattern is moderately probable under some other hypothesis doesn’t mean it wasn’t due to yours. I mean, five heads and five tails is perfectly probable under the null of a fair toss of a fair coin, but the pattern could also have been produced by me carefully laying the coins down in an order that happened to take my fancy.

So I’m NOT asking you to say: we can only infer Design if a pattern is extremely improbable under some other non-Design hypothesis. I’m simply asking you to agree that Dembski (and kairosfocus actually, and Meyer) are saying that IF a pattern is extremely improbable under some non-Design hypothesis, THEN we are warranted to infer Design.

OK?

Mung. Seriously.

Yes. It. Can. Please revise your elementary stats text books!

And let me remind you of Chapter 3 (the one after Means, and Standard Deviations):

1. The null hypothesis is the hypothesis that your alternative hypothesis is false.

2. If your observed data is improbable under the null, you can conclude that your alternative is supported.

3. If your data are perfectly probable under the null you CANNOT CONCLUDE that your alternative is false, even though the null is that your hypothesis is false.

So, it is perfectly logical to say that the “null can be no design” because having the null as no design does not allow us to conclude “no design” just because the observed pattern is reasonably probable under the null.

That’s why we say we “reject the null” or “retain the null”. We do not say we “reject the alternative”. Actually, we never “reject the alternative” under Fisherian statistics.
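The asymmetric decision rule in points 1-3 above can be written out explicitly. A sketch (the function and its wording are mine):

```python
def fisherian_decision(p_value, alpha=0.05):
    """Asymmetric Fisherian logic: rejecting H0 supports H1, but
    retaining H0 licenses no conclusion about H1 at all."""
    if p_value < alpha:
        return "reject H0: H1 (e.g. Design) is supported"
    return "retain H0: no conclusion - H1 is neither supported nor refuted"

print(fisherian_decision(0.001))  # far tail: the null is rejected
print(fisherian_decision(0.40))   # probable under the null: no verdict on H1
```

Note there is no branch that returns "H1 is false": that outcome simply does not exist in this framework.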

That is the point I’ve been trying to make!!!!!

Yes, it does. It explains to me that you have forgotten your elementary stats. But then I did know that. That’s why I’ve been giving you a nice stats refresher course on this thread. Trouble is, you’ve been so twitchy about where I might be going that you haven’t actually taken it in – so you are still trying to tell me that “no design can’t logically be the null, because if it was, that would mean that we concluded no design sometimes even where there was design”. Even though I’ve told you for the umpteenth time that we can’t do that with a null!!!!!

Sheesh.

No, I’m attempting to get you to see Dembski’s logic.

lol

Well, you’d better take that up with Fisher Not my fault guv.

Yes, it is. But there is a catch. The null is always the negation of the alternative (that’s why actually you only need to state one and you get the other for free), but the TEST is asymmetrical.

If a pattern is in the rejection region (rejection of the null) you get to claim your alternative.

But if it isn’t, you DON’T get to keep your null, I’m afraid.

If you want life to be fair, you have to go for Bayesian methods. But if you want Fisher (and, as Dembski says, there are good reasons for going with Fisher), then you have to put up with asymmetrical hypotheses.

Well, yes, but the playing field isn’t flat. It’s more like that gladiatorial game where one guy gets a net and a trident and the other guy gets a dagger. The retiarius and the secutor.

The advantage for the H0 (the “retiarius”) is that most of the distribution is for him – he can cast his net really wide. Poor H1 (the “secutor”) only gets this tiny “rejection region” to aim at – the bit the net doesn’t cover.

However, the secutor, H1, has a sharp dagger, and the retiarius, H0, only has a clumsy trident. If the secutor manages to hit the rejection region, it’s lethal to the null, I mean the retiarius. Whereas if the retiarius catches the secutor, all he can do is poke him a bit with the trident.

So it kind of works out fair

The thing is rigged in favour of the null, but the quid pro quo is that if H1 wins, he really wins – H0 is considered firmly rejected. If H0 wins, he still has to concede that H1 might have been true.

Cheers

Lizzie

Mung,

He is making an info beyond a threshold estimate, as the log reduction I have done shows.

As a part of that, there is a probability estimate, which makes the “chance” hyp usual in the I = –log P explicit.

In so doing, this has been a great occasion for the debaters to show off their objections, not realising that this same principle is embedded in essentially all probability-of-symbols based information metrics.

The whole point of the estimate is that if the info is complex beyond a threshold and specific, it comes from a zone of interest unlikely to be arrived at by chance, to such a degree that it is reasonable to infer to design as best explanation.

As I have shown based on VJT and Giem, 500 bits is a good threshold, and it is in fact a measure of something beyond the reasonable search resources of our solar system; our effective universe. (The next star over is several light years away. Absent discovery of something like the hyperspace discussed in so much popular science fiction, we will not be visiting such any time soon.)

GEM of TKI

I’ll take that as a yes, then, shall I, kf?

😀

Mung:

Ding ding we have a winner!!!!

And who would have thought that my first victory on UD would be persuading a UDist that Dembski was correct 😉

Now, I’m going to have beer, then bed, because I’ve had a hard day, but maybe tomorrow I’ll have a shot at persuading you that he is also wrong

Mung:

No, he’s doing a probability calculation for a class of patterns under the null hypothesis of no-Design.

He’s also, interestingly, computing a suitable alpha value (i.e. not only computing the pdf under the null, but the cutoff for the rejection region) by which we can be certain that the probability is so low, that there are simply not enough opportunities in the entire universe for it to have occurred with any likelihood worth mentioning.

Which is not to say it didn’t

But don’t let’s start THAT again….

BTW, kf and Mung:

Don’t mind me teasing – I’ve had a hard day

And it’s fun to argue of an evening, with a cat on my lap and the dew falling….

All in good fun Lizzie. I’m a cat lover myself. But here the dew rises ;).

Does this mean you liked my H0 and H1?

If so, I guess it demonstrates we can work together after all.

And I hope you’ll take note that I’m not all about just being disagreeable and have been trying to work towards finding something we can agree on. I’m not trying to stall the debate but rather to find ways to move it forward.

But I really do wish you would stop calling “no design” the null. I’ve repeatedly objected to that and with good reason. So if the null can be phrased without it why not do so? Why not say “within this region design is not distinguishable from other possibilities”? Because that is the actual fact of the matter when it comes to Dembski’s work.

There is a massive difference between not perceiving a thing and the thing not perceived being not present at all.

H0: Design is not distinguishable from other possibilities/hypotheses.

H1: Design is distinguishable from other possibilities/hypotheses.

But I really do wish you would stop calling “no design” the null. I’ve repeatedly objected to that and with good reason. So if the null can be phrased without it why not do so? Why not say “within this region design is not distinguishable from other possibilities”? Because that is the actual fact of the matter when it comes to Dembski’s work.

Mung, I do appreciate your post

And I’m not feeling quite so catty after 9 hours’ sleep :)

Yes, we are getting somewhere.

However….

Well, yes, you can refuse to describe your null in words, if you like However, that means that you cannot describe your H1 in words either! Remember that one is the negation of the other, so if you want to make an inference from support for H1 you need to articulate H1. And so, the null is “not H1”.

And thus, if you infer “Design” if your H1 is supported, you must also characterise your null as “no Design”.

Otherwise you will risk an unexcluded middle!

However, I do suggest you read my previous posts very carefully because I used all the pedagogical tricks at my disposal to try to explain to you why this does not create a problem for the inference that H1 is still possible even if H0 is not rejected. But I’ll try once more:

Yes. But Fisherian hypothesis testing does not allow you to distinguish between a thing not being present and the thing being present but not noticeably so.

This is a really important point (indeed, it’s almost my only point!) but you are still not seeing it.

You cannot prove a null. But we encounter it time and time again: studies repeatedly show that there is no evidence that vaccines cause autism; yet people insist that it might. And it cannot be ruled out, because you can never prove a null. You can prove (probabilistically) that the null should be rejected. But you cannot prove (even probabilistically) that the null is true.

H0: Design is not distinguishable from other possibilities/hypotheses.

Well, technically that is incorrect! Is what I’m saying. If the inference you draw from rejecting the null is that “Design was responsible for the observed pattern”, then your null hypothesis is that

Design is NOT responsible for the observed pattern. The inference you make from failing to reject the null is that “Design may have been responsible for the observed pattern, but we cannot reject the possibility that it was not.” That is why the correct phraseology is “fail to reject the null” (aka “retain the null”), not “prove the null” or “conclude the null”. So your two hypotheses MUST be mutually exclusive; however, the asymmetry comes in when you either reject the null (conclude that H1 is true) or retain the null (conclude that H0 may be true).

You never conclude that H0 is true!

As I say, it’s weird but it works. Fairly well, anyway.

Well, no. See above.

Feel free to ask any more questions

It’s a tricky concept, but important.

But the essentials of null hypothesis testing are:

Your two hypotheses (H0 and H1) have to be mutually exclusive – if one is true the other is false.

If you reject H0 you can conclude H1 is true.

If you retain H1 you cannot conclude H1 is true. But nor can you conclude that H0 is true.

But you are not alone in finding this counterintuitive!

Cheers

Lizzie

I think that’s a typo. Isn’t it “If you retain H0…”?

“If you retain H1 you cannot conclude H1 is true. But nor can you conclude that H0 is true.”

you meant …if you retain H0… right? otherwise i’m really confused

haha at least driver and i are paying attention

oops yes! Sorry

(note to self: never post before coffee).

Yes:

Should be:

If you retain H0 you cannot conclude H1 is true. But nor can you conclude that H0 is true.

Glad to see people aren’t snoozing at the back there

And let me know if you find any more. (Maybe I need a shot of Aricept in my coffee….)

Elizabeth Liddle:

When did I stop using words?

ok, you hadn’t had your coffee yet. I forgive you.

This whole debate right now is revolving around which words to use, not whether words should be used.

You insist that H1 must be “design” and that therefore the null must be “not design.” I beg to differ.

H0: We do not have sufficient warrant to reach a design inference.

H1: We have sufficient warrant to reach a design inference.

Is H0 the logical negation of H1 or not?

H0: It is false that we have sufficient warrant to reach a design inference.

H1: It is true that we have sufficient warrant to reach a design inference.

Notice the complete absence of the words “not design” in H0.

Why on earth would you say “ding ding ding” and then not mean it? Or what did you mean by it?

I reject the hypothesis which states that it is false [not true] that we have sufficient warrant to reach a design inference.

Therefore …

I reject the hypothesis that we do not have sufficient warrant to reach a design inference.

Therefore …

What is the logical negation of those statements, and why are the logical negation and its alternative not mutually exclusive?

You are mistaking your hypotheses for your inferences.

They are not the same thing.

Easily done, though

Here’s how it works:

H0: No design.

H1: Design

Inference if H0 is retained: We do not have sufficient warrant to reach a design inference.

Inference if H0 is rejected: We have sufficient warrant to reach a design inference.

In other words, you make your inference from your test of the null. Your null is not the inference you make if you retain it.

Subtle, but nonetheless important.

No, I’m not. It’s not called The Design Inference for nothing. The Inference to Design is what we’re allowed to make once we’ve tested the hypotheses.

Dembski:

Dembski:

Whether or not there is a specification is the hypothesis. The presence of a specification is what warrants the design inference.

I still don’t think you understand the argument, but hey.

Dembski:

Dembski:

I don’t know how Dembski could make it any more plain, or how you could fail to read him correctly given how plainly it is stated.

A design inference is what is warranted when:

H1: some thing or event exhibits high specified complexity

and therefore

H0: that unguided material processes could not have produced them with anything like a reasonable probability.

That’s Dembski’s argument in a nutshell. Thanks for all your help getting it out in the open and plain for all to see.

Dembski:

The specification underwrites the design inference.

It is the specification that needs “minimally, to entitle us to eliminate chance.”

It’s not “design” that eliminates the “null,” it is specification.

Therefore the null is not “no design”. At best, the “null” is no specification.

H0: We do not have a specification.

H1: We do have a specification.

How is H0 not the negation of H1? How are the two not mutually exclusive? IOW, how do they fail to meet your requirements for a null and alternate?

Given H1 we can reject H0, and the inference to design is then warranted. That’s Dembski.

Dembski:

What is it that transforms suspicion of design into warranted belief?

Specification.

How could “design” transform the suspicion of design into a warranted belief in design?

That’s just absurd.

It has to be something else, and that something else is, according to Dembski, specification.

Finally got a moment to indulge in one of my favourite discussions – the foundations of hypothesis testing. Of course I agree with everything Lizzie says – but I think there is a deeper point which is more relevant to ID. Dembski’s method is very similar to Fisherian hypothesis testing and shares many of its problems. And Fisherian hypothesis testing has many severe problems. For example,

1) It depends on outcomes that never happened.

The statistician Harold Jeffreys famously remarked:

“What the use of P implies, therefore, is that a hypothesis that may be true may be rejected because it has not predicted observable results that have not occurred. This seems a remarkable procedure.”

2) In some contexts the significance of an experiment can depend on the intentions of the experimenter and not just the results.

See this for an explanation of both.

But perhaps the most severe fault is that it does not consider whether H1 explains the data better than H0. The significance is the probability of the observed outcome falling into the rejection region given the null hypothesis H0. It is assumed that the probability of the observed outcome falling into this region is greater given H1, but no attempt is made to prove it, or to calculate what this value is or how much greater. The argument is purely: “it is very unlikely we should get a result this extreme if H0 is true – therefore H1 is true”. This neatly encapsulates the design argument and also one of its weaknesses. There is no attempt to even discuss whether the design hypothesis explains the data better.

As Cohen puts it – the Fisherian argument is like saying:

“If you are an American it is very unlikely you will be a member of Congress. X is a member of Congress. Therefore it is very unlikely X is an American.”

This fails because it has not considered the probability of being a member of Congress if you are not an American.
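Cohen’s analogy can be made numerical. A quick sketch (the population figures are rough and the arithmetic is mine, not Cohen’s; it only illustrates why a tiny P(data | H0) does not imply a tiny P(H0 | data)):

```python
# Toy numbers (approximate) illustrating Cohen's Congress example.
us_population = 330_000_000
world_population = 8_000_000_000
members_of_congress = 535          # all of them American

# P(member of Congress | American) -- tiny, so a naive Fisherian
# argument would "reject" the hypothesis that X is an American.
p_congress_given_american = members_of_congress / us_population

# But the comparison that matters is with the alternative:
# P(member of Congress | not American) is zero.
p_congress_given_not_american = 0 / (world_population - us_population)

# Bayes' rule: P(American | Congress) = 1.0, despite the tiny likelihood.
p_american_given_congress = (
    p_congress_given_american * us_population
) / (
    p_congress_given_american * us_population
    + p_congress_given_not_american * (world_population - us_population)
)

print(p_congress_given_american)   # ~1.6e-06: "very unlikely"
print(p_american_given_congress)   # 1.0: yet X is certainly American
```

The asymmetry between the two printed numbers is exactly the gap the Fisherian argument ignores.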

The design argument says – an outcome (e.g. bacterial flagellum) is incredibly unlikely given certain “Darwinian” assumptions about how life evolved. The bacterial flagellum exists. Therefore, these Darwinian assumptions are false.

It never stops to consider the probability of the bacterial flagellum existing given certain assumptions about design because it forbids formulation of a design hypothesis.

Mark F: Pardon an O/T, but it seems to me that you and your ilk have some fairly serious ‘splainin’ to do, as the mess I just linked on had its roots in your blog.

G’day

GEM of TKI

PS: After so many years, you are still not straight on what the design inference is and tries to do, how. Your presentation above is a strawman, I am afraid; surely after all these years you can do better. It would help to start by summarising in the design thinker’s own words, a true and fair view of what they are saying.

Then why are you contradicting her?

Restate your examples and argument as a null hypothesis and an alternative hypothesis and make sure the two are mutually exclusive.

Perhaps you two should go have a chat over a beer

Mung:

Yes.

Well, in your quotes above, Dembski actually specifies the null: Chance.

So in this case, H0 is “Chance” and H1 is “not Chance”.

I was avoiding that one, because I’m not sure what we are supposed to do with Necessity, but I’ll leave that to you. Anyway, it doesn’t matter, because here Dembski seems to be saying that if we reject (“eliminate”) Chance we can infer Design.

So he seems to have H0 as Chance and H1 as Design, where Chance and Design are the only two possibilities and mutually exclusive.

The point remains that your H0 and H1 have to be mutually exclusive; however retention of H0 does not exclude H1, although rejection of H0 excludes H0 (that’s why we say it is “rejected”). So your inferences do NOT have to be mutually exclusive (and aren’t).

That’s fine. I’m not (right now!) attempting to refute Dembski’s argument. I’m attempting to point out Mung’s confusion between an inference and a hypothesis. Oh, and an alpha value.

This: “some thing or event exhibits high specified complexity” is not a hypothesis (in this context). It’s the test of a hypothesis. It’s actually the definition of the rejection region, i.e. the alpha value.

This: “that unguided material processes could not have produced them with anything like a reasonable probability” is not a hypothesis. It’s an inference made from the test of a hypothesis.

Listen:

Complex Specified Information (CSI) is a pattern that falls in the rejection region under the null, and the rejection region itself (the alpha cutoff) is part of the definition of CSI.

IF a pattern falls within the rejection region under the null, it is regarded as possessing CSI. So the hypothesis isn’t: “this pattern has CSI”. The hypothesis is either: “This pattern was Designed”; or “This pattern was not due to Chance”. The test of that hypothesis is whether the pattern has CSI.

Otherwise the whole thing would be circular; it would be saying: Oh, look, this pattern has CSI, therefore it falls in the rejection region, therefore we can infer Design. But as you can only conclude that it has CSI if it falls in the rejection region, you are back to square one!

As for your H0, it’s even more circular! (Also, I think you made a typo – your turn for coffee I think.) You can’t include the alpha cut-off probability (“reasonable probability”) as part of your null hypothesis! It’s completely incoherent.

You were so close earlier!

The whole reason for this null hypothesis malarkey is that you have to have a probability distribution, the tail or tails of which form the rejection region for your null.

So a null is useless if we cannot construct from it a probability distribution for the class of event we are trying to investigate.

So we can construct a probability distribution fairly easily for, say, percentages of heads in 100 coin tosses, under the null that the coin is fair. And if the observed percentage falls in the rejection region of that distribution, we can reject the null of a fair coin, and infer skulduggery.
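That coin example can be run end to end. A minimal sketch in Python (standard library only; the 65-heads figure and the α = .05 cut-off are made-up numbers for illustration):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with P(heads)=p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n, p=0.5):
    """Two-sided p-value: total probability, under the null, of outcomes
    at least as far from the expected count n*p as the one observed."""
    observed_dist = abs(k - n * p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if abs(i - n * p) >= observed_dist)

# 65 heads in 100 tosses, under the null that the coin is fair:
p_value = two_sided_p(65, 100)
alpha = 0.05

print(p_value)           # ~0.0035, well inside the rejection region
print(p_value < alpha)   # True: reject the null, infer skulduggery
```

The rejection region here is just the two tails of the binomial distribution whose total probability under the null is below alpha.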

Dembski’s null is very complex, and seems to have two dimensions, as I said above – complexity and specificity. Patterns that are both complex and specified are those that fall in a very small tail of the skirt, defined by the CSI formula.

So we have our pdf under the null, and we also have our rejection region, defined by the CSI formula.

And if a pattern falls in that region, we reject the null (either “Chance” or “no Design”) and accept our H1 (Not Chance, or Design).

If it does not, we retain the null, which means we are not entitled to infer Design, though it could still be responsible for the pattern.

But note: I am not disagreeing with Dembski. This is what he is saying. And it’s fine. (Well, it is statistically parsable, and in principle it would be much more powerful to cast Design as the null.)

My only beef is with you

Mung:

Yes, because it is part of the definition of the rejection region under the null.

Yes, because it is part of the definition of the rejection region under the null.

Yes, exactly. Because specification is part of the definition of the rejection region under the null, and is thus the criterion by which we reject the null. And if we reject the null, we can infer design.

Aaaaaaarrrrrrgggghhhhhh!!!!!!

nope.

Oh, they’d meet the mutual exclusion criterion OK, it’s just that now you are stuck with no definition of your rejection region because you blew it all on your hypotheses!

Nope. Dembski took stats 101

Will sleep on this. Will try to come up with an explanation that will hit the spot with an engineer. I’ve cracked tougher nuts.

But this time I’ll have coffee before I try

Hi Lizzie,

Your problem is that you’re attempting to frame the null in terms of the alternative. It’s the other way around. 😉

True, I never took Stats 101. It follows that I never failed Stats 101 :). And I think Dembski went far beyond Stats 101.

I have my “Statistics for Dummies” book, and some others as well:

So if the rejection region is the alpha value, what’s the p-value?

you:

The way I see it, it [the specification] is not part of the definition of the rejection region, but must fall within the rejection region.

The rejection region is too improbable, given all available non-intelligent resources.

The specification is that extra bit that’s required within the range of the too improbable and warrants the inference to design.

I’m pretty sure that is what Dembski says.

What allows us to reject Chance and to infer design? Specification.

You mentioned Chance as the null a few times I think. If you review I bet I did not object.

Now I ask you to consider carefully what you wrote:

“in your quotes above, Dembski actually specifies the null: Chance. So in this case, H0 is ‘Chance’ and H1 is ‘not Chance’.”

Ding ding, we have a winner!!!!

Let us stop, pause, reflect. I assure you, you will find it hard to take back those words.

No. The null is chance. The alternative is not chance. You don’t get to change the rules all of a sudden.

That leaves only one more thing to address:

How long have you been disputing/debating Dembski? At least 4 years, right? And you don’t know how he addresses necessity in the context of chance as the null?

To me, this speaks volumes, for it is not as if Dembski has not addressed this very issue.

I’m sorry. But I just got a mental picture of you banging your head on a table and I burst out laughing.

It’s not my intent that you cause yourself physical, mental or emotional harm.

#130 Mung

First – I apologise for accidentally repeating my entire comment in #128 above.

No I am not contradicting Lizzie. She is trying to rephrase Dembski’s work in terms of classical hypothesis testing. I am saying that indeed it is possible to rephrase Dembski’s work this way (or almost). However, classical hypothesis testing itself has enormous conceptual problems which Dembski’s method shares.

I absolutely agree that the null hypothesis (H0) for Dembski is “chance”. H1 can be expressed as either:

* Not chance

Or

* Design or necessity

The rejection region is not as clear as it might be but is something to do with low Kolmogorov complexity (according to his most recent paper on the subject).

But underlying this is the common problem with both hypothesis testing and the design inference. The underlying argument for both is:

Given H0 it is extremely improbable that outcome X would fall into the rejection region. X falls into the rejection region.

Therefore H0 is extremely improbable. (This may be phrased as “therefore we are justified in rejecting H0”).

This has exactly the same logical form as:

Given that X is an American it is extremely unlikely that X will be a member of Congress. X is a member of Congress.

Therefore it is extremely unlikely X is an American (or alternatively “therefore we are justified in rejecting the hypothesis that X is an American”).

In practice classical hypothesis testing often gets away with it because there is an implicit assumption that outcome X falling into the rejection region is more probable under H1 than under H0. But this is just likelihood comparison sneaking in through the back door – which ID cannot handle because it requires examining the probability of the outcome given design.
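That back-door likelihood comparison can be shown with toy numbers (both likelihoods below are hypothetical, picked only so the contrast is visible):

```python
# Hypothetical likelihoods of the same observed outcome under two hypotheses.
p_data_given_h0 = 0.001    # "very improbable under the null"
p_data_given_h1 = 0.0001   # ...but even more improbable under the alternative

# Fisherian reasoning looks only at the first number:
alpha = 0.05
rejects_h0 = p_data_given_h0 < alpha     # True -- null rejected

# Likelihood comparison looks at the ratio of the two:
likelihood_ratio = p_data_given_h0 / p_data_given_h1

print(rejects_h0)                 # True
print(round(likelihood_ratio))    # 10: the data favour H0 over H1 ten to one
```

So a test that only ever examines p(data | H0) can reject H0 even when H0 explains the data better than H1 does, which is exactly the objection being made to the design inference.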

Mung:

Good morning! I have my coffee beside me (although my cats have gone outside for their daily half hour butterfly chase).

And the bruise is quite small.

Right.

Either is fine with me, as long as one is Not The Other.

Yes indeed, so you have a little catching up to do. That’s fine. I didn’t know what a standard deviation was until I was 49, and I’ve never looked back.

Good question. It’s amazing how many students fail this question on exam papers.

But I won’t: the p-value is the probability of your observed data under the null hypothesis. If that value is less than your alpha value (e.g. α = .05) you can reject the null.

Exactly. This is how we know that however we phrase the hypotheses in Dembski’s work, the H1 is the one that allows us to infer Design, not H0, because what emerges from calculations where Design is inferred is a very low p value.

But I think we agree on that now. Yes?

Well, it’s possible I’m misunderstanding Dembski here, but I don’t think so.

Let’s go back to basics. Take the classic example of a deck of cards.

1: Any one sequence of 52 cards has a tiny probability; however, if you lay out 52 cards you are bound to get one of them, so there’s nothing odd about the one you get having a tiny probability. The sequence is complex (and all sequences are equally complex, i.e. have equal amounts of Shannon Information, which I could give you the formula for but can’t type in html – it has factorials and logs in it though), but is not specified.

2: However, if you specify a sequence in advance, get someone else to shuffle the pack, and then that person lays out that exact sequence, that is quite extraordinarily improbable under the null hypothesis that every sequence has equal probability. So there is something very fishy about the process. We can reject the null. And we say that the pattern has “Specified Complexity”.

3: Let’s say you don’t specify the sequence in advance, but you say there is a class of sequences that have something special about them. All the sequences that have the suits in sequential order, and the cards within each suit as A 2 3 4 5 6 7 8 9 10 J Q K, for example. There are 4! such sequences (i.e. 24). So getting one of the specials is slightly more probable than a single specified sequence. Therefore, if you see any one of them dealt, you have reason to be suspicious. And perhaps one might also include variations – the sequences in reverse, for example, or: all the aces, all the twos, all the threes, etc. So, if we can find a way of describing a subset of all possible sequences that have some sort of special quality, then getting any one of them is a little more probable than getting a single specified example. And many of Dembski’s papers involve some kind of definition of that subset – often in terms of Kolmogorov compressibility.
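The numbers behind the deck example are easy to check: there are 52! orderings, so any single sequence carries log2(52!) ≈ 226 bits of Shannon Information, and the “special” class above contains 4! = 24 members. A quick sketch:

```python
from math import factorial, log2

orderings = factorial(52)       # number of distinct deck sequences
bits = log2(orderings)          # Shannon Information of any one sequence
p_specified = 1 / orderings     # probability of one pre-specified sequence

# The 24 "special" sequences (4! suit orders, each suit in rank order):
specials = factorial(4)
p_any_special = specials / orderings   # slightly larger, still minuscule

print(round(bits, 1))    # 225.6 bits
print(specials)          # 24
```

This is the formula alluded to above: the information content is just the log of the factorial, and widening the specification from one sequence to 24 barely dents the improbability.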

That shrinks the rejection region a bit, because the larger the number of patterns that exhibit this “Specified Complexity” under the null, then the greater the probability of one of them coming up under the null of “nothing fishy going on”.

So the specification (the class of patterns that we would regard as specified) is very important, not because simply being specified is enough to allow us to reject the null, but because we need to know the expected frequency of members of that class under the null (remember this is frequentist statistics we are doing here), in order to figure out how improbable any member of that class is under that null.

So that part of the process is part of computing the pdf under the null, which, as I said, is quite complicated because it has two dimensions – Shannon Information content and something like compressibility (interestingly these dimensions are not orthogonal, but they are not collinear either, which is why the CSI concept is interesting). For the pack of cards it is easy, because there is no variance along the SI axis (all sequences have equal SI) and the only variance is along the compressibility axis.

However, for patterns in nature, both axes are relevant.

But having defined our pdf under the null, we now have to set the alpha value, and, again, Dembski’s definition of CSI actually includes that alpha. So the presence of CSI is not evidence that the null is rejected – it’s what we declare the pattern to have IF the null is rejected. So to actually set about checking to see whether we can reject the null we have to unpack CSI and place its parameters in the right places in the test!

So, thinking about this (over coffee, hope I don’t regret this), yes, there is a sense in which “CSI” is our H1. But it’s a rather strange H1 – it’s a bit like saying our H1 for a series of coin tosses is “a pattern that is more improbable than my alpha under my null”. And you still have to unpack all that before you do your test! Far better to say that H1 is the hypothesis that the coin is not fair, that you will set an alpha of p=.01, and that under your null heads and tails have equal probability.

Then things are clear. I think Dembski is clear. ish.

(The reason for the -ish, is that in the bits you’ve quoted recently, he regards Chance as the null – explicitly, as does Meyer, but elsewhere he also includes Necessity. This is a problem.)

No. The rejection region is defined as an improbable region. What is “improbable” is that a given pattern would fall within it, given all available non-intelligent resources. If a pattern did, it would be reasonable to reject “non-intelligent sources” as a likely explanation.

In which case “non-intelligent sources” is your null and “intelligent sources” is your H1

But we are getting there

Well, sorta, but sorta not. What I said is closer.

Well Specified Complexity, sure. That’s the tail (well, corner-of-skirt, as there are two dimensions) of your pdf under the null.

I agree with you that all these bits are important! I’m just trying to assign them the right roles in the hypothesis-testing procedure.

OK, fine. As long as you aren’t worried about Necessity as an non-design alternative to Chance, that’s fine. After all, I don’t think Dembski uses the EF any more, does he? The EF is a sequential rejection of two nulls in turn. But one is fine with me

No, that’s fine. It isn’t my hypothesis after all! I’m fine with that. I’m not sure that kf is, but it’s fine with me.

No. The null is chance. The alternative is not chance. You don’t get to change the rules all of a sudden.

Mindful of that bruise on my forehead, I’m going to be very careful here ….

Mung:

I’m not the one changing the rules.

Here are some choices of hypothesis pairs, with their corollaries:

1. Chance is the null and “not Chance” is H1. This is fine, but we can’t therefore infer “design” from a rejection of the null unless we assume that all not-Chance causes must be Design causes. In which case, you could rewrite H1 as “Design”. If you don’t assume this, then fine, but then you can’t infer Design. Could be something different, e.g. “Necessity”.

2. Chance is the null, and “not Chance” is H1. Then, if Chance is rejected, “Necessity” is the new null and “not Necessity” is H1. And, as the only alternative to Chance and Necessity is Design, you might as well write your H1 as “Design” in the first place. This is the EF.

3. Not Design is the null, and Design is H1. Now you lump Chance and Necessity in together as the null, as being the only two Not-Design possibilities.

But they all boil down to the same thing, so pick the one you are happiest with.

Well, not to my satisfaction! But that’s a separate issue. I am not debating Dembski here (and never have), I’m debating you. If you are happy with Chance as H0, that’s fine.

And I’d be grateful if you would link to a source in which Dembski “addresses necessity in the context of chance as the null”. I could have missed something. Anyway, while we may not yet be on the same page, at least we now both seem to be holding the book the same way up.

No problem. I have a tough nut.

PS: I found the comment by Dembski in which he says:

http://www.uncommondescent.com.....ent-299021

And the link he gives is to the paper we’ve both just been looking at, so I think I’m up to date.

Dr Liddle:

Pardon an intervention.

Perhaps, you need to look at WAC 30, top right this and every UD page; on the EF.

You will see that he is in effect saying that the conceptual framework of the EF needed updating (hence the per-aspects approach I have used) and that the relevant part of the EF is captured in the CSI concept. That is, once one is dealing with high contingency, addressing the presence of CSI is equivalent for the relevant aspect of an object or process.

This is exactly so, and a further update would address the log-reduced form of the Chi metric, say:

Chi_500 = I*S – 500, bits beyond the solar system threshold.
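Taking the Chi_500 expression above at face value, here is a minimal sketch (the function name is mine; the 1,000-coin and 143-character ASCII figures follow the worked examples in this comment, assuming 7 bits per ASCII character):

```python
def chi_500(bits, specified):
    """Log-reduced chi metric as stated above: Chi_500 = I*S - 500,
    where I is the information measure in bits and S is 1 if the
    pattern matches an independent specification, else 0."""
    s = 1 if specified else 0
    return bits * s - 500

# 1,000 coins tossed at random: high I, but S = 0 (no independent spec),
# so the metric stays below the 500-bit threshold.
print(chi_500(1000, specified=False))    # -500: no design inference

# ~143 ASCII characters (~7 bits each) matching an independent description:
print(chi_500(143 * 7, specified=True))  # 501: past the 500-bit threshold
```

The sketch only mechanises the arithmetic of the stated formula; the hard work, as the surrounding discussion makes clear, is in justifying the I and S values themselves.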

And, for those who have so hastily forgotten that the Durston et al table of values for FSC will fit right into this form and yield 35 values based on information metrics for protein families published in the literature in 2007, let me clip the relevant UD CSI Newsflash post on that:

Or to use the case of 1,000 coins that was already drawn to your attention, a string of such coins tossed at random will have a high I-value on the Hartley-Shannon measure I = – log p, due to the quirks involved. But S = 0, as any value would be acceptable, i.e. it is not specific, coming from a describable target zone T (apart from painting the target around the value ex post facto, which is not an independent description).

If on the other hand the same set of coins were to hold the successive bit values for the first 143 or so ASCII characters of this post, then we now have an independent description. And a very specific set of values indeed, so S = 1. I would have a somewhat lower information-carrying bit value than in the first case, as English text has in it certain redundancies. However that would not make a material difference to the conclusion. Case 2 will pass the 500-bit threshold and would be deemed best explained on design. Indeed, if you were to convert the solar system into coins and tables and flippers and recorders, then try to get to the specific zone of interest as defined, by overwhelming likelihood on exhaustion of the P-time Q-state resources of the about 10^57 atoms involved, you would predictably fail.

And yet, by intelligence, I wrote those first 20 or so words in a matter of minutes.

This is an example of how the best explanation for FSCI is intelligence.

And, it is not so hard to understand, or ill-defined, etc etc as ever so many objectors like to pretend.

So, Dr Liddle, please understand our take on all this since March or so especially. What we keep on seeing from our end is drumbeat repetition of long since adequately answered and corrected talking points, backed up by trumpeting of patently false claims all over the Internet; accompanied by the worst sorts of personal attacks, as I will link on just now.

That tells us, frankly, that we are not dealing with people who are interested in truth but in agendas.

So, please, please, please, do not try to get your “understanding” of ID mainly from the sort of critics who hang out at Anti-Evo [see below and onward on fellow traveller Y], ATBC and even MF’s blog [as in TFT, who submitted the headlined comment in the linked below], etc., not to mention Wikipedia’s hit-piece that so severely undermines their credibility.

Such are increasingly showing themselves to be driven by deeply questionable agendas.

FYI, right now, updating my own retort yesterday to a barefaced, Mafioso-style attempt to threaten my wife and children, I am finding out that someone else who was posting in pretty much the same manner and at the same time is associated with questionable photography of young girls.

FYFI, one of my children happens to be a young girl.

GEM of TKI

Right. I re-read the paper.

Well, I seem to have got it right. Dembski does indeed define the pdf as a 2D distribution of patterns under what he calls a “Chance hypothesis”.

And we seem to have Shannon Information along one axis and Kolmogorov Compressibility along the other.

So, let’s take a look at the pdf:

Let the east-west axis be the SI axis:

A short string, or a longer string but consisting of only a small number of possible characters, will tend to be low in Shannon Information, whereas a longer string, especially if consisting of a large number of possible characters (e.g English letters) will be higher in Shannon Information.

And let the north south axis be the compressibility axis:

A string that is easy to describe is highly compressible, while a string whose shortest description is itself is not compressible at all.

And on the up-down axis we have frequency (which, later, we can divide by the volume under the curve to give probability).

All values are positive.

Now there are lots of low SI strings that are highly compressible (sine waves, for example). So we have a high peak up in the north-east corner of the plot.

There are also lots of high SI strings that are not compressible at all (white noise, for instance). So we also have a peak at the South West corner of the plot.

However, we have only small numbers of low SI, low compressibility patterns, because if the pattern doesn't contain much information, then even if its shortest description is the pattern itself, that description will be quite short. So we have a low plain, near sea level, in the South East corner.

The interesting part is the North West corner – here, are patterns that have high SI (lots of bits) but are also fairly compressible.

They won't be *very* compressible of course, because they are so rich in SI, so the actual North West corner, like the South East corner, will be pretty well at sea level: near-zero numbers of high SI, highly compressible patterns.

So now we have the topography of the pdf under the null.

It’s a saddle, interestingly, not a bell (that’s because the two dimensions are not orthogonal – they are negatively correlated).

However, if we take a diagonal section from the South East corner to the North West corner, we will in fact see a bell curve (actually, any diagonal SE-NW section will be a bell), and that’s the one we are interested in. We are not interested in White noise (South West) nor in sine waves (North East).

And nor are we actually interested in the very low probability patterns in the South East, where SI is low and compressibility is low. That's the kind of pattern produced by clumping processes. But as we travel from South East in a North Westerly direction, the terrain rises, and we start to encounter some interesting patterns with greater frequency – patterns that have quite a lot of SI, but also quite a lot of compressibility. And these are quite common – snowflakes, vortices, fractals.
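A rough way to place concrete strings on this SI-compressibility map is sketched below. This is my own illustration, not anything from Dembski's paper: empirical per-symbol entropy times length stands in for Shannon Information, and the fraction of bytes saved by `zlib` is only a crude stand-in for Kolmogorov compressibility.

```python
import math
import os
import zlib

def shannon_bits(s: bytes) -> float:
    """Total empirical Shannon information: length times per-symbol entropy."""
    n = len(s)
    counts = {}
    for b in s:
        counts[b] = counts.get(b, 0) + 1
    return n * -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressibility(s: bytes) -> float:
    """Crude Kolmogorov stand-in: fraction of bytes saved by zlib."""
    return max(0.0, 1 - len(zlib.compress(s, 9)) / len(s))

repetitive = b"ab" * 200   # "sine wave": low SI, highly compressible
noise = os.urandom(400)    # white noise: high SI, incompressible
english = (b"It was the best of times, it was the worst of times, it was the "
           b"age of wisdom, it was the age of foolishness, it was the epoch "
           b"of belief, it was the epoch of incredulity, it was the season "
           b"of Light, it was the season of Darkness, it was the spring of "
           b"hope, it was the winter of despair.")  # higher SI, fairly compressible

for name, s in [("repetitive", repetitive), ("noise", noise), ("english", english)]:
    print(f"{name:10s} SI = {shannon_bits(s):7.0f} bits, "
          f"compressibility = {compressibility(s):.2f}")
```

The English text lands in the interesting region: more Shannon Information than the repetitive string, yet far more compressible than the noise.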

So, here we have, subsumed under the null hypothesis Dembski calls “Chance”, processes that he once referred to as “Necessity”, and which kf would call “low contingency processes” – compressibility is high (a simple contingency statement will generate the string) but SI is also high (patterns are large, and may have many different possible features, so there are a lot of bits).

But we continue to travel NW. Recall that this null is called "Chance" and that, if rejected, we infer "Design".

Dembski's contention is that as we continue to travel North-West, under the null landscape of "Chance", the land will start to fall. We will reach a region in which compressibility is high, and SI is high, but there are very few, if any, patterns.

But, lo and behold – we find some!

First of all we find the Complete Works of Shakespeare. Then we find A Watch on a Heath. Then we find a living cell!

These objects shouldn't be here! Under the null, this level of compressibility is not compatible with this level of Shannon Information! Sure, it's not *very* compressible, but it's a heck of a lot more compressible than we'd expect under the null! Indeed, under the null, the probability of finding such a thing is so low that we should have no more than an even chance of finding just one out of all the patterns in the universe! And here we have lots!

Something Fishy Is Going On.

Yes?

Mr Mark F: Pardon an intervention, but I believe, in light of what you may read in the linked (starting with the headline, clipped from TFT) and in the overnight addenda at F/N 2 and F/N 3, you have some explaining to do in the context of the behaviour of those who have been regular commenters on your blog.

This matter is serious enough — cf my just above to Dr Liddle — that your habitual tactic of ignoring anything I have to say at UD is not good enough.

GEM of TKI

kairosfocus:

I am NOT “getting my “understanding” of ID from the sort of critics who hang out at Anti-Evo etc”.

I’m reading Dembski’s papers.

And as I’ve said a few times, I’m not even disagreeing with you.

Your inference that I am not posting in good faith is false.

I have been absolutely up front about the fact that I don’t think Dembski’s argument works, but before we discuss that, I want to make sure we are on the same page as to what the argument actually is.

Namely:

That it is based on Fisherian hypothesis testing.

That H1, however we phrase it, is the hypothesis we consider supported if our observed pattern falls in the rejection region under the null.

That we can use either a two-stage filter (Chance, and Necessity, in turn, as the null), or subsume the null into a Chance hypothesis (as Dembski does in the paper I just linked to).

That the null space has two dimensions – Shannon Information, and Kolmogorov Compressibility.

And that if something falls in the far tail at the North West corner, where Shannon Information and Kolmogorov Compressibility are both high, and if the probability is low enough (i.e. below an alpha set as a function of the number of events in the universe), we can conclude Design.
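For what it's worth, the bare logic of that last step can be sketched in a few lines. This is my own toy illustration, not Dembski's actual calculation: the 10^150 resource bound, the uniform "Chance" null, and the even-chance criterion are all assumptions taken from the thread's discussion.

```python
# Dembski-style universal bound: assumed figure from this thread's discussion.
UNIVERSAL_RESOURCES = 10 ** 150

def design_inference(target_zone_size: int, space_bits: int) -> bool:
    """Reject the Chance null when even all available probabilistic
    resources leave less than an even chance of hitting the target zone."""
    p_target = target_zone_size / 2 ** space_bits  # uniform Chance hypothesis
    return p_target * UNIVERSAL_RESOURCES < 0.5

# 1,000 coins spelling out one specified text: 2^1000 ~ 10^301 configs.
print(design_inference(target_zone_size=1, space_bits=1000))  # True
# 100 coins: the space is well within the probabilistic resources.
print(design_inference(target_zone_size=1, space_bits=100))   # False
```

The point of the sketch is only that the "alpha" here is fixed by counting events in the universe, rather than chosen by convention as in ordinary Fisherian testing.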

I got this nowhere except from Dembski’s papers.

I see nothing in your posts that conflicts with it.

It seems to me a perfectly good, at least in principle, way of making an interesting Inference about causality from a pattern.

Obviously I am a guest here, and if you do not want me to raise issues that you think have already been addressed, that is your prerogative.

But right now, the only issue I have raised is simply Mung’s methodological parsing of Dembski’s methods, not the methods themselves.

tbh, I prefer the EF.

Dr Liddle:

Your just above, at 135 is again a bit disappointing, given the discussions that have already been gone over again and again in recent days. I particularly refer to:

I must again point out that — as the two successive decision nodes in the flowchart shown here emphasise — the whole EF process begins with NECESSITY as default, contrasted with high contingency.

This is as can consistently be seen in both descriptions and diagrams, since the 1990s.

This was already pointed out to you, complete with links and clips from Dr Dembski where he said in more or less these words, that the first default is necessity.

It is in the context where we see a wide variety of possible and/or observed outcomes under sufficiently close starting conditions that we see that such high contingency must be explained on chance and/or choice. For we have already seen that the sign of natural regularity, pointing to an underlying law of mechanical necessity like F = m*a, is not relevant.

Once we are in the regime of high contingency, we then observe certain differentiating signs: chance processes follow blind stochastic patterns such as are modelled on random variables. So, if we see the evidence of such a pattern, one may not safely infer, on best explanation, to choice rather than chance. This, even though choice can indeed imitate chance.

It is when we find ourselves dealing with an independently specifiable zone in the field of possible configurations, where at the same time that set of possibilities is so vast as to swamp the resources of the solar system or the cosmos as a whole, that we pay attention to the contrasting capabilities of choice. As I just again put up, if the first 1,000 ASCII characters of this post were to be seen in a set of coins, then we have strong grounds to infer to choice, not chance, as best explanation.

That is because, even though strictly chance could toss up any particular outcome, the dominance of the relative statistical weights of meaningless clusters of outcomes, in light of available resources, once we pass 300 – 1,000 bits worth of configs [10^150 to 10^301], would make it maximally unlikely on the face of it that we would ever observe such a special config by chance.

Indeed, this sort of reasoning on the relative statistical weights of clusters of microstates is close to the heart of the statistical justification for the second law of thermodynamics. For instance, if you were to see a room in which all the O2 molecules were mysteriously clustered at one end, and a dead man at the other, dead of asphyxiation, you might not know how 'tweredun, or whodunit — could not be a human being under present circumstances — but you would know that the man was dead by deliberate action.

And, this shows by the way just how relevant a design inference is to scientific contexts of thought.

So, Dr Liddle, your repeated error is to think of a single inferential decision to be made, not a structured pattern of decisions, where, on empirically and analytically warranted signs, we first expect necessity; then, on finding contingency, we expect chance; and we only conclude choice when we find that the pattern of the highly contingent outcome is well fitted to choice and ill fitted to chance.

I find it both extremely puzzling and even frustrating to see us going back over this point again and again, when the diagram — as long since drawn to your attention, repeatedly — is direct and simple.

Please explain.

GEM of TKI

F/N: to add braces to belts, let me specify:

FIRST DECISION NODE:

DEFAULT, 1: Mechanical necessity leading to natural regularity, e.g. F = m*a, as in how a dropped heavy object falls at g

REJECTED: If there is high contingency for the relevant aspect of the object, such as the variety of readings of a common die, from 1 to 6.

REMAINING ALTERNATIVES: Chance or choice, per empirical and analytical grounds.

SECOND DECISION NODE:

DEFAULT, 2: Chance, i.e. stochastic outcomes similar to what a die will tumble to and read, if it is fair.

REJECTED: If we find FSCI or the like, whereby the outcomes are from zones sufficiently isolated and independently specified in the space of possibilities, that chance is much less reasonable — though strictly logically possible — than choice. For instance, text in coherent English in this thread is explained on choice not chance.

REMAINING ALTERNATIVE: Such a phenomenon is best explained on choice, not chance.
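The two decision nodes above can be paraphrased as a small function. This is my sketch of the flowchart as described in this thread, not any canonical implementation; the boolean inputs and the 1,000-bit default threshold are assumptions drawn from the discussion here.

```python
def explanatory_filter(high_contingency: bool, specified: bool,
                       info_bits: float, threshold_bits: float = 1000) -> str:
    """Two-node explanatory filter, paraphrased from the thread's description."""
    # Node 1: low contingency -> natural regularity, e.g. F = m*a.
    if not high_contingency:
        return "Necessity"
    # Node 2: highly contingent, independently specified AND beyond the
    # complexity threshold -> choice (design).
    if specified and info_bits >= threshold_bits:
        return "Choice"
    # Otherwise the default explanation for high contingency is chance.
    return "Chance"

print(explanatory_filter(False, False, 0))    # a dropped object falling at g
print(explanatory_filter(True, False, 50))    # a fair die's tumbling
print(explanatory_filter(True, True, 7000))   # ~1,000 ASCII characters of text
```

Note how the second node is only reached after the first default is rejected, which is the structured, sequential point being argued above.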

Oh, and kf, I’m not sure what has been going on with regard to the cyberstalking issue, but I just want to make it absolutely clear that you have my total sympathy.

I’ve been on the receiving end of that kind of thing myself, and it is an experience I do not wish to repeat.

There is no excuse for that kind of behaviour. It’s appalling.

I wish you and your family the very best.

Lizzie

F/N 2: Dr Liddle, you do not come across as a sock-puppet, which is what Mg plainly turned out to be. His drumbeat repetition of adequately answered points was joined to utter unresponsiveness on the merits and unwillingness to do even the most basic courtesies of discussion, e.g. in the context of Dr Torley's long and patient explanations. (Anti-Evo et al, your attempts to turn the issue around would be amusing if they were not pathetic.)

You do not fit that profile, so the issue is, that there is something that seems to be blocking a meeting of minds. Even, after it SEEMS that minds have met, as the post I just clipped from indicates. For I am not seeing the two stage empirically and analytically referenced decision process that the diagram indicates and as has been repeatedly discussed, but an attempt to collapse two nodes into one, like it is being force-fitted into an alien framework.

What is the problem, please, and why is it there?

Is it that the classic inference on being in a far enough skirt to reject the null is a one-stage inference?

If that is it, please be assured that the design inference is a complex, case-based decision node structure, not a simple branch; it is in this sense irreducibly complex. (In today's object-oriented computing world, do they still teach structured programming constructs? Sequences, branches, loops of various kinds, case decision structures, and the like? Is this the problem, that flowcharting has advantages after all that should not be neglected in the rush to pseudocode everything? A picture is worth a thousand words and all that? [BTW, I find the UML diagrams interesting as a newish approach. Anybody out there want to do a UML version of the EF, that us old fogeys cannot easily do?])

kf: I apologise, yes it seems I misordered the sequence of the filter.

However, the sequence of the nulls was not the issue I was raising. I fully accept that Necessity is the first null to be eliminated in a two stage process.

But if the CSI is used as the criterion for rejection of “a Chance Hypothesis” Dembski appears to be including “Necessity” in that portmanteau null.

I assume that’s why he feels that the separate stages can (not must!) be dispensed with.

After all, the null space for CSI must include both chance events (noise) and Law-like events (sine waves, crystals) etc, as these can all be found somewhere on the SI-KC grid.

I take it, though, kf, you agree with Dembski when he says that:

If so, would you agree that CSI patterns are those that have high Shannon Information, high Compressibility, and are extremely improbable under the null of "non-choice" (that seems a good way of phrasing it), given the finite probabilistic resources of our universe?

Hi, kairosfocus:

No, I’m not. Elizabeth Liddle is my real name, if you google it, a lot of the hits are me (not the pet sitter and not the deceased!).

Quite a lot of first hits on google scholar are me too (not the first one, though).

Well, that’s what I’m trying to sort out!

Try my posts at 138 and 145, and see if they make sense

Well, CSI seems to be a one stage inference, which is quite neat. But I have no problems with a two-stager.

Don’t know much about UML, but yes, sequences, branches, loops, case decision structures etc are all still there.

And yes, a picture is worth a thousand words, which is why I tried to paint at least a word picture of the null space in 138.

But that doesn't mean it is an alternative to the decision tree stuff – you'd certainly need decision trees, IMO, to construct that null space for real patterns, because you'd have to figure out just how deeply nested your contingencies could be under that null. As we travel "north-west" on my plot, the depth of contingency must increase.

So what I have there (courtesy of Dembski) is simply a continuous version of the filter, where, as you travel north-west (increasing both SI and KC together), you require processes of ever more deeply nested contingency layers to produce them. And so, if deeply nested contingency is the hallmark of "choice" processes, and "choice" is excluded under the null, then if you find more patterns than you'd expect under the null in that North West corner, you can make your Design Inference.

I’m pretty sure that is what both you and Dembski are saying!

See what you think.

Going out on my boat shortly, so I’ll catch you later

#139 KF

I am sorry – I just noticed this. As you know I avoid being drawn into debates with you because I find your posts and comments extremely hard to understand (quite possibly because of my limitations). I am afraid this is also true of the post you linked to. It appears someone has made a very unpleasant and silly comment about you. I am sorry about that. I am unclear who did it, what exactly they did, what it has to do with me, and what you want me to do about it.

Please can you reply in concise clear plain English – I am slow of study when it comes to your writing.

Tx

Dr Liddle:

Appreciated, and I certainly hope this will be reasonably resolved shortly. I hope your own situation was positively resolved.

What this started with was abusive commentary by TFT at MF’s blog, hence my repeated request that MF explain himself.

I then received blog comments at my personal blog from TFT announcing his own new attack blog, in the most vulgar and malicious terms, accusing me of being involved in a homosexual circle with UD's staff.

It went downhill from there, and of course along the way some seemed to think it their right to do web searches, dig up my name and plaster it all over derogatory and vulgar comments. My objection to the use of my name, in recent years has been that this leads to spam waves in my email.

Unfortunately, I must now add that it has led to outright cyberstalking.

The headline for the already linked post in reply is an excerpt from a submitted KF blog comment that has to be seen to be believed, and this is part of a cluster of about 20 abusive submitted comments. (Since my post yesterday, the spate of abusive commentary seems to have stopped, at least for the moment, in my in-box and spam folder.)

Now, my wife has almost no internet presence, and my wife and our children are utterly irrelevant to any debates over the explanatory filter, the mathematics of CSI, or linked worldview and theological issues, etc. So, you can understand my shock to see, in the midst of pretty nasty commentary on my theology —

. . . TFT let out the Mafioso tactic, snide threat of a “greeting” to my wife and children, with an attempt to name my wife.

Now, that is a threat, as anyone who has watched the old Mafia movies would know.

Worse, the set of comments had in them repeated extremely unhealthy remarks on sexual matters, as already indicated and other things of similar order.

There were two persons involved: one we are calling TFT, and let us just say person of interest Y. Y's comments plainly fit in with TFT's in timing and substance.

Y happened to post his main contribution in the cluster of comments that were captured by Blogger’s moderation feature, in response to a post I made on the Internet Porn statistics published by the Pink Cross Foundation.

This foundation is a group of former porn so-called "stars", who expose the abuses and destructive nature of the so-called porn industry.

The picture they paint is unquestionably accurate and utterly horrific; porn is . . . I am at a loss for strong enough words.

I have a mother, I had grandmas, I have a wife, I had a mother in law, I have sisters in law, I have second mothers of the heart of several races and nationalities, I have aunts, one semi-official big sister, many other sisters of the heart [I just responded by email to one who just got married], I have a daughter, I have daughters of the heart.

I would NEVER want any of these degraded and abused like that.

Period.

Women and girls are not meat toys to be ogled, prodded, played with, misled, intimidated into all sorts of acts, be-drugged, ripped up physically and emotionally and spiritually for the delectation and profit of drooling demonised dupes.

Period.

Here is a clip from commenter Y’s remark (which he has trumpeted to others elsewhere), in response to PC’s observation that the most popular day for viewing web porn — remember, they are saying this is implicated in 58% of divorces, according to the claim of divorce lawyers — is Sunday:

Now, in trumpeting the post I will not publish at my personal blog, Y gave a link to a sexually themed site he operates (the title is about “seduction . . .”).

In following up that link, I came across a site where he advertises "intimate" photography, which on my opening it up led with a picture of a young girl in a nudity context, at least as obscured by the use of a hot tub. But that is what he puts up in *public*.

This clearly confirms to me that I am right to be seriously concerned about cyberstalking, and that the extension of privacy violations from myself to an attempt to "out" my family is a dangerous nuclear-threshold escalation.

I am taking the steps I indicated, and of course there is more evidence than I have said here.

This is a watershed issue, and the time has come for the decent to stand on one side of the divide. The sort of abusive, hostile, privacy-violating, vulgar and slanderous commentary routinely tolerated or even encouraged at anti-design sites must stop. Now.

And, since this started at MF’s blog, he needs to be a part of that stopping.

In particular, note, I have no way of knowing if these online thugs have confederates or fellow travellers here. So I have little choice, other than to make sure prior complaint is on record, and to initiate investigatory proceedings as appropriate.

(I do know that in this jurisdiction, the law embeds a principle that UK law will be applied where there is no local statute, and as I have cited in my onward linked KF blog post, that law is quite stringent. Indeed, TFT, Y et al should reflect on the point that in UK law, harassment aggravated by religious bigotry multiplies the relevant gaol sentence fourteen times over, to the equivalent of what in the US would be a serious felony. They have made their anti-religious antipathy plain as a key motivating factor in their misbehaviour.)

These men have crossed a nuclear threshold, one that cannot be called back.

Bydand

GEM of TKI

Erratum (me at 147):

Mr Mark F: Pardon, given the gravity of what is happening, I MUST BE DIRECT:

Kindly drop the pretence of "misunderstanding." You are qualified in philosophy and have been in a major business, by your own admission, presumably for many years. So, we can presume a reasonable level of understanding.

Beyond this, I have had more than enough responses to the post to know the message is quite clear — right from the headline — to people of reasonable intelligence, and the underlying event is an outrage that needs to be addressed on the substance, not on clever rhetorical side tracks.

Nor, am I interested in a debate with you or anyone else; but in corrective action on your part, after you have explained yourself for harbouring the sort of character that has stooped to the behaviour I had to headline.

In other words [and as I warned against on the record both here and at your blog],

your blog has a cyberstalker who has been entertained there and has now gone on to even more extreme behaviours. A man whom you have harboured at your blog site, who goes by the moniker The Whole Truth, set up an attack blog and has indulged himself in outing behaviour that targets my wife and children, who have nothing whatsoever to do with design theory, or with debates over worldviews or related theology. There has been an attempt to out my wife by name and our children by allusion, echoing the notorious Mafioso tactic of taking hostages by threat.

That is a direct threat, and the linked factor of repeated unwholesome sexual references multiplies my reason for concern. Overnight, I have learned that the second participant in the wave of nasty outing-themed spam is involved in questionable sexually themed matters related to porn.

That confirms the reasons for my concern that I am dealing with cyberstalking of horribly familiar patterns.

Remember, too, I do not know whether these have confederates or fellow travellers here. That is a direct context in which next week, having first served notice of warning, I will be speaking with the local prosecutors’ office and the local police.

So, the time for word games is over.

Please, explain yourself, and take corrective action.

BYDAND

GEM of TKI

Kf:

Well, not exactly resolved, but it blew over. But there was a time when googling my name threw up all kinds of derogatory claims about me, and I was embarrassed (to say the least) about what anyone who knew me would think (of course they were all false). The worst was when someone sent me a fat letter full of abuse, and the envelope was so stuffed that it was intercepted by the police who checked it for anthrax (it was, fortunately, just pages and pages of handwritten rant).

That’s when things got scary.

But even google forgets!

Do what you have to do, and try not to let it distress you (easier said than done – I lost both weight – which I could afford! – and sleep – which I couldn’t).

It passes.

:hug:

Lizzie

Dr Liddle:

Pardon, something of graver import came up.

The EF and the use of CSI in the sort of reduced form I show are equivalent, once it is understood that the relevance of the concept of information, as measured on or reduced to strings of symbols with multiple alternative values in each place, implies high contingency. If information is a possibility, you have passed the first node; the question now is whether the contingency is meaningful/functional, specific and complex in the sense repeatedly discussed.

So, on your:

. . . in actuality, the very fact that information is an open question means that necessity has been broken as first default. This is also not a process of simple elimination on a test, as there is an inference on empirically tested reliable sign involved.

Similarly, when you say:

. . . the problem is that the first stage of analysis turns on something that is common to chance and choice but not necessity, so there cannot properly be any explicit clustering of chance and necessity that carries the import that they have been sliced away with one test. One may IMPLICITLY rule out necessity as information is a possibility, so high contingency is implicated, but that is different. First lo/hi contingency, THEN y/n on specified complexity or a comparable.

In this context, the CSI metric therefore implies application of the EF, which is why it can be "dispensed with" in analytical terms. Though, as well, it is important to observe the elaboration I supplied on ASPECTS. For different aspects of an object, process or phenomenon can be traceable to each factor. The overall causal story may be complex.

Take a falling die. As a massive and unsupported object, it suffers the mechanical necessity of falling under g. It hits, rolls and tumbles. On several samples we find that the distribution is materially different from flat random, though it does vary stochastically as well. Then we look and see it has been subtly loaded, with a tiny dot of lead under the 1 pip.

All three factors are involved, each on a different aspect and the overall causal story has to account for all three. Just ask the Las Vegas Gaming Houses.

Next you ask:

Shannon information is a bit loaded, as that strictly means average info per symbol, not the Hartley-Shannon metric:

I = – log p,

or more strictly

Ij = log [(a posteriori probability of symbol xj)/(a priori probability of xj)],

which reduces to Ij = – log (a priori probability of xj) where the a posteriori probability is 1.
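In code, that reduction is a one-liner (my own trivial check; base-2 logs are assumed so the result is in bits):

```python
import math

def info_bits(p_prior: float, p_posterior: float = 1.0) -> float:
    """Ij = log2(posterior/prior); with certain reception (posterior = 1)
    this reduces to -log2(prior)."""
    return math.log2(p_posterior / p_prior)

# One symbol drawn from 128 equiprobable ASCII characters carries 7 bits:
print(info_bits(1 / 128))  # 7.0
```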

High compressibility, I take it, stands in for independently and simply describable, i.e. not by copying the same configuration, as we have to do with a lottery number that wins.

There is no null of "non-choice."

There is a possibility of necessity ruled out explicitly or implicitly on seeing high contingency. Then, the second default is chance, but that is obviously conditioned on necessity having already been ruled out.

With chance as default, this is ruled out on observing that the sort of config, and the set from which it comes per its description, will be so narrow in the scope of possibilities that the resources of either the solar system [our effective cosmos] or the cosmos as a whole as we observe it would be inadequate to mount a significant sample by search.

Remember, you are trying to capture something that is extremely UNREPRESENTATIVE of the config space as a whole. If your search is at random or comparable thereto — see M & D on active information and cost of search — then to credibly have a reasonable chance to catch something like that, your search has to be very extensive relative to the space. Needle in a haystack, on steroids. The relevant thresholds are set precisely where such searches, on the resources of the solar system or cosmos as a whole, are grossly inadequate by a conservative margin. And I favour 1,000 bits as threshold because it makes the point so plain.
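The arithmetic behind that "needle in a haystack" point is easy to check. This is my own back-of-envelope sketch; the 10^150 figure is the resource bound stated in this thread, not a measured quantity.

```python
# Back-of-envelope: even 10^150 tries sample a vanishing fraction of 2^1000 configs.
space = 2 ** 1000   # about 1.07e301 configurations of 1,000 bits
tries = 10 ** 150   # assumed bound on events available to a cosmic-scale search
fraction = tries / space
print(f"searchable fraction of the space: {fraction:.2e}")  # roughly 9.3e-152
```

Whatever one makes of the inference, the sampled fraction really is that small, which is the sense in which the 1,000-bit threshold is meant to be conservative.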

Of course, I am aware you are real, as real as I am, real enough to be plagued by cyberstalkers.

Back on topic, oops:

. . . is an error. The use of CSI is based on implicitly ruling out necessity, and in analysing why it works, that needs to be understood.

Also,

A d-tree structure is not the right one here. What is happening — as the loaded die story shows — is a complex inference to best explanation in light of expertise, not just a decision process that comes out at an overall value. The flowchart, understood as an iterative exercise that explores the material *aspects* of significance of an object, system, process or phenomenon, is a better approach. That is, this is actually an exercise in scientific methodologies across the three major categories of causal factors, with relevant adjustments as we go.

Hope this helps,

GEM of TKI

Dr Liddle:

Thanks for the advice. And the cyber-hug.

I have had to further counsel my children to watch for stalkers and what to do.

Also on online security, with special reference to those ever so prevalent social networking sites.

And next week, I will have some unpleasant interviews, on top of everything else that seems to be at crisis stage.

But, once someone has crossed that cyberstalking threshold and the stench of porn-skunk is in the air, I will see the battle through.

I am publicly sworn at our wedding to defend my wife, and I am duty-bound as father to protect my children.

Bydand

GEM of TKI

KF #151

I am sorry. To be precise I expect I can understand your posts and comments but I find it extremely hard work. No doubt others are prepared to work harder, are cleverer or are more in tune with your style. You may not believe me but that is the truth.

I am still unclear as to what you want me to do. If this unsavoury comment came from “The Whole Truth” I asked him or her many weeks ago to stop posting on my blog and he/she did stop. I think you know that.

I get the impression that this comment comprises a threat to you and/or your family. This is obviously serious and it is important that as few people as possible read or even know about the comment. May I suggest you minimise the risk by:

1) Deleting the offending comment

2) Banning that person from further comments

(I imagine you have done both these)

3) Removing all references to that comment wherever you can, including your post and comments on UD (including this one)

4) Ceasing public discussion of the comment

If you wish to take it up further feel free to contact me by personal e-mail (where the comment will get less public exposure). My e-mail address is: mark dot t dot frank at gmail dot com.

Mr Frank:

I am glad to see that you asked TWT to leave your blog as a forum; which is news to me.

This commenter is responsible for the misbehaviour that I headlined, and the spurt of comments in which this occurred is also connected to another I am for the moment calling Y, who has very unsavoury connexions onwards.

I will communicate onwards, and obviously I am not providing details here beyond the summary already given. It does seem that the spurt of abusive commentary has halted for the moment, as of my public notice and warning.

This incident should make it very clear to all that “outing” behaviour is not a harmless form of teasing.

As the frog in the Caribbean story said to the small boy approaching, stone in hand: "fun fe yuh is death to me!"

Good day

GEM of TKI

PS: I found the comment by Dembski in which he says:

I've pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.

Hehe, too funny. I was going to post that same quote and the link to it.

Do you see where he says they are *not* mutually exclusive?

I think I need to go buy cat food. It's such a nice day here today. I'll try to spend more time in this thread when I get back. I have a lot of catching up to do.

You and markf both sort of hit on the same thing:

markf:

I absolutely agree that the null hypothesis (H0) for Dembski is “chance”. H1 can be expressed as either:

* Not chance

Or

* Design or necessity

you:

As long as you aren’t worried about Necessity as an non-design alternative to Chance, that’s fine. After all, I don’t think Dembski uses the EF any more, does he?

Question:

How could something which exhibits a pattern we would attribute to necessity fall into Dembski’s rejection region?

To put it another way, doesn’t the rejection region embody contingency, which is rather the opposite of necessity?

Mung:

This is a case where once information is a possibility, you already are implying high contingency. That is, the first node has been passed on the high contingency side. Under rather similar initial conditions many outcomes are possible, tracing to chance and/or choice.

Necessity being off the table, the options to explain high contingency are chance and choice. Then, the issue of CSI tells the difference, on grounds repeatedly discussed: chance is going to be dominated by the stochastic patterns reflective of relative statistical weights of different clusters of microstates. Where also, the sort of things that you get by choice — like the text string in this post — are strongly UNREPRESENTATIVE of the overwhelming bulk of the possibilities.

But, as I pointed out above, it is easy to misread this through overlooking — let’s be nice — the implicit point that if high contingency is on the table, for a given aspect of an object, process or phenomenon, then mechanical necessity issuing in natural regularity is not a very good explanation. That is, a mechanical force of necessity would, e.g., mean that objects tend to feel a down-pull of 9.8 N/kg, leading to a certain rate of fall if they are dropped.

But, if one is determined not to see the obvious or to make objections, one could probably make the above sound like nonsense [nope, it is just a sketched outline that can be filled in in much more detail], and if there is an attempt to cover all bases in a description then one can object that it is convoluted, not simple. To every approach there is an objection, for those determined to object . . .

BTW, I addressed this above, giving as well the link to UD WAC 30 that addresses it, top right this and every UD page.

GEM of TKI

yes

The former first stage of the EF.

Not to revisit ground already covered, but perhaps you can see now where some of the objections were originating from.

And necessity is subsumed under chance. In Debating Design, from 2004, for example, Dembski writes:

“To sum up, in order for specified complexity to eliminate chance and detect design, it is not enough that the probability be small with respect to some arbitrarily chosen probability distribution. Rather, it must be small with respect to every probability distribution that might characterize the chance occurrence of the thing in question. If that is the case, then a design inference follows. The use of chance here is very broad and includes anything that can be captured mathematically by a stochastic process. It thus includes deterministic processes whose probabilities all collapse to zero and one (cf. necessities, regularities, and natural laws). It also includes nondeterministic processes, such as evolutionary processes that combine random variation and natural selection. Indeed, chance so construed characterizes all material mechanisms.”

“Indeed, chance so construed characterizes all material mechanisms.”

And if intelligence and intentionality (the ability to design) can be embodied in a purely material mechanism then design is also chance?

Excuse me Elizabeth, but why do you continue to refer to the EF as a two stage process (see 145)?

Dembski:

regards

Hi DrBot.

I’m not sure I understand the question.

Are you essentially asking whether if we were to write a computer program and save it to disk and then run the program, that we would have to attribute the design of the program to chance because it had been embodied in a physical medium and been run using a purely material mechanism?

Or are you asking about if we were to design a program that could itself design programs?

Mung, I quite agree that those possibilities are not mutually exclusive. But the two hypotheses tested have to be, under Fisherian hypothesis testing.
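As an aside, the Fisherian logic being appealed to here can be sketched in a few lines of Python. This is a toy fair-coin example of my own choosing, not anything from the thread — just to make the rejection-region idea concrete:

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): how probable a result at least
    this extreme is under the null hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Null (H0): the coin is fair.  Alternative (H1): it is not.
# Suppose we observe 98 heads in 100 flips:
p_value = binom_tail(100, 98)
alpha = 0.05
reject_null = p_value < alpha   # the observation falls in the rejection region
print(reject_null)  # True
```

The p-value here is on the order of 10^-27, so the null is rejected at any conventional alpha — which is exactly the "probability of the observed under the null" move described earlier in the thread.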

No matter. We are now “cooking on gas” as they say around here

Cool.

kf – yes, thanks for the correction re Shannon Information. Yes, I meant total bits not mean bits.

Do we agree then, that if we plot the complexity of a pattern (in bits) along one axis, and the compressibility (in some units to be decided!) along the other, then, if we plot patterns found in nature along these two axes, the two will tend to be negatively correlated?

But that there will be a bell curve through a section cut at right angles to the negative slope?

And that CSI patterns will be those towards the top right hand corner?

(I wish I could post a plot – I’ll try to host somewhere and post a link).

Hi Lizzie, I think the EF is easier on the eye. But I also want to understand how the CSI calc works into things as presented in Dembski’s 2005.

For an example of the first stage of the EF see again my post as far back as #29.

I think I was trying to make two points.

The first thing we try to eliminate is necessity. But the other option is not chance, it’s contingency.

And that is because contingency includes both chance and choice (as kf likes to put it – and I think is a good way to put it as well).

I really liked your description in 138. A different mental picture from the EF, but an image nonetheless.

Not a problem, I’m getting used to it 😉

Dembski is defining anything that arises from the operation of the physical world as chance. We are capable of design so if our ability to design is a result of the operation of the physical world then Dembski is including design under the category of chance. IF our design abilities are based on physical law then any comparison of human design with biology is a comparison of chance (our ability to design) with biology (an unknown origin).

kairosfocus:

Excellent point. High information content requires high contingency.

I also like how you phrase the issue in terms of a search for the zone of interest.

Given all the material resources at the disposal of the universe/solar system since its inception, would we expect a search to land here?

So we can frame the hypotheses in terms of a search.

And we can combine the concepts of search and information. One can ask how much information a search would require to find the item of interest.

It could definitely help to think in terms of a search, and the information required for the search, and that is for sure the direction Dembski/Marks have taken.

That’s an interesting conundrum DrBot. We’ll have to see if we can work it out.

At first blush I’d say it begs the question of whether the human capacity to design is the result of purely materialist physical forces.

If there’s some mechanical law at work it’s indistinguishable from chance, so I don’t know why we’d think there was some physical law that our design abilities are based on.

Do you dispute that human designers serve as an intelligent cause? It seems that you have to at least accept that much or you’d object to the design inference on those grounds.

Do you believe the material world, all that is physical/material is intelligent?

Surely configurations of matter and energy which exhibit the capacity for intelligence and design are ubiquitous in the solar system.

A search for just one should be a simple thing for unguided, non-intelligent, materialist-physical forces to carry out with success.

Or not.

Mung and others:

Footnotes; as we may know I have had to be busy elsewhere this weekend, including a first conversation with an attorney from my local prosecutor’s office.

Now, the EF is first concerned with the question of contingency. Things that under similar initial conditions run out under deterministic dynamics of mechanical necessity per some differential equation model or another, are following lawlike regularities.

BTW, I appreciate that this is a limiting case of a chance process as a random variable that is always 1 is technically still a distribution; necessity can thus be enfolded into chance. But that is a tad pedantic, pardon. And, it cuts across the insight of the EF that it is high contingency that leads us to infer — on massive empirical base — to chance or choice.

And, yes, one may build a tri-nodal form of the case structure of the EF. But, it seems simpler and plainer to just do SPECIFIED AND — logical operator sense — COMPLEX BEYOND A THRESHOLD.

This also emphasises the point that the thing must be jointly — and simultaneously on the same aspect — specified AND complex beyond the bound in question to be credibly inferred as best explained on choice, not chance contingency.

Also, there is indeed a model by Abel et al that visualises a three dimensional frame with particular reference to random, ordered and functional sequence complexity. This is the framework for Durston et al’s metric of FSC, which I have slotted into the log-reduced form of the Dembski et al metric. In doing that, I took advantage of the generally relaxed attitude of practising engineers to information metrics: there’s more than one way to skin a cat-fish. But once skinned they fry up real nice and tasty.

So, bearing in mind the issue of contingency, we can then see how CSI and the EF are both formally equivalent in force and complementary in how we seek to understand what the design inference is doing.

And yes, search is a reasonable frame of thought, as Marks and Dembski are now profitably exploring using the concept of active information.

GEM of TKI

F/N 2: Following up on links from my personal blog [in connexion with the threats against my family], I see where Seversky [aka MG? at minimum, of the same ilk] is still propagating the demonstrably false talking point at Anti-evo that CSI cannot reasonably be calculated.

This is an illustration of the willful resistance to plain truth and patent hostility that have so poisoned the atmosphere on discussions of ID; now culminated in cyberstalking.

Let me link and excerpt, slightly adjusting to emphasise the way that specificity can be captured in the log reduced form of the Chi equation.

The same, that was deduced and presented to MG et al in APRIL — with real world biological cases on the Durston et al metric of information — and has had no reasonable response for over two months. Namely:
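(Reconstructing from Dembski’s 2005 definition of Chi — the form of the reduction below is my paraphrase, not a quotation, with the threshold rounded up to kf’s 500 bits:)

```latex
\chi \;=\; -\log_2\!\left[\,10^{120}\,\varphi_S(T)\,P(T\mid H)\,\right]
     \;=\; I_p - \left(398 + K_2\right),
\qquad I_p \equiv -\log_2 P(T\mid H),\quad K_2 \equiv \log_2 \varphi_S(T)
```

since $\log_2 10^{120} \approx 398.6$; a design inference is then drawn when $\chi_{500} = I_p - 500 > 0$ bits.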

In the face of such a log reduction and specific real world biologically applicable calculated results, Seversky, how are you still trying to push the talking point that CSI cannot be calculated and that design thinkers are not providing metrics of CSI?

Is that not grounds for deeming your talking point a patently slanderous, willful, deceptive misrepresentation maintained for two months in the teeth of easy access to corrective evidence?

Seversky, you have had ample opportunity to know that your claim is false, and patently so, so your propagation of a hostility-provoking falsehood in an already polarised context, is willfully misleading and incitatory.

STOP IT NOW, in the name of duties of care.

Onlookers, those who are so willfully poisoning the atmosphere in that way need to think again about what they are doing, and its consequences in the hands of those who are intoxicated on the rage they are stoking. Dawkins’ notorious attempt to characterise those who challenge his evolutionary materialism as ignorant, stupid, insane and/or wicked, is provably slanderous and incendiary, provoking the extremism I have had to now take police action over — notice the allusion that to bring children up in a Christian home and community is “child abuse” in the linked headline, Seversky, and ponder on the sort of dogs of war your side is letting loose.

FYI, Seversky, it is now DEMONSTRATED fact what the consequences of the sort of willful slanderous misrepresentation of design theory and design thinkers are.

So, you and your ilk have as duty of care to correct what you have done, and to work to defuse a dangerous situation.

Further irresponsible misconduct on your part — like I linked above — is inexcusable.

And BTW, you will see from discussion above, that Dr Liddle, a decent woman, is in basic agreement with me on the nature of the inference to design.

Good day, sir.

GEM of TKI

F/N:

Apparently TWT does not understand that making a mafioso style cyberstalking threat is not a private matter, even if communicated on the assumption that the intimidation will do its fell work in private. And, he has now compounded his crime by publishing the incorrect allusion to my wife’s name.

Worse, he has tried to further “justify” his dragging in, as hostages by implied threat, of people not connected to the issues and debates, on the grounds that someone at UD published an expose on the sock-puppet MG. An expose that the author deemed going too far, apologised for and has corrected.

And if TWT cared, he would have seen that I registered my objection to such outing, on learning of it the next day.

His further escalation is wholly unjustified and outrageous.

And BTW, the fact that the name given — and which I X’ed out in my own headline [can’t you even take a simple hint like that, TWT?] — is incorrect, is irrelevant to the highly material point that by your personal insistence on publicising my name and now my family connexions, you have publicly painted a target around me and my family.

I hope you are proud of yourself.

This is continued harassment in the teeth of a public warning to cease and desist (a requirement of some possibly relevant jurisdictions), and even in the teeth of a situation where some of the anti-evo crowd have stated warnings that this is going too far.

Indeed, it is a tripping of a nuclear threshold.

This is added to the dossier that will go to the authorities, as prior complaint.

It is quite plain that only police force, judiciously applied, will stop you in your mad path.

Good day sir.

GEM of TKI

PS: Onlookers, you will observe that I have not responded to the insults addressed to me. They do not deserve reply.

Well, I’ve read Dembski’s paper

http://www.designinference.com.....cation.pdf

yet again (printed it out, took it to bed with me!) and it seems to me, given that Dembski himself seems to prefer CSI as Design Detector, it’s worth unpacking!

And, thanks, Mung, for your endorsement of my post #137 (I still make a few East-for-West errors, but I think the principle is sound).

And I’m reassured, because, putting aside for now the null hypothesis issue (!), and just looking at the axes, it does seem clear that if we plot patterns on a 2D plot, in which one axis is some measure of “Complexity” (my East-West axis), which Dembski defines informally as “difficulty of reproducing the corresponding event by chance” (and so a long string of characters is going to be more complex than a short string, the number of potential characters being equal), and the other is “Pattern simplicity” (my North-South axis, which I called “Compressibility”), which Dembski defines informally as “easy description of pattern”, then it is clear that patterns he calls “specifications” are those that are high in both (my North-East corner):

So, bear with me while I think this through aloud (as it were):

It seems clear to me (again leaving aside any hypotheses) that specifications are going to be fairly rare – because in general, complexity and compressibility (“easy description of pattern”) are negatively correlated: Complex patterns (long strings of stuff with low probability of repetition i.e. drawn from a large set of possible patterns) tend not to have short descriptions, while patterns with short descriptions will tend to be drawn from a much smaller set of possible patterns.

That’s essentially what I was getting at in post #137.

And it occurs to me that actually, the complexity axis largely embraces stochastic (Chance) processes, while the shortest-description-length axis largely embraces “Necessity” (i.e. “Law-like”) processes. To explain: I made “sine waves” my poster child for short-description-length strings with low complexity, and in general these are generated by simple physical laws; similarly, I made “white noise” my poster child for very-long-description-length strings with high complexity, and these are generated by stochastic processes like radioactive decay.
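Those two poster children can actually be checked in a few lines of Python, using zlib compressed size as a rough stand-in for shortest-description length (true Kolmogorov complexity is uncomputable, so this is only an illustrative upper bound, and the strings are my toy constructions):

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Compressed size in bytes -- a crude stand-in for shortest-description
    length (an upper bound, since zlib is far from an optimal describer)."""
    return len(zlib.compress(data, 9))

random.seed(0)
periodic = b"0123456789" * 100                             # sine-wave-like: repetitive, law-like
noise = bytes(random.randrange(256) for _ in range(1000))  # white-noise-like: stochastic

# The repetitive pattern gets a far shorter description than the random one:
# the negative correlation between complexity and compressibility.
print(description_length(periodic), description_length(noise))
```

Same raw length in both cases, yet the periodic string compresses to a few dozen bytes while the noise barely compresses at all — which is the North-West:South-East trend in one example.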

So we’ve incorporated both concepts in our 2D matrix.

And, as I’ve said, these properties will tend to be negatively correlated. There will be very few patterns that have low complexity and long description, because even if the shortest description is the whole pattern, if that pattern isn’t very long, it will still have a pretty short description.

The interesting part is the other corner – patterns with high complexity (drawn from a large set of possible patterns) and relatively short shortest-descriptions.

Hence the negative correlation, of course – the density of patterns will be high along the North-West:South-East axis, but rarefied in the South-West corner and the North-East corner.

And the North-East corner is where the interesting stuff is.

Gotta go to the supermarket – are you guys with me so far?

Dr Liddle:

You may find this new post of interest.

GEM of TKI

PS: As noted earlier, the best way to visualise the CSI challenge is here, on a 3-d scale, as Abel et al have shown us.

Figure now here at UD

Absolutely. I think you make some really good points. The shortness of a sequence is itself a limiting factor, limiting the number of potential patterns or sets of patterns.

A bit string of length 4:

2^4 = 16

But within those 16 how many sets of patterns?

0000

1111

0101

1010
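The length-4 case is small enough to enumerate exhaustively; a quick Python sketch (the "patterned" test — repetition of a block of length 1 or 2 — is my guess at the grouping intended by the four examples above):

```python
from itertools import product

# All 2**4 = 16 bit strings of length 4
strings = ["".join(bits) for bits in product("01", repeat=4)]

def is_periodic(s: str) -> bool:
    """Strings that repeat a block of length 1 or 2
    (an assumed criterion, matching 0000, 1111, 0101, 1010)."""
    return s == s[0] * len(s) or s == s[:2] * (len(s) // 2)

patterned = [s for s in strings if is_periodic(s)]
print(len(strings), patterned)  # 16 ['0000', '0101', '1010', '1111']
```

So only 4 of the 16 strings are "patterned" in this sense — and with such a short string even the other 12 still have short descriptions, which is the limiting factor mentioned above.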

Very nice kf!

Yes, we really do seem to be on the same page here.

And yes, agreed, Mung.

Shall we take this to kf’s new thread?

See you there!

Dr Liddle:

the image thread is locked, it was just meant to hold an image.

The other thread is on CSI as a numerical metric.

GEM of TKI

OK, let’s carry on here then!

Obviously, I like the X and Y axes on that plot, but I’m not sure I would have placed the FSC spike quite where they do (I do realise it’s just a diagram).

What they seem to be suggesting is that there is a tight negative function that relates complexity to compressibility and that “FSC” is found within a particular range of complexity values (towards the upper end of the range).

I’d have thought the relationship was much looser, and that if you plotted actual observed patterns on that matrix, you’d find a broadly negative correlation, but with some outliers. And I’m also suggesting, that, if I am reading Dembski aright, the Specified Complexity axis runs orthogonal to the negative best fit line, rather than being a privileged segment of that line.

Still, that’s sort of a quibble for now.

The really important question is: under the null hypothesis of “no design” (that’s my null, now!), what distribution of patterns would we expect to see? In other words, what sorts of processes might generate patterns that would fall in the four quadrants of that matrix?

Contingency is important of course, and my hunch right now is that what determines how the bottom left and top right quadrants are populated is how deeply nested the contingencies are.

Be back later with more thoughts….

Hope you are feeling a little more at ease this evening!

We had a glorious evening last night on our boat – went a couple of miles up the river, and barbecued some chicken on the bank. It was the kind of “perfect English summer evening” that happens maybe once or twice per English summer!

If we log-transform their axes so that their curve becomes a straight line (just so it’s easier to envisage), what we would seem to be looking for, according to Dembski, is some data points that buck the trend, as it were, and display more compressibility than their complexity would lead us to expect.

Dr Liddle

The onward linked paper explains in detail, in a peer reviewed document.

Essentially, functionally specific information will not be very periodic, as a rule, but will also have some redundancy and correlations so it is not going to have the patterns of flat random configs.

The peak they show has to do with how the range of specifically functional configs will be narrow relative to a space of possibilities. In short, do too much perturbation and function vanishes.

They speak a lot about algorithmic function, but the same extends to things that are structurally functional [wiring diagrams and the like] etc.
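The "perturb it and function vanishes" point can be illustrated with a toy in Python — here "function" just means "still evaluates as valid arithmetic", which is my own stand-in, not anything from the Abel or Durston papers:

```python
import random

random.seed(2)
FUNCTIONAL = "(1+2)*(3+4)"   # toy 'functional' string: a valid arithmetic expression

def works(s: str) -> bool:
    """Toy function test: does the string still evaluate as arithmetic?"""
    try:
        eval(s, {"__builtins__": {}})
        return True
    except Exception:
        return False

def perturb(s: str, k: int) -> str:
    """Randomly overwrite k positions with characters from a small alphabet."""
    chars = list(s)
    for i in random.sample(range(len(chars)), k):
        chars[i] = random.choice("0123456789+() ")
    return "".join(chars)

# Fraction of 1000 perturbed copies that still 'function', per perturbation size:
results = {k: sum(works(perturb(FUNCTIONAL, k)) for _ in range(1000)) / 1000
           for k in (1, 4, 8)}
print(results)
```

The surviving fraction falls off steeply as the number of perturbed positions rises — the functional configs form a narrow zone relative to the space of same-length strings.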

Here is a suggestion. Make an empty word doc, then use notepad or the like to inspect it, at raw ASCII symbol level.

Tell us what you see.

Finally, this is not a map of a mapping of function to a config space, but of how metrics of types of sequence complexity would correlate to where we are dealing with function. The paper gives details. Config spaces are going to be topologically monstrous if we try to visualise them as anything beyond a 2-d map or maybe a 3-d one.

Remember, we are dealing with cut-down phase spaces here. Think of islands sticking out of a vast ocean. The issue is to get to the islands that are atypical of the bulk of the space, without intelligent guidance.

Note the 2007 Durston et al paper is giving numerical values of FSC, based on the H-metric of functional vs ground states.

I extended this to apply resulting H values for protein strings, to a reduced form of the Dembski Chi metric.

GEM of TKI

Elizabeth Liddle:

kairosfocus:

IOW, it’s not so much about what can generate the pattern as it is about what can reasonably find the pattern.

Well, interesting point, Mung, but let’s be careful not to conflate two separate issues.

On the one hand we are looking for the distribution of patterns, produced by non-intelligent processes (however we want to define that), along those two axes: complexity and shortest-description length.

That gives us our null distribution.

Now, those non-intelligent processes will, I assume, include search algorithms such as blind searches and evolutionary algorithms, right?

And the expectation is that evolutionary algorithms can’t produce patterns that will populate the top right hand corner of the page, but that known intelligently produced patterns (e.g. The Complete Works Of Shakespeare) will, right?
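For concreteness, here is the sort of evolutionary algorithm presumably at issue — a minimal weasel-style sketch in Python (Dawkins’ toy example, my implementation; note that the target string is wired straight into the fitness function, which is the "oracle" question):

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins' toy target, used illustratively
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    """Characters matching the target -- the target (the 'oracle')
    is built directly into the fitness function."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET and generations < 10_000:
    # 100 mutated offspring plus the parent (elitism); keep the fittest.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
    generations += 1
```

Cumulative selection finds the target in a few dozen generations, where a blind search over the same space would be hopeless — which is precisely why the status of the fitness function matters to both sides of this discussion.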

(Will pause here for response….)

Dr Liddle:

The issue is that function to reward through evolutionary algors is rare in and unrepresentative of relevant spaces. That is the context for that sharp little peak you just saw in the curve.

So, without providing an oracle, you have to get TO such isolated islands. Starting inside such an island is already begging the biggest questions, and the ones most directly on the table.

GEM of TKI

A source of information.

And how do we find the right oracle for the particular search?

Well, maybe we can consult another oracle.

It’s a mystery wrapped in a riddle inside an enigma!

Would you agree, that if the universe stumbled upon an evolutionary algorithm, it did so without using an evolutionary algorithm?

How did it get so lucky? What sort of search did it use? Are evolutionary algorithms just widely spread throughout the search space? Having found one, how was it put to use?

So no. Evolutionary algorithms require information to function. I’m not willing to cede information until we get life.
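The information a search needs can be put in rough numbers; a quick back-of-envelope in Python, using the 28-character, 27-letter weasel space purely as a stock example (my choice of example, not a biological figure):

```python
from math import log2

# Configuration space for a 28-character string over a 27-symbol
# alphabet (26 capital letters plus space).
alphabet_size, length = 27, 28
configs = alphabet_size ** length          # ~1.2e40 distinct strings

# Bits needed to specify one particular string in that space:
bits = length * log2(alphabet_size)
print(configs, round(bits, 1))
```

That is about 133 bits to pick out one target, and a blind search expects on the order of `configs` trials to stumble on it — which is the scale of the question being asked about the universe finding an evolutionary algorithm in the first place.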

That’s interesting, Mung.

So is it your view (it is mine, actually!) that an evolutionary algorithm is, in a sense, an intelligent algorithm?

In fact, this is the core of my criticism of Intelligent Design – not that certain patterns found in nature don’t indicate something that is also common to human-designed patterns, but that the common denominator is evolutionary-type search algorithms, not what we normally refer to as “intelligence”, which normally implies “intention”.

I do think that evolutionary algorithms can produce complex, specified information.

I also think it is fairly easy to demonstrate that they occur “naturally”.

The challenge for evolutionary theory, in the face of the ID challenge, is not, I suggest, to demonstrate that evolutionary algorithms can produce complex specified information, but to demonstrate that they can account for phenomena like the ribosome, which seems to be required for biological-evolution-as-we-know-it to take place.

Or, indeed, for the emergence of evolutionary algorithms in the first place, which takes us, finally, back to Upright BiPed’s challenge, to which I should return!

But this has been useful, and I don’t think we are quite done, yet

I have great difficulty in thinking of any algorithm as intelligent, in any meaningful sense of the word.

Well, let’s build one and find out!

I find it hard to believe that an algorithm can produce information, but I’m up for investigating the matter.

I think they exist in nature. That they occur naturally is a rather loaded way to put it.

Dr Liddle:

Observed evolutionary algorithms, like all other algorithms, are artifacts of intelligent design.

They start within complex islands of function, and they work within such islands of function.

GEM of TKI

I’m away for a couple of days.

See you guys when I get back