Dembski speaking at Texas A&M Thursday, November 12, 2009, 7:00pm

I’ll be speaking on the Texas A&M campus at Rudder Theater from 7:00 to 9:00pm tomorrow (Thursday), November 12, 2009. For the flyer describing my talk, click here:

www.uncommondescent.com/images/flyer

36 Responses to Dembski speaking at Texas A&M Thursday, November 12, 2009, 7:00pm

  1. Will you talk about the powerful results – as you put it in your first interview with Casey Luskin – of your upcoming article “The Search for a Search”?

  2. What happened to that article?

  3. What happened to that article?

    The article you are asking for is still available at the Design Inference website. Robert Marks may have had problems with DiEb’s comments.

  4. osteonectin[3],
    Thanks, but I get an error message when I click on the link.

  5. The abstract of the article is still on the site of the Evolutionary Informatics Lab, but the link to the draft given there leads to the previous article, Conservation of Information in Search. So, that’s a dead end, too.

  6. Strange. I thought it had been accepted for publication. I did notify the authors of a significant simplification of one of their proofs, so perhaps they are rewriting it? Mysterious. Does anybody know anything?

  7. Please, please, please make this talk available on the internet. We NEED to have this stuff, Bill… pay the $$$ and get it online. Please :-)

  8. Yes, it would be nice to get these things on audio.

  9. @P. Olofsson,
    A simplification of one of their proofs? May I ask which one? IMO, chapter IV should be headlined “A Guess for a Good Guess”: R. Marks and W. Dembski are modeling only guesses, not searches, and so none of their statements is relevant to searches. And personally I don’t see a way to fix this easily: at the least, you have to take a recurrent search instead of the uniform probability as (what they call) a baseline for the effectiveness of blind search. Then, you have to do something about the space Ω^m…

  10. @Dr. Dembski,

    do you think that there is any valid point in the article which allegedly infringes your copyright? If not, wouldn’t the easiest way to get rid of the article be simply to say so?

  11. DiEb[9],
    The proof of “conservation of uniformity” for finite Omega. It is pretty much trivial.

  12. @P. Olofsson

    I don’t think that Theorem IV, Conservation of Uniformity, works for anything but finite Ω: the integrals involved aren’t properly defined…

  13. DiEb[12],
    Indeed, and it’s worse, since M(Omega), the space of probability measures on Omega, is not even a vector space (not additive, no 0 element, etc.). So the proof for finite Omega is elementary, and there is no proof for general Omega. Is this article really accepted for publication???

  14. @P. Olofsson
    I really don’t know. In his interview with Casey Luskin, W. Dembski indicated that there were only some formal matters left. I don’t know what the status of the article is at the moment; perhaps R. Marks or W. Dembski could enlighten us…

  15. DiEb and Prof_P.Olofsson,

    Regarding the Horizontal NFLT, can either of you explain the logic by which the Kullback-Leibler distance indicates that one distribution performs better than another on average? Since the Kullback-Leibler distance is positive whenever two distributions differ, and “active entropy” is therefore always negative, wouldn’t this imply (absurdly) that every distribution is worse than every other distribution?
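    For concreteness, the inequality I have in mind is the standard Gibbs/Jensen bound, which makes the “active entropy” nonpositive in both directions at once:

    \[
    -D_{\mathrm{KL}}(\varphi \,\|\, \psi)
      \;=\; \sum_i \varphi(T_i)\,\log\frac{\psi(T_i)}{\varphi(T_i)}
      \;\le\; \log \sum_i \psi(T_i) \;=\; \log 1 \;=\; 0,
    \]

    with equality only when φ = ψ on the partition, and symmetrically for −D_KL(ψ‖φ).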

    Also, can either of you explain why any distribution would be better or worse than any other distribution on average? Why would they not all be the same, when averaged over all possible targets?

    I’m missing something here. Thanks in advance if you’re able to find the time to help me out on this.

  16. @R0b:

    IMO:
    1: all distributions do equally well for a guess – so this Kullback-Leibler-based entropy is a bit of a distraction
    2: you are right, they are all the same when averaged over all possible targets (a one-line check follows below)
    3: you are not missing anything, you’re spot on
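    The one-line check of point 2, for finite Ω and singleton targets: averaged uniformly over all targets t ∈ Ω, the probability of success is

    \[
    \frac{1}{|\Omega|} \sum_{t \in \Omega} \varphi(\{t\}) \;=\; \frac{1}{|\Omega|},
    \]

    the same value for every probability distribution φ, since the φ({t}) sum to 1.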

  17. Thanks, DiEb.

  18. I’d love to get some input from R. Marks and W. Dembski on this!

  19. Rob and DiEb,
    The Kullback-Leibler distance doesn’t say anything about search performance. It just says that different searches are different. You are right that for any two different searches S and T, you get both H(S|T)<0 and H(T|S)<0.
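    A quick numerical illustration in Python – the two distributions are arbitrary choices for the sketch, not anything from the paper:

      import math

      # Two arbitrary, distinct search distributions over a 4-block partition.
      S = [0.40, 0.30, 0.20, 0.10]
      T = [0.25, 0.25, 0.25, 0.25]

      def kl(a, b):
          # Kullback-Leibler distance D(a || b), in bits.
          return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)

      # Both directed distances are positive, so H(S|T) = -D(S||T) and
      # H(T|S) = -D(T||S) are both negative: each search comes out
      # "worse" than the other, which tells us nothing about performance.
      print(-kl(S, T), -kl(T, S))  # prints two negative numbers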

  20. P. Olofsson,
    I try to avoid the term search in this context: there is no meaningful partition of the search space Ω^m for a non-trivial search. And such a partition is needed for the calculation of the Kullback-Leibler distance…

  21. Thanks, Prof. Olofsson. So would you say that the logic in the following sentence is invalid?

    Because the active entropy is strictly negative unless the two probability measures φ and ψ agree on the partition T̃, any assisted search (φ) will on average perform worse than the baseline search probability (U).

  22. R0b[21],
    I think so. We get H(φ|U)<0 and H(U|φ)<0. What are we supposed to conclude from that?

  23. DiEb[20],
    Perhaps. I just look at the formalism of the theorem.
    About my comment [12], it is however the case that the space of measures (not restricted to probability measures) is a vector space. I don’t know exactly how much structure is needed for the vector-valued integrals to be well-defined. Banach space?
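    One standard fact bears on this: the finite signed measures on Ω do form a Banach space under the total variation norm,

    \[
    \|\mu\|_{TV} \;=\; |\mu|(\Omega),
    \]

    so completeness, at least, is available; whether the paper’s measure-valued integrals are meant in that norm or in some weak sense is a separate question.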

  24. My &phi didn’t come out right.

  25. It looked fine in the preview. Weird.

  26. For all of you who are interested in the actual topic of this post: the lecture went very well.

  27. Below is an exchange with Peter Oloffson. Begin at the bottom, where he repeats his criticism from comment #13. Note that these spaces of probability measures form compact metric spaces, so this is even better than the completeness of Banach spaces:

    -----Original Message-----
    From: “Olofsson, Peter”
    To: “William A. Dembski” , “Marks, Robert J.”
    Date: Fri, 20 Nov 2009 11:23:21 -0600
    Subject: RE: Vector-valued integration.

    Hi Bill and Bob,

    Yeah, you’re right, spaces of measures are vector spaces, and if that’s all that’s needed, the integrals are well-defined. I spoke too quickly. The critique on RationalWiki claims that more is needed, such as a Banach space, which I don’t know if you get when you generate a norm from the K-W metric, but I’m no expert and I’m sure you’ve thought it through. I see some problems with interpretations and applicability of the results regarding ID/evolution, but I’m not spending a whole lot of time on it. I checked out your old JTB paper and I think it’s somewhat counterintuitive that a “uniform” distribution on a countable set can be a unit point mass at one of the points. Certainly, that’s what you get from your definition, but such a uniform distribution can hardly be equated with “choosing randomly.” Anyways, it’s interesting stuff.

    Best of luck with the revision!

    Peter

    PS. I might attend the Conference of Texas Statisticians at Baylor in the spring. I’ll try to get hold of y’all again.

    -----Original Message-----
    From: William A. Dembski
    Sent: Friday, November 20, 2009 8:30 AM
    To: Marks, Robert J.; Olofsson, Peter
    Subject: RE: Vector-valued integration.

    Hi Peter,

    The space of probability measures, as a space of all convex linear combinations, is embedded in the space of all measures on the underlying compact metric space(s). And spaces of measures do form a vector space. Moreover, those spaces of probability measures are themselves compact. I see no problem with convergence and no loss of generality. You're welcome to write up your objections in detail, but what you say below cuts no ice with me.

    --Bill

    -----Original Message-----
    From: Olofsson, Peter [mailto:[email protected]]
    Sent: Wednesday, November 18, 2009 1:41 PM
    To: Marks, Robert J.
    Cc: [email protected]
    Subject: Vector-valued integration.

    Hi Bill and Bob,

    I’m sorry you didn’t want to get together in San Antonio. I hope the conference was good. I looked at your “search for a search” paper again, and I don’t think the integrals in the “conservation of uniformity” theorem are defined. The space of probability measures is obviously not a vector space, so I don’t see how any of the references you give would have results that apply (although I haven’t looked them up). The result is still valid for finite Omega, in which case there is an elementary proof that avoids combinatorics and weak convergence. I found problems with the “horizontal no free lunch” theorem as well. It doesn’t seem to say anything about search *performance* but only measures a distance in the sense of Kullback-Leibler.

    Cheers,
    Peter

    PS. I don’t have anything to do with RationalWiki, but it seems there is some valid criticism there as well.
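    For the record, the compactness fact invoked above is the standard one: by the Riesz representation theorem, P(Ω) sits inside the unit ball of C(Ω)*, which is weak-* compact by Banach–Alaoglu, and is metrizable because C(Ω) is separable when Ω is compact metric. In short,

    \[
    \Omega \text{ compact metric} \;\Longrightarrow\; \mathcal{P}(\Omega) \text{ weak-* compact and metrizable.}
    \]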

  28. Dear Dr. W. Dembski,

    you are right about the vector-valued integration – and therefore, I was wrong: I saw a problem where none was. My apologies to you, and to P. Olofsson, whom I may have led astray with my previous comment.

    Could you also address the concerns about the Horizontal NFLT?

  29. Do you often post private email conversations without consent? Strange behavior. At least spell my name right.

  30. Clive[26],
    Very well. Sorry for not staying on topic, but there hasn’t been any discussion here about the talk.

  31. Dear Dr. W. Dembski,

    your formulation of the more general version of Theorem IV, Conservation of Uniformity, unintentionally doesn’t exclude the case k = 0. That’s where my problem came from: Ω = M^0 can be a compact metric space without being a vector space, while you have a natural vector space structure on the measure spaces containing M^k, k > 0…
    I don’t think that Prof. P. Olofsson would have objected to the publication of your email exchange, but your unilateral act seems a little odd given your concerns about copyright law.

  32. Peter O.: You publicly charged me with making a very elementary error in my field of expertise, probability/measure theory. I then pointed the error out to you in correspondence. You had the opportunity to correct it here on this blog, where you made the charge. You didn’t. Also, the correspondence in question was copied to my colleague Robert Marks. So no, I don’t make a habit of posting private emails. And yes, I make exceptions when I perceive that people aren’t playing straight with me.

    I’ll get your last name right in the future.

  33. Bill[30],
    I did correct it, in my comment 23 (which was supposed to refer to 13, not 12). Since the discussion about vector-valued integration is highly technical, and only DiEb and I were involved, I didn’t elaborate.

    I certainly did not mean to discredit your expertise in probability/measure theory. If you feel that I did, I sincerely apologize.

    I’m still not clear on the mathematical technicalities. When you introduce the measure-valued integral, its existence is due either to (1) directly applying a theorem from one of the references in your article, or (2) mimicking the construction of the integral in a different setting. Perhaps it’s trivial.

    Actually, I’ve seen a lot of different spellings of my last name and I accept most of them. :)

  34. Dear Peter,

    You’re very kind. Thanks also for your private note to me. As the constant target of criticism, I am sometimes overly sensitive. In any case, I missed your comment #23. But even if I hadn’t, I should first have contacted you privately about posting a correction before posting our correspondence. I apologize.

    And as you pointed out, in posting our correspondence I left out that I had invited you to lunch. The offer stands if you’re ever passing through Waco.

  35. Bill[34],
    Thanks for your note. WFL sounds good.

  36. Dr. Dembski,

    I’m wondering if you can comment on Prof. Olofsson’s final point in his email, namely:

    I found problems with the “horizontal no free lunch” theorem as well. It doesn’t seem to say anything about search *performance* but only measures a distance in the sense of Kullback-Leibler.

    Indeed, I’m having a hard time seeing the significance of your definition of active entropy, other than it being the negative relative entropy. If the intent is to average the active information over a set of potential targets, then I don’t see why it’s weighted by ψ(Ti), as ψ(Ti) is not the probability of Ti being a target.

    Also, since the active entropy of φ with respect to ψ is always negative, it can’t tell us whether φ or ψ is the better-performing distribution. Furthermore, it seems that all distributions should perform equally when averaged over all possible targets.

    I would sincerely appreciate your help in clearing up my confusion on these issues.
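    To pin down what I mean, here is a small Python sketch; the ψ(Ti)-weighting is my reading of the paper’s definition of active entropy, so treat it as an assumption:

      import math

      # Two distinct distributions over potential targets T_1..T_4
      # (arbitrary illustrative numbers, not taken from the paper).
      phi = [0.40, 0.30, 0.20, 0.10]  # "assisted search"
      psi = [0.25, 0.25, 0.25, 0.25]  # "baseline"

      def active_entropy(a, b):
          # Active information log2(a_i / b_i) averaged with weights b_i;
          # algebraically this equals -D_KL(b || a), hence <= 0 whenever a != b.
          return sum(bi * math.log2(ai / bi) for ai, bi in zip(a, b))

      # Negative in both directions, so it cannot say which of phi and psi
      # is the better-performing distribution on any actual target.
      print(active_entropy(phi, psi))  # -D(psi || phi) < 0
      print(active_entropy(psi, phi))  # -D(phi || psi) < 0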
