Jason Rosenhouse at the Pandasthumb

Jason Rosenhouse has been writing a series of reports on the 2005 Creation Mega Conference that took place last week. I refer readers specifically to his latest report (#5) on the talk of Werner Gitt, author of In the Beginning Was Information. In reading that book, I had many of the same concerns about Gitt’s concept of information that Rosenhouse raised in response to Gitt’s talk. As for Rosenhouse’s dismissal of my work on information, I refer readers to the following two articles on my DesignInference website:

Searching Large Spaces: Displacement and the No Free Lunch Paradox
Specification: The Pattern That Signifies Intelligence

I’ve stressed that, as part of a planned monograph on the mathematical foundations of intelligent design, these papers fill in the technical details of my previous work. The reaction to them at the Pandasthumb has been anemic (see here and here, for instance).


4 Responses to Jason Rosenhouse at the Pandasthumb

  1. I would hope Rosenhouse will post a written critique of your work somewhere. He has mentioned some thoughts on your work in passing when attending my Intelligent Design talks at his university, James Madison.

    He offers comments before the IDEA students when he attends my ID talks. Though I did not think his objections were sufficient to refute your work, I thought they were the kind that should be considered by serious students of the design hypothesis. Regrettably, since these were talk and discussion sessions, I did not have time to write down what he was saying. I seem to recall, however, that it was along the lines of what he said at Panda’s Thumb:

    “Establishing that information is complex in his sense requires that we carry out probability calculations that in any practical situation can not be carried out….”

    I hope he will be kind enough to post his objections again in a more formal form. Since he is a mathematician, he strikes me as having a decent grasp of your work.

    In answer to Rosenhouse’s question about how we measure information in the genome, I say: whatever researchers consider operationally effective. There are actually several measures of information used by those studying bioinformatics and biology. One is the traditional combinatorial approach of measuring base pairs; another is the measure of functional sequences against all possible combinations. Both are used as metrics, and both can be used to measure CSI, depending on whether the context makes the metric useful.
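
    The two metrics mentioned above might be sketched as follows. This is only an illustration of the arithmetic; the 300-bp length and the 1-in-10^20 functional fraction are hypothetical numbers chosen for the example, not figures from the literature:

```python
import math

def combinatorial_info_bits(sequence_length, alphabet_size=4):
    """Naive combinatorial measure: bits needed to pick out one
    sequence from all possible sequences of this length.
    For DNA (4 bases) this is 2 bits per base pair."""
    return sequence_length * math.log2(alphabet_size)

def functional_info_bits(n_functional, n_total):
    """Functional-sequence measure: -log2 of the fraction of all
    possible sequences that actually perform the function."""
    return -math.log2(n_functional / n_total)

# A 300-bp coding region, taken at face value:
print(combinatorial_info_bits(300))     # 600.0 bits

# Hypothetical: suppose 1 in 10^20 random sequences of this
# length folds into a working protein.
print(functional_info_bits(1, 10**20))  # ~66.4 bits
```

    Note how much the two measures can disagree for the same gene, which is why the context must decide which metric is the useful one.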

    But to use a metric that is “operationally effective,” one is relying on a correspondence between man-defined specifications and the physical information of biotic reality. This invites a measure of CSI based on the CSI found in analogous man-made objects, and it begins a better answer to Rosenhouse’s very probing question, which Gitt fumbled but which your work can clarify.

    For example, the specification of a Turing machine requires a certain minimum number of bits, independent of the physical substrate that implements it. If a Turing machine is discovered in biotic reality, we have a measure of the minimum amount of information in that machine, based on an independently given specification. The same is true of the interface-compatibility specifications for the proteins in the bacterial flagellum.

    Another, less obvious, candidate measure of CSI is molecular convergence at DNA sites subject to weak or neutral selection. Simple combinatorics can be applied in that case.
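
    As a toy illustration of that combinatorics (my own sketch, not a model from the literature), assume every single-base substitution in a region is equally likely; then the chance that independent lineages fix the identical neutral change can be counted directly:

```python
def p_same_point_mutation(seq_len, lineages=2):
    """A single point mutation in a sequence of length L can hit any
    of L sites and change the base to any of 3 alternatives, giving
    3*L equally likely outcomes (uniform-mutation assumption).
    The chance that k independent lineages all fix the *same* one
    by chance alone is (1 / (3*L)) ** (k - 1)."""
    outcomes = 3 * seq_len
    return (1 / outcomes) ** (lineages - 1)

# Illustrative: two lineages independently fixing the identical
# neutral substitution in a 1,000-bp region.
print(p_same_point_mutation(1000))              # 1/3000
print(p_same_point_mutation(1000, lineages=3))  # (1/3000)**2
```

    Real mutation rates are not uniform across sites, so this only bounds the order of magnitude; but it shows why repeated convergence at neutral sites is the sort of event the combinatorial measure can get a grip on.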

    The incident Rosenhouse refers to with the little boy has a glimmer of truth; Rosenhouse boasted, “I suspect 5 years from now her son is going to rebel hard.” The AiG/ICR folk make their appeals within insulated environments; they are not accustomed to having to defend their claims before audiences that are both educated and hostile.

    IDists, in contrast, engage the critics frequently in such public arenas. They have at least published articles in peer-reviewed journals and had their books published by respected secular houses like Oxford University Press and The Philosophical Library.

    It is partly for that reason that campus Christian organizations like Campus Crusade for Christ and InterVarsity Christian Fellowship have sided with ID and effectively dissed AiG/ICR in matters of origins. These campus Evangelical organizations recognize that the design arguments made by the IDists (independent of religious texts), rather than AiG’s theology, have the kind of force that will make an impact in the tougher intellectual environments science students at secular universities confront.

    AiG/ICR effectively demands blind acceptance of its claims and is quick to vilify and demean anyone, even Evangelicals, who does not agree. They turn off even the intellectual Evangelicals on the campuses; thus it is understandable that Campus Crusade for Christ and InterVarsity Christian Fellowship have embraced ID and dissed AiG/ICR.

    Of note (and Rosenhouse can correct me if I misinterpret his position): I don’t recall him ever appealing to Elsberry or Shallit’s critiques of your work. Perhaps it’s because he (being a bright guy) knows better than to appeal to their flawed arguments. He can of course see what I had to say on the matter at:
    http://www.iscid.org/ubb/ultim.....6;t=000543
    and is invited to defend their critique against my objections. Better yet, I invite Rosenhouse to side with me and point out Elsberry and Shallit’s rather uncharitable rendering of your work.

    I’m afraid, however, that would be too much to ask, for I would be asking him to turn on his comrades. The probing question I would pose to Jason is this: “How can you justify the fact that Elsberry and Shallit made no mention of the central definition of CSI on page 141 of No Free Lunch? After all, if one is going to critique Dembski’s CSI, doesn’t it make sense to refer to the definition of CSI?”

    He also knows I don’t shy away from posting on his weblog or Panda’s Thumb, nor do I shy away from inviting him to make appearances at my ID talks and to join discussions before the students. I have the confidence to do so, Bill, because your writings and those of your colleagues have equipped us IDists in the trenches to engage very capable defenders of evolutionary theory like Rosenhouse.

    Rosenhouse is more than welcome to host public discussions with me at his university, on his weblog, or at the KCFS forum. Further, at my ID meetings I’m quite happy to direct listeners to his weblog, as I am confident that, despite his very well-argued positions, educated IDists will see the superiority of the design arguments. I’m pleased to say that, unlike the little boy he alluded to, the IDists at his university are not rebelling, nor do I anticipate that they will; they will remain IDists, graduating with science degrees from his university as several did in the recent class of 2005. Some of them made an appearance at the now infamous Smithsonian event.

    I do, however, have high respect for Rosenhouse’s intellect (the guy is brilliant; one can sense it just by talking to him). And though Rosenhouse is a math professor, he seems to argue Darwinian theory more capably and passionately than several of the biology professors at his university.

  2. After reading the paper on specification again, several thoughts occurred to me when applying it to RM+NS in protein synthesis.

    There appear to be three factors that must be identified in making a design inference:

    1) specification
    2) complexity
    3) probabilistic resources

    Specification seems to be the easy one. If it’s a coding gene for a well-characterized, life-critical protein, it’s specified: natural selection was the specifier. This can be determined empirically by modern biology on living tissue, and there’s an ample list of sequenced and well-understood coding genes in the literature.

    Complexity is harder. The measure of complexity, to me, is compressibility. I searched the literature six months or so ago for work on compressing genomic sequences. There isn’t much, but there’s some. After accounting for the obvious (like polyploidy in some genomes), it appears that coding genes are incompressible by any method so far devised. That doesn’t mean that no compression is possible by any means. For instance, once we can predict how any arbitrary amino acid sequence will fold and we crunch through all the coding genes, we might find that they’re constructed from a dictionary of folds. The compression ratio achieved could in principle be quite high. Complexity in a gene is thus falsifiable: find a way to substantially compress its information content. There’s plenty of good science being done in protein engineering on fold prediction, since this has vast practical application. Probably not enough work is being done on genome compression; finding heretofore unknown compressible patterns might lead to discoveries of great import.
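
    A rough version of that compressibility test can be run with a general-purpose compressor like zlib. The synthetic sequences and the 2-bits-per-base baseline below are my own choices for illustration; real genomic data would of course behave differently:

```python
import random
import zlib

def compression_ratio(seq: str) -> float:
    """Ratio of zlib-compressed size to a 2-bits-per-base packed
    baseline. Values near (or above) 1.0 mean the compressor found
    no structure beyond the 4-letter alphabet."""
    raw_bits = 2 * len(seq)  # packed DNA baseline: 2 bits per base
    compressed_bits = 8 * len(zlib.compress(seq.encode(), 9))
    return compressed_bits / raw_bits

random.seed(0)
random_dna = "".join(random.choice("ACGT") for _ in range(10_000))
repetitive_dna = "ACGT" * 2_500

print(compression_ratio(random_dna))      # near 1: no structure found
print(compression_ratio(repetitive_dna))  # far below 1: highly compressible
```

    A general-purpose compressor can only find patterns it was coded to look for, which is exactly the caveat raised below: failure to compress shows the absence of *known* kinds of structure, not the absence of all structure.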

    Probabilistic resources seem in principle to be reasonably boundable in selected cases. However, mechanisms like the mutation floodgate recently described by Scripps scientists can potentially alter the picture. Probabilistic resources seem the most difficult to quantify. There’s a lot of investigation into mutation events by kind, frequency, and consequence, so the science is being done. Horizontal gene transfer could also bear heavily on probabilistic resources. Lynn Margulis is a well-respected proponent of interspecies sharing of genomic resources and also says the RM+NS paradigm is a lame duck because of it.

    I’d be interested in seeing a design detection study done on the point mutation responsible for sickle cell anemia. The unsurprising result I would think is that it was very likely due to chance. The parameterization of the problem would be interesting and enlightening in any case.

    The gotcha is having a compressor that knows the language (if any) employed. About six months ago I searched the literature looking for compression algorithms employed on genome databases. There’s some reference to it, but not a lot. The results are that genomes are largely incompressible by any well-known and/or modified algorithms. The problem is that there might be hidden patterns the compression algorithms as coded cannot find. For instance, once we can computationally predict how any arbitrary amino acid string will fold, we might find that all the specified proteins in nature are constructed out of a “dictionary” of folds that compresses at a high ratio. For the nonce, complexity looks real enough to me, and the notion of complexity can in principle be falsified by successful high-level compression outside the chance rejection zone.

    Probabilistic resources are probably the hardest.

  3. Oops. The last paragraph was a rewrite of the first compressibility paragraph. I forgot to pick just one.

    I hate when I do that! :-)

  4. Rosenhouse brings up a good point which I am confronted with when discussing and defending ID literature, and that is the difficulty of calculating probabilities.

    I have said that, in the absence of a calculation, measurements of improbability can still be made and tentative conclusions drawn. My ability to recognize music is rooted in the ability of my ears and audio-processing system to measure improbability. Formal calculations of improbability are sufficient but not necessary to establish complexity, imho. I do not think establishing complexity need be as rigorous as Rosenhouse might demand for a design hypothesis to be put forward.
