
Jeff Shallit — leveling the charge of incompetence incompetently

Jeff Shallit charges Jonathan Wells with incompetence for claiming that duplicating a gene does not increase the available genetic information. To justify this charge, Shallit notes that a symbol string X has strictly less Kolmogorov information than the symbol string XX. Shallit, as a computational number theorist, seems stuck on a single definition of information. Fine, Kolmogorov’s theory implies that duplication leads to a (slight) increase in information. But there are lots and lots of other definitions of information out there. There’s Fisher information. There’s Shannon information. There’s Jack Szostak’s functional information.
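As a rough illustration of the Kolmogorov point, one can use a general-purpose compressor as a crude stand-in for Kolmogorov complexity (which is itself uncomputable). The string, byte counts, and choice of compressor below are illustrative assumptions of mine, not anything drawn from Shallit's argument:

```python
import zlib

# Crude proxy: compressed size stands in for Kolmogorov complexity.
# The "gene" below is a made-up repetitive byte string, 256 bytes long.
x = b"ACGTTGCAAGGCTTACGATCGATCGGATTACA" * 8
xx = x + x  # "duplication": the string concatenated with itself

kx = len(zlib.compress(x))
kxx = len(zlib.compress(xx))

# Doubling the string adds far fewer bytes of compressed size than len(x),
# mirroring the claim that K(XX) exceeds K(X) only slightly.
print(kx, kxx, kxx - kx)
```

On this proxy, the duplicate adds only a handful of bytes, which is the sense in which duplication yields a "(slight) increase" under the algorithmic measure.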

Information, when quantified, typically takes the form of a complexity measure. Seth Lloyd has catalogued numerous different types of complexity measures used by mathematicians, engineers, and scientists. Here are a few that he shared with John Horgan:

Complexity Measures

Note that one of the forms of information on this list is “algorithmic information content,” which is the one that Shallit attributes to Jonathan Wells. But with so many information/complexity measures floating around, why in the world does Shallit think that this is the one that Wells intended? A charitable interpretation of Wells’s remarks would suggest that he was thinking of nothing more complicated than this: duplicating X to get XX means that the probability of X given X is 1, implying that all the uncertainty about X has been removed, implying in turn that the duplicate’s information is zero, since information can, in one incarnation, be taken as a measure of uncertainty.
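The uncertainty reading can be sketched in a few lines. The probabilities below are hypothetical numbers chosen purely for illustration:

```python
import math

def surprisal(p: float) -> float:
    """Shannon self-information (surprisal), in bits, of an event with probability p."""
    return -math.log2(p)

# A fresh string X drawn uniformly from 2^100 equally likely strings
# carries about 100 bits of surprisal (hypothetical numbers).
novel_bits = surprisal(2 ** -100)

# But once X is already in hand, P(X | X) = 1, so a duplicate copy
# removes no remaining uncertainty: zero bits of new information.
duplicate_bits = surprisal(1.0)

print(novel_bits, duplicate_bits)
```

On this reading, XX tells you nothing that X alone did not, which is the sense in which duplication adds no information.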

The latter type of information is the one I used in my book INTELLIGENT DESIGN: THE BRIDGE BETWEEN SCIENCE AND THEOLOGY, and it’s likely that Wells got it from me since I consider that very problem of duplication in it. So Shallit should probably be going after me. But if he were to do that, he should also go after my latest work on information theory, which can readily be found at www.evoinfo.org. Why doesn’t he? Several years back Shallit actually tracked down my home phone number and called me, urging that I ramp up my popular information theory arguments, develop them with full mathematical rigor, and get them published in the appropriate peer-reviewed literature. Well, I’m doing it. And what is Shallit doing? He prefers setting up strawmen and chopping them down.

At one point I was interested in Shallit’s critique, but no more. He has proven himself an extremely narrow computational number theorist who shoehorns everything mathematical that ID people do into his own little world of algorithmic information theory — like the drunk who’s looking for a coin under a street light because there’s no light down the street where he dropped the coin. Shallit was one of my teachers at the University of Chicago. I liked him at the time. But his addiction to substituting insult for civil discourse and sound argument has, it appears, severely constricted his intellectual reach.


24 Responses to Jeff Shallit — leveling the charge of incompetence incompetently

  1. “[as well as] Kolmogorov there are lots and lots of other definitions of information out there. There’s Fisher information. There’s Shannon information. There’s Jack Szostak’s functional information.”

    So it would help greatly if ID scholars would indicate exactly what information they mean when they say, for example, that duplicating a gene does not increase the available genetic information. Dr Dembski – the Isaac Newton of Information Theory – hasn’t told us what sort of information Dr Wells was referring to, but this post would be an excellent opportunity to do so.

    [Reg:] You’re on thin ice so stop with the flip comments. As it is, I defined what concept of information I was using when discussing duplication in my book INTELLIGENT DESIGN. As for Wells, should he have explicitly laid out what concept of information he was using? I don’t think it was necessary. The concept he was using has wide currency, both within and outside the ID movement. Shallit’s fault is that he was so intent on making Wells look ridiculous that he refused to employ the principle of charity in interpreting his remarks and then used a concept of information that Wells could not have been using except in error. On another note, my harsh moderation policies have been considerably reined in at UD given the new management, but I have no problem banning you from this thread and I still have enough clout here to get you banned period. So keep it civil or head for the exit. –WmAD

  2. It seems to me that there is a not-seeing-the-forest-for-the-trees phenomenon at work here, concerning the nature of information in biological systems.

    You buy a “some assembly required” tricycle for your kid. You open the box and dump out a bunch of parts. There are bolts and nuts of various sizes, strange-looking metal parts, holes drilled in tubes, etc. But there is an instruction manual that tells you, step by step and in which order, to assemble the parts. Do it in the wrong order, put the wrong bolt in the wrong hole, or screw up in any of a number of ways, and you’ll have to backtrack and start over. (I hate when that happens.) The manual also tells you what tools you’ll need for the task.

    The instruction manual is the kind of information we are talking about in living systems. Instruction manuals don’t write themselves. This is not hard to figure out, except for certain academic intellectuals who have been mysteriously immunized against simple logic.

  3. While Reg may be unwise to make flip comments, it is not unreasonable to ask which definition of information Wells and ID generally are using, without having to buy yet another book.

    I have always assumed that the ID movement adopted the definition of information used in the glossary above:

    χ = –log2[10^120 · φ_S(T) · P(T|H)]

    This appears to be consistent with your post and is also in line with your recent paper. Can you confirm it still stands?
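For what it's worth, the formula quoted above is easy to evaluate numerically. Here is a minimal sketch; the function name and all input values are hypothetical placeholders, not numbers endorsed anywhere in the post:

```python
import math

def chi(phi_S: float, p_T_given_H: float) -> float:
    """Specified complexity chi = -log2(10^120 * phi_S(T) * P(T|H)),
    following the formula quoted in the comment (inputs are hypothetical)."""
    return -math.log2(1e120 * phi_S * p_T_given_H)

# Hypothetical illustration: a pattern with 10^20 comparably simple
# descriptions and a chance probability of 2^-500 under hypothesis H.
# Positive chi is what this account treats as an indicator of specification.
print(chi(1e20, 2 ** -500))
```

With these made-up inputs chi comes out positive; raising P(T|H) to, say, 2^-100 drives it negative, so the sign tracks whether the 10^120 · φ_S(T) replicational-resources factor swamps the improbability.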

  4. “My own profession, that of a university teacher, is in this way dangerous. If we are any good we must always be working towards the moment at which our pupils are fit to become our critics and rivals. We should be delighted when it arrives, as the fencing master is delighted when his pupil can pink and disarm him. And many are.

    But not all. I am old enough to remember the sad case of Dr. Quartz. No university boasted a more effective or devoted teacher. He spent the whole of himself on his pupils. He made an indelible impression on nearly all of them. He was the object of much well merited hero-worship. Naturally, and delightfully, they continued to visit him after the tutorial relation had ended, went round to his house of an evening and had famous discussions. But the curious thing is that this never lasted. Sooner or later—it might be within a few months or even a few weeks—came the fatal evening when they knocked on his door and were told that the Doctor was engaged.

    After that he would always be engaged. They were banished from him forever. This was because, at their last meeting, they had rebelled. They had asserted their independence—differed from their master and supported their own view, perhaps not without success. Faced with that very independence which he had laboured to produce and which it was his duty to produce if he could, Dr. Quartz could not bear it. Wotan had toiled to create the free Siegfried; presented with the free Siegfried, he was enraged. Dr. Quartz was an unhappy man.”
    ~C.S. Lewis, The Four Loves

    http://www.archive.org/stream/.....p_djvu.txt

  5. Reg is now moderated.

  6. Reg,

    Dr Stephen C Meyer goes over the type of information IDists are talking about in “Signature in the Cell”.

    Information. The information age. Information technology. Information theory.

    When IDists speak of complex specified information they are using it in the following sense:

    information- the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (as nucleotides in DNA or binary digits in a computer program) that produce specific effects

    When Shannon developed his information theory he was not concerned about “specific effects”.

    It is producing those specific events which make the information specified!

    And that is what separates mere complexity from specified complexity.

  7. BTW what Dr Wells said about gene duplication was already stated by Dr Lee Spetner back in the ’90s.

    You make a copy of something and although you have two of it, the information is the same.

    That is if I give a copy to two different people those two people have the same information in their hands.

  8. Dr Dembski,

    I have to admit, I haven’t read your book, and I’m not familiar with that definition of information. I see that this definition of information is a measure of uncertainty, as you say.

    Just working from the few sentences here, it appears to me that you assume the symbols are communicating discrete amounts of information, e.g. X = “George Washington was the 1st President of the US.”

    If this accurately captures the idea, I can easily see why XX contains the same information content as X. If the list of facts you’ve told me about already included this fact about President Washington, then I’ve got no more information by being told twice.

    I’m going to extrapolate from this that XyX has the same information content as Xy. Facts are discrete and unordered in this interpretation.

    Is this anywhere near what you have in your book? I think it is close to many people’s conversational definition of information: how much you told me that I didn’t know before.

  9. Nakashima & Frank: The ID movement does not use just one concept of information.

  10. If this accurately captures the idea, I can easily see why XX contains the same information content as X. If the list of facts you’ve told me about already included this fact about President Washington, then I’ve got no more information by being told twice.

    But this is not how biology works. In the case of worms, two X chromosomes would make a hermaphrodite, while one X chromosome would make a male. Here, copy number does provide additional information. There are countless examples within biology where the number of copies of a gene or chromosome leads to different phenotypes.

    Secondly what if your second copy of the statement had a mutation and read
    George Washington was the 1st President of the UK.

    This would provide additional information.

    Let’s go back to Wells’s quote:

    The questioner became agitated and shouted out something to the effect that HOX gene duplication explained the increase in information needed for the diversification of animal body plans. I replied that duplicating a gene doesn’t increase information content any more than photocopying a paper increases its information content. She obviously wanted to continue the argument, but the moderator took the microphone to someone else.

    The problem with this is that HOX genes have not just duplicated; they have also had mutations, and therefore there would definitely be a difference in the amount of information between X and XX′.

  11. Secondly what if your second copy of the statement had a mutation and read
    George Washington was the 1st President of the UK.

    This would provide additional information.

    Or would it be noise?

    Anyway, you make a good point, as that is how mutations generally work in the real world: making something useful less so, maybe even useless.

  12. Dr Dembski,

    Indeed. I was simply trying to clarify what the definition advanced in your book was. I was trying to back that out from the brief description in your post. Could you confirm or deny whether I am on target?

    Thank you.

  13. Mr hdx,

    You raise good points re the biological plausibility or relevance of a particular definition of information, but I think the issue is still nailing down what people think is the definition of information that Dr. Wells was using in his response. I posted because it was not clear to me what definition Dr Dembski was advancing, based simply on his brief description.

    I agree that we will have to get to what “available” means under the relevant definition.

  14. Dr. Dembski and Reg both draw attention to the same problem: Overloaded terms can lead to ambiguity, or even worse, to equivocation.

    Consider, for instance, whether Leslie Orgel was talking about improbability when he spoke of complexity.

    Or consider whether the following from Leon Brillouin, frequently quoted by the EIL, has anything to do with the EIL’s conservation of information:

    The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.

    Brillouin was referring to a hypothetical machine described by Edgar Allan Poe thusly:

    Arithmetical or algebraical calculations are, from their very nature, fixed and determinate. Certain data being given, certain results necessarily and inevitably follow. These results have dependence upon nothing, and are influenced by nothing but the data originally given. And the question to be solved proceeds, or should proceed, to its final determination, by a succession of unerring steps liable to no change, and subject to no modification. This being the case, we can without difficulty conceive the possibility of so arranging a piece of mechanism, that upon starting it in accordance with the data of the question to be solved, it would continue its movements regularly, progressively, and undeviatingly toward the required solution, since these movements, however complex, are never imagined to be otherwise than finite and determinate.

    So Brillouin’s conservation of information referred explicitly to the determinacy of automata. Contrast this with the EIL’s concept of information conservation, which compares the probabilities of hitting the respective targets in lower- and higher-level searches, assuming that the higher-level search meets NFL conditions.

  15. In comment #10, hdx points out a basic biological fact: the number of copies of genetic information is often crucial in determining phenotype, even if the copies are identical.

    For example, trisomies in humans are almost always fatal, despite the fact that the extra (i.e. third) copy of the relevant chromosome may be (indeed, often is) identical to one or the other of the pair of chromosomes with which it is paired. So devastating is the addition of an extra copy of a chromosome that only one such trisomy is survivable in humans: trisomy 21, which causes Down syndrome.

    The problem here is that biological information is fundamentally different from most (perhaps all) of the different definitions of information listed in Dr. Dembski’s post. As I have pointed out in other threads, biological information is meaningful information, in that biological information (especially that contained in the genetic material) is encoded. That is, it “stands for” something else in the same way that the letters in a phonetic alphabet “stand for” phonemes, or in the way that strings of letters “stand for” words, which of course “stand for” the concepts associated with them. Biological systems contain multiple layers of such meaningful information, and as hdx points out, the quantity of such information matters as much as its quality (i.e. its “meaningful” content).

    In particular, both Shannon and Kolmogorov information lack any trace of “meaningfulness”. Indeed, both are essentially measures of the relationship between the bits of a message, and have no intrinsic relationship with the meaning of the message (if it has any, which it need not).

    Following this same line of reasoning, it seems to me that the same can be said for “complex specified information” (CSI). There is nothing in Dr. Dembski’s various explanations of CSI that necessarily requires that such information be meaningful. Indeed, I have never seen a clear and concise description of the “meaningfulness” of biological information in any of the various theories of information, including Dr. Dembski’s. Until such a description is forthcoming, it seems unlikely that any such theories (including Dr. Dembski’s) will have any bearing on the nature or properties of biological information.

    Having read over my previous entry, I would like to make one correction: the only survivable autosomal trisomy is trisomy 21. The various aneuploidies of the X and Y chromosomes (Klinefelter’s and Turner’s syndromes, along with the so-called “supermale” (XYY) syndrome) are indeed survivable. However, they also result in significant phenotypic variation, again underlining the fact that multiple copies (or deficient copies, in the case of Turner’s syndrome) contradict precisely the point that Dr. Wells asserted: that identical duplications of information (including biological information, in the form of DNA sequences or chromosomes) have no significant biological effects. This is clearly not the case, and so Dr. Wells’s assertion fails.

  17. Allen_MacNeill,
    You are quite right to point out that biological information has meaning. This is the semantic meaning of symbolic information. By its nature, symbolic meaning is extrinsic to the symbols. It comes from an associated convention that maps between the symbols and their meaning. Thus, symbolic meaning is never intrinsic to the symbols themselves.

    That does not invalidate specified complexity, a concept that existed before Dr. Dembski’s work advancing that area. Specification can be in terms of function and is not limited to symbolic meaning, but it does include cases of meaningful symbolic sequences.

    A hypothetical example of specified complexity that is not symbolic would be peer-replicating strands of RNA. The arrangement of the bases would matter, making it specified complexity, not random complexity. Yet in that case, the specification (e.g. ability to replicate) would be functional without being symbolic.

  18. Allen_MacNeill,

    The problem for materialists is that this makes the origin of symbolic information impossible to explain by means of undirected chemical and physical processes. The reason is that you can never get to symbolic information until you have the machinery to implement a symbolic language (e.g. a genetic code mapping between codons and functional proteins).

    Yet, without having information-driven construction, building something like ribosomes is beyond the reach of undirected, mindless processes that neither know of nor care about creating symbolic information.

    The laws of chemistry and physics have never needed symbolic information. They can be entirely fulfilled without it. Rocks, tars, and other meaningless goo will serve just fine. Mindless matter has neither need nor intention nor the means to pursue symbolic information translation machinery.

    p.s. BTW, I believe you misrepresented Wells. He never claimed that duplication of genes/information would never have any effect, and he certainly never claimed that deleterious duplications are not possible. His intended point was sound, i.e. duplicating the recipe for building X does not, of itself, enable you to build something other than X (which was already possible). That is a very reasonable observation. It would be uncharitable to deny it has a legitimate meaning.

  19. To all, it is very timely that the Nobel Prize in chemistry has just been awarded to three scientists for their work in determining the structure of ribosomes — a structure that some have suggested as a more daunting illustration of specified complexity than the flagellum. Excerpts from an Associated Press news story:

    Ribosomes are crucial to life because they produce the proteins that control the chemistry of plants, animals and humans. Working separately, the three laureates used a method called X-ray crystallography to pinpoint the positions of the hundreds of thousands of atoms that make up the ribosome.

    Ramakrishnan described his work on ribosomes as an attempt to understand “this large molecular machine that takes information from genes and uses it to stitch together protein.”

    He said he and others had been using X-ray crystallography to build an “atomic picture of this enormous machine.”

    “Now we can start to figure out how it does this complicated process,” he said.

    It is interesting also how the author did not fail to insert an utterly gratuitous association with Darwin’s theory — something that has nothing at all to do with mapping the structure of ribosomes.

    Their work builds on Charles Darwin’s theory of evolution and, more directly, on the work done by James Watson, Francis Crick and Maurice Wilkins, who won the 1962 Nobel Prize in medicine for mapping DNA’s double helix, the citation said.

    This is another sad example of how Darwinism tends to ride on other coattails for credibility.

  20. p.s. BTW, I believe you misrepresented Wells. He never claimed that duplication of genes/information would never have any effect, and he certainly never claimed that deleterious duplications are not possible. His intended point was sound, i.e. duplicating the recipe for building X does not, of itself, enable you to build something other than X (which was already possible). That is a very reasonable observation.

    It’s not reasonable if we are talking biology.

    Allow me to introduce the Hawaiian Silversword Alliance. A collection of genera that contradicts this particular assertion. Enjoy.

  21. A hypothetical example of specified complexity that is not symbolic would be peer-replicating strands of RNA. The arrangement of the bases would matter, making it specified complexity, not random complexity. Yet in that case, the specification (e.g. ability to replicate) would be functional without being symbolic.

    Sounds like a viroid to me.

    Or, if you want something really remarkable, these.

  22. Allen MacNeill, you raise a very fine point with:

    Following this same line of reasoning, it seems to me that the same can be said for “complex specified information” (CSI).

    There is nothing in Dr. Dembski’s various explanations of CSI that necessarily requires that such information be meaningful.

    But that wouldn’t be necessary with regard to the intent of CSI, which is to detect design rather than to ascertain the success of the design, and I guess that’s why the emphasis is on specificity rather than meaning.

    For instance, CSI (and Kolmogorov) would not indicate that there is less information in “George Washington was the first president of the U.K.” than in “George Washington was the first president of the U.S.”, yet CSI would accurately indicate “George Washington was the first president of the U.K.” to be designed despite its obviously lower information content.

  23. Arthur Hunt @ 20 appears to want to claim that the Hawaiian Silversword Alliance is a collection of genera that contradicts the assertion

    duplicating the recipe for building X does not, of itself, enable you to build something other than X (which was already possible).

    I would like to hear the rest of your argument filled out explicitly.

    For example, are you supposing instead duplication plus subsequent modification of the duplicated gene(s)? (If so, the argument would fail. Please note, “of itself”.)

    But what I would really like to hear is your response to the main point, i.e. whether one can reasonably draw the conclusion that undirected matter and energy would construct the first symbolic information translation machinery.

    Another poster alluded to some post of yours that he seemed to think was relevant. However, he neglected to provide a link or reference to it.

    Regarding your post at 21, I have the vague feeling that you want us to draw some conclusion from your reference, but I don’t know what you intend. So… what then? Is there a reasoned point that follows?

  24. Allen_MacNeill, I do hope you will respond to tribune7, as well as to my points above.

    As I said, you were quite correct to point out that the information in DNA has meaning, i.e. is symbolic information. This is a stronger claim than saying merely “specified complexity.” However, as tribune7 and I have pointed out, that doesn’t reduce the strength or the significance of the fact that it has high specified complexity, which by itself indicates intelligent agency.

    Even though a quantitative measure of the specified complexity in DNA may look only at the intrinsic complexity of the DNA itself, by pointing out that it has meaning, your point also shows that the complexity of the associated translating machinery must be counted as well. Without that, there can be no symbolic meaning. Symbolic meaning is extrinsic.

    In other words, you have succeeded in making the case for intelligent design much, much stronger, and not any weaker. Thanks for making an important point.
