
ScienceBlogs praises disses Dembski-Marks paper on Conservation of Information

ScienceBlogs has just posted what can only be called a rant (go here) against the paper by Robert Marks and me that was the subject of a post here at UD (for the paper, “Life’s Conservation Law,” go here; for the UD post, go here).

According to ScienceBlogs, the paper fails (or as they put it, “it’s stupid”) because

(1) As a search, evolution is a multidimensional search. Most of our intuitions about search landscapes are based on two or three dimensions. But evolution as a landscape has hundreds or thousands of dimensions; our intuitions don’t work.

(2) Evolution is a dynamic landscape – that is, a landscape that changes in response to the progress of the search. Pretty much every argument that Dembski makes can be thrown out on the basis of this one fact: all of his arguments are based on static landscapes. Once the landscape can change, every single one of his arguments becomes invalid – none of them works in dynamic landscapes.

(3) As a search, evolution doesn’t have to work on all possible landscapes. It doesn’t even need to work on most landscapes. It works on landscapes that have a particular kind of structure. It doesn’t matter whether evolution will work in every possible landscape — just like it doesn’t matter that fraction notation doesn’t work for every possible real number. What matters is whether it works in the particular kind of landscape in which our theory says it works. And on that question, the answer is quite clear: yes, it works.

Regarding (1), the work by Robert Marks and me typically focuses on compact metric spaces, which can include infinite-dimensional spaces; for the purposes of this paper, which simplifies some of our previous work, we went with finite spaces. But even these can approximate any dimensionality we like for empirical investigations.

Regarding (2), we explicitly point out that our approach is general enough to model time-dependent fitness functions (see section 8 — hey, why bother reading a paper if you know it’s wrong and can simply intuit the mistakes the authors must make). What ScienceBlogs appears not to appreciate or understand is that time-dependent fitness functions can be modeled by time-independent fitness functions (“static landscapes”) provided that one represents the search space with sufficiently many dimensions (by going to a Cartesian product — we point this out explicitly in our paper).

Regarding (3), our point is that precisely because evolution works with constrained landscapes, those constraints require prior information. Yes, the environment is pumping in information; so where did that information come from? ScienceBlogs resents the very question. But what’s the alternative? Simply to say, “Oh, it’s just there.” The Law of Conservation of Information, despite ScienceBlogs’s caricatures, provides cogent grounds for thinking that the information had to come from somewhere, i.e., from an information source.
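The Cartesian-product construction invoked in point (2) can be made concrete with a small sketch. The toy code below is my own illustration, not from the paper; the search space `X`, the time horizon `T`, and the shifting fitness function are all hypothetical. It shows a time-dependent fitness function f(x, t) re-expressed, without loss, as a single static landscape on the product space X × {0, …, T−1}.

```python
# Toy sketch: a "dynamic landscape" f(x, t) re-expressed as a static
# landscape on the Cartesian product of the search space with time.
# All names and values here are hypothetical illustrations.

X = ["a", "b", "c"]   # a small finite search space
T = 4                 # number of time steps

def fitness(x, t):
    # Hypothetical time-dependent fitness: the favored element shifts with t.
    return 1.0 if X.index(x) == t % len(X) else 0.0

# Static (time-independent) landscape on X x {0, ..., T-1}
static_landscape = {(x, t): fitness(x, t) for x in X for t in range(T)}

# The static function reproduces the dynamic one at every point in time,
# so nothing is lost by moving to the higher-dimensional product space.
assert all(static_landscape[(x, t)] == fitness(x, t)
           for x in X for t in range(T))
```

Under this encoding, an argument framed for static landscapes applies unchanged to the dynamic case, at the price of more dimensions.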


82 Responses to ScienceBlogs praises disses Dembski-Marks paper on Conservation of Information

  1. Marks and I.

    Sorry, I don’t speak math, but English I understand. ;)

  2. Regarding the first rebuttal point (1):

    I fail to see how this has any relevance or validity to the paper at all, but I ask: why don’t they just say evolution has nine dimensions? After all, physicists add dimensions all the time; why can’t evolutionists?

    On point (2), they basically showed that they fail to grasp that it is evolution’s willy-nilly, anything-goes allowances that are being critiqued here. In effect they said, “we reject this paper because it doesn’t agree with our view,” not because it fails to criticize evolution accurately and sufficiently.

    (3) Apparently they have, for the sake of criticizing Dembski’s paper, limited the theory of evolution to only those cases it works for. Of course, outside the context of this critique they will call it a universal law, theory, or what have you. Moreover, they have not defended or explained why their theory should be limited only to cases it works for. That sounds like cherry-picking to me… because it is. And lastly, they once again fail to recognize that even the cases they do think their theory applies to can still be criticized in a greater general context. If not, why not? This argument is absurd. It is no different than creationists saying you can’t use fossils as evidence for evolution because our theory only applies to cases it works for – hence the modern human being.

    Stupid indeed…

  3. As the original thread has become rather long I will take this opportunity to point out (politely I hope) what appear to me to be significant problems with the LCI. I would be interested in your comments.

    One way of summarising the paper might be this.

    “The NFL theorem shows that if a search space has no structure then no search algorithm can do better than a blind search. An algorithm can only do better than a blind search if it is adapted to the structure of the search space. However, the search space of the possible genomes of living organisms does have a structure. For the purposes of the paper you are assuming that the Darwinian mechanism is able to take advantage of this structure to increase the probability of “finding” a viable genome to practical levels. But, the LCI shows that the probability of finding an appropriate algorithm among all the possible algorithms will always be so low that it more than compensates for the increase in probability gained through the algorithm. So we are still talking about a fantastically improbable outcome – finding a viable genome.”

    To prove this you have to make some statements about the probability of finding an algorithm.

    To do this you make two crucial assumptions.

    1) You identify an algorithm with the subset of the search space which it produces. So any method that comes up with Bora Bora as the only possible place to look is the same algorithm.

    2) You assume that each such algorithm is equally probable – the good old universal probability distribution.

    Both of these seem wrong.

    (1) leads to strange conclusions, for example, that all algorithms that potentially search the entire space until the target is found are the same as a blind search.

    (2) might make sense in a context where a searcher can choose from a set of algorithms. But when we are talking about a natural “search” such as natural selection, the search space itself greatly changes the probability of a particular algorithm being used. In fact, it may increase the probability to 1. Given selection, replication, and mutation, natural selection must take place. It is caused by these features of the search space.

    There may be a question as to whether natural selection is able to find viable genomes. But that is a different question (and you are assuming it is possible for the purposes of this paper).
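The NFL claim summarized in this comment can be checked exhaustively on a tiny space. The sketch below is my own illustration (the three-point space, binary fitness values, and query orders are all hypothetical): it enumerates every fitness function on the space and confirms that two fixed, non-repeating query orders find equally good fitness values on average, which is the NFL result for a set of functions closed under permutation.

```python
# Toy exhaustive check of the No Free Lunch intuition: averaged over ALL
# fitness functions on a tiny space, no fixed query order beats another.
from itertools import product

points = [0, 1, 2]

def best_after(order, f, k=2):
    # Best fitness value seen after the first k distinct queries.
    return max(f[x] for x in order[:k])

# Every possible 0/1-valued fitness function on the three points (8 total).
all_functions = [dict(zip(points, vals)) for vals in product([0, 1], repeat=3)]

order_a = [0, 1, 2]   # one deterministic search order
order_b = [2, 0, 1]   # a different deterministic search order

avg_a = sum(best_after(order_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(best_after(order_b, f) for f in all_functions) / len(all_functions)

assert avg_a == avg_b   # identical average performance, as NFL predicts
```

An algorithm can only pull ahead of another on a *restricted* family of landscapes, which is exactly why the structure of that restriction is where the argument lives.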

  4. I’m aware of the universal law of “the conservation of energy”: well tested, indisputable, an accepted fact, part of our universe, uncontroversial, true!
    This LAW of the “conservation of information”: why exactly is it a LAW?

  5. IRQ Conflict, it’s “me.” “Me” is the object of the preposition “by.” The sentence is correct (if inelegant).

  6. Marks and I.

    Sorry, I don’t speak math, but English I understand.

    Actually, I think Dr. Dembski’s usage is correct here, since “Marks and me” is not the subject of the sentence. The subject is “The work by Marks and me…”. The pronoun here is part of a prepositional modifier to the noun phrase, and so it should appear in its objective rather than subjective form.

    Imagine dropping “Marks”. Which sounds correct: “The work by I…” or “The work by me…”?

    Apologies for the grammatical derail.

  7. Re: 3. On rereading I realise that I got assumption (1) quite wrong. That is not your criterion for the identity of an algorithm.

    However, my point about assumption 2 still stands.

  8. Dr. Dembski,
    I believe your and Marks’s work may find empirical resolution all the way down to the quantum level of “our” reality.

    ———–
    Quotes by Anton Zeilinger

    Quantum theory, correctly interpreted, is information theory.

    ————–

    Put dramatically, Information and reality is the same thing

    http://www.signandsight.com/features/614.html
    ——————

    Science & Ultimate Reality Symposium ———

    Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: “In the beginning was the Word.”

    http://www.metanexus.net/Magaz.....+Container

  9. @ #5 David Kellogg and #6 Sotto Voce

    Thanks, I stand corrected. I ASSumed that because it didn’t sound right when I read it that it was wrong.

    My apologies Mr Dembski. At least I learned a few lessons here. :)

  10. Dr. Dembski,
    Though I don’t understand the math of the LCI all that well, it seems to me, from your response to ScienceBlogs, that you have covered all your bases as far as the “infinite” probabilistic resources available to an evolutionary search are concerned.
    It seems to me this “infinite landscape” objection on ScienceBlogs is very similar to the “Many Worlds” route Koonin tried to take in his paper on the Cambrian explosion, in his trying to find a successful naturalistic resolution for evolution.

    The Biological Big Bang model:

    http://www.biology-direct.com/content/2/1/21

    And in the criticism of Koonin’s paper, it was fair to ask: what recourse will evolution take to access this infinite probabilistic resource available on the quantum level? Does evolution have the magical power to trespass the universal constants? Universal constants which the infinite probabilistic resources of quantum mechanics are clearly bound to obey in an overarching non-chaotic form? Of course not! Evolution has no power to dictate whether the information reality will present to any evolutionary search will be useful or detrimental for evolution to use in the first place.
    But there is a much clearer truth exposing this “infinite possibilities / Many Worlds” fallacy of evolution. To open up a search to an “infinity of landscapes,” which are not bound in any overarching way by any of the universal constants of our reality, in fact greatly increases the percentage of “totally useless/detrimental information” that will be available in the evolutionary search, and in fact makes recourse to an “Information Giver” all the more necessary. I.e., the recourse evolutionists seek for escape from the implications of intelligent design theory actually turns into a bottomless pit of meaningless information the further they press their case for a truly infinite landscape for evolution to search in. I.e., give them enough rope and they hang themselves.

  11. Hmmm. I thought I had posted something, but maybe I didn’t hit submit. The “rant” mentioned in the opening post here was by Mark Chu-Carroll who has a blog at ScienceBlogs. However ScienceBlogs contains many blogs, so it seems misleading to write “ScienceBlogs disses …” when really it was only one person. Most likely other blogs at ScienceBlogs have not addressed Dembski’s paper at all, and it is at least possible in theory that someone at ScienceBlogs could praise it.

  12. This is too funny.

    People may not like the paper but all they have to do to refute its premise is to actually support their position that information can arise and increase via nature, operating freely.

    That said in a non-ID scenario the word “search” does not belong as there isn’t anything to search for. Only survival counts- well that and the ability to reproduce.

  13. That said in a non-ID scenario the word “search” does not belong as there isn’t anything to search for. Only survival counts- well that and the ability to reproduce.

    Joe is right :)

  14. Mr Joseph,

    As I read it, Mr Chu-Carroll’s criticism is not so much of the idea or definition of active information, as much as the leap to situating the active information in a teleological source.

  15. it is at least possible in theory that someone at ScienceBlogs could praise it.

    Yes, we mustn’t jump to conclusions about what P.Z. Myers will say. ;P

    Seriously, you’re right that the posts should have credited Chu-Carroll (or “Good Math, Bad Math”). But despite all protestations to the contrary, scienceblogs.com seems to have developed something not too far from a coherent editorial identity in many respects. So it’s not all that surprising when others start to talk in terms of that corporate identity rather than just the individual bloggers.

    Anyhow, here’s hoping that a worthwhile exchange develops from this. MarkCC is prone to high-horsing (he is on ScienceBlogs) but he evidently knows his mathematics and I get the impression that he’s actually fairly open to reasonable discussion after you filter away the static.

  16. Joseph:

    People may not like the paper but all they have to do to refute its premise is to actually support their position that information can arise and increase via nature, operating freely.

    I could be wrong here, but IF I read Dembski and Marks correctly, then if active information was found to arise “via nature, operating freely”, then this would, in fact, have been caused by intelligence smuggling the information in somehow.

    IF this is the case, then I guess that the oft repeated argument that ID would be falsified if natural processes could produce CSI is wrong.

  17. On page 30 of the paper (my emphasis):

    For Häggström, “realistic models” of fitness presuppose “geographical structures,” “link structures,” search space “clustering,” and smooth surfaces conducive to “hill climbing.” All such structures, however, merely reinforce the teleological conclusion we are drawing, which is that the success of evolutionary search depends on the front-loading or environmental contribution of active information. Simply put, if a realistic model of evolutionary processes employs less than the full complement of fitness functions, that’s because active information was employed to constrain their permissible range.

    I’m a novice in these matters, but don’t evolutionists consider the environment to be the selective force? If everyone agrees that the environment inserts active information, what other source of active information is required?

  18. Nakashim-San,

    Every time we have observed active information and knew the source, said source has always been an intelligent agency. Always.

    We have NEVER observed active information arising from nature, operating freely. Never.

    Therefore, when we observe active information and don’t know the source, it is safe to infer an intelligent agency brought it about.

    So to refute that inference all one has to do is demonstrate that the active info in question can arise via nature, operating freely.

  19. Hoki,

    Remove the requirement for a designer and the design inference falls.

  20. Hoki,

    If that were the case I would be the first to fight against Dembski and Marks.

    IOW if someone demonstrated that nature, operating freely can produce a bacterial flagellum, and Dembski/ Marks stepped in and said that it does not falsify the design inference for it, I would be inclined to do something about that.

  21. Bill Dembski,

    Yes, the environment is pumping in information; so where did that information come from? [Mark Chu-Carroll] resents the very question. But what’s the alternative? Simply to say, “Oh, it’s just there.”

    The environment is the physical universe. And there is no physical probability of the universe. This is not to say that we do not know the probability. The phrase “physical probability of the universe” is meaningless. I said more about this in a previous comment.

    If you treat negative logarithms of probabilities as physical information, then the probabilities themselves must be physical. I cannot find a philosopher specializing in the interpretation of probability who allows you to speak of physical probability in the absence of a repeatable experiment. Please cite any I have missed.

    Hugh Mellor, professor emeritus of philosophy at Cambridge University and author of The Matter of Chance (free PDF available), says outright that

    the initial state, if any, of a universe, or of a multiverse, which by definition lacks precursors, has no physical explanation, since there is nothing earlier to give it any physical probability, high or low.

    There is no repeatable experiment. From a scientific perspective, the universe simply is what it is. (This works against Dawkins as much as you.)

    You use rhetoric to put the “activity” into active information. Specifically, you depict Dawkins, Adami, Schneider, and Ray as intelligent agents who create information by violating Laplace’s Principle of Indifference. Loosely speaking, they create the environments in their evolutionary processes. Then you leap to nature in general, and leave the reader to “reason” by analogy — if computational search processes succeed more often than null search only because humans add information to the virtual environments, then evolutionary processes that succeed in what is improbable under a null search model must operate in environments to which information has been added (by what, you decline to say). You ultimately fall back on “Hey, the information can’t just come from nowhere.”

    When we move from your pure mathematics to modeling of nature, the “search for a search” regress is not infinite. It resolves in a single step. The universe is the environment of any physical search. And you cannot make the universe itself into a search process, because there is no physical probability distribution on universes. There is no imbalance in the ledger of physical information, for the simple reason that your notion of physical information is ultimately meaningless.

  22. TM,

    We have been here before.

    Like a returning star in a triumphant rebuttal; the search space à la carte.

    Your argument is trivial in the face of the evidence.

    You suggest that finding a pot of boiling water on the stove can be explained by the information contained in the pot, stove, and water. No one should be required to ignore the evidence in order to believe you.

  23. Mr Joseph,

    As Mr DiBagno quotes,
    All such structures, however, merely reinforce the teleological conclusion we are drawing, which is that the success of evolutionary search depends on the front-loading or environmental contribution of active information.

    It would seem that the paper is open to the possibility that some active information is teleological (front loaded) and some is environmental, and it is an unsolved problem to tease them apart.

    As I said on the original thread, these issues can be analyzed more clearly in computational settings than in nature. Let us compare an evolutionary algorithm at generation 0 and at generation 100. The fitness function contains some active information – a given.

    In generation 0 the EA produces a series of queries, each of which has the same probability of success as a query in the null hypothesis. As the EA continues running until generation 100, it consumes other informational inputs. It consumes ticks of a clock, counting generations. It consumes a list of pseudo-random numbers, or actual random numbers (for example, from random.org). And it constructs and re-uses a history of previous queries. The EA doesn’t have to know the actual fitness values of these queries, only their relative values.

    In generation 100, the EA creates a series of queries, each of which will have a significantly higher probability of hitting the target than any of the queries from generation 0 or the null search. That differential probability of success represents active information, but where did it come from? The fitness function remained the same. The inputs were a sequence of clock ticks, a sequence of random numbers, and a ranked set of queries.

    As Drs Dembski and Marks say in the original paper, the next step in the ID research program is to be forensic accountants of information, and account for all the active information that exists at generation 100 that did not exist at generation 0. No need to worry about the purity of chemicals in re-runs of a Miller-Urey experiment. Simply analyze those inputs.
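The bookkeeping this comment describes can be mimicked in a toy setting. The sketch below is my own construction (the bitstring target, the mutation scheme, and all parameters are hypothetical, not from the paper): it estimates active information as I+ = log2(q/p), where p is the success probability of a single blind query and q is the estimated success probability of a simple hill-climbing evolutionary search consuming random numbers and a history of previous queries.

```python
# Toy estimate of "active information" I+ = log2(q / p).
# Everything here is an illustrative assumption, not the paper's setup.
import math
import random

random.seed(0)
N = 10                  # bitstring length; the target is the all-ones string
p = 1 / 2 ** N          # success probability of a single blind query

def evolved_success(generations=100, trials=500):
    """Estimate the success probability q of a simple hill climber."""
    hits = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(N)]
        for _ in range(generations):
            y = x[:]
            i = random.randrange(N)
            y[i] ^= 1                    # flip one randomly chosen bit
            if sum(y) >= sum(x):         # keep the mutant if it is no worse
                x = y
        hits += (sum(x) == N)
    return hits / trials

q = evolved_success()
active_info = math.log2(q / p)   # bits attributed to the fitness-guided search

# The fitness-guided search vastly outperforms blind search; that gap,
# measured in bits, is what the active-information accounting records.
assert q > p
```

The interesting forensic question, as the comment notes, is then how much of `active_info` to attribute to each informational input the algorithm consumed.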

  24. Upright BiPed,

    Dembski and Marks have presented math and claimed that it models nature. Their argument that they have stated a law of nature hinges on their claim to have a model. It is entirely appropriate for me to challenge the model.

    You suggest that finding a pot of boiling water on the stove can be explained by the information contained in the pot, stove, and water.

    No, I indicate that we are water in the universal pot, incapable of observing the metaphysical stove.

  25. P.S.–Perhaps half of the universal pot of water is heated by hell, and the other half refrigerated by heaven. Metaphysically far-from-equilibrium thermodynamics, anyone?

  26. TM,

    The realities of the universe did not begin with our observation of it.

    To now suggest we may not question our assumptions (in the face of substantial contradictory evidence) makes a sad case for empiricism.

    It’s also more than a little too convenient.

  27. TM

    Perhaps half of the universal pot of water is heated by hell, and the other half refrigerated by heaven.

    …too convenient hubris.

  28. Upright BiPed,

    That we are parts of the universe observing parts of the universe (including ourselves observing) is an aspect of physical reality.

    Speaking of hubris, that is what bugs me about the epistemological presumptions of Dembski-Dawkins. We who humbly contemplate the limits of scientific knowledge are not so loud and so proud.

    I encourage people to think of scientific accounts as least-common-denominator knowledge. Explanation-by-committee does not yield more valuable beliefs than what the individual can obtain in relationship with the Creator, the Absolute, God, Brahman, Mind…. There are no proofs of the things most important to know.

    I do not buy into Gould’s “non-overlapping magisteria.” Science is not a magisterium. It’s fun, and it comes in handy. Technology may extend your life and enhance your physical comfort, but it cannot save you from the most fundamental miseries.

  29. T M English writes,

    I encourage people to think of scientific accounts as least-common-denominator knowledge. Explanation-by-committee does not yield more valuable beliefs than what the individual can obtain in relationship with the Creator, the Absolute, God, Brahman, Mind…. There are no proofs of the things most important to know.

    Very nice – especially the last sentence.

  30. Tom English: I’m not sure why you equate environments with Nature writ large. An environment with Karl Marx, paper, and pen in it will output Das Kapital. Environments, it seems, can be quite cozy and the information they contain and the sources from which they obtain it may be studied and assigned probabilities.

  31. Tom English — We who humbly contemplate the limits of scientific knowledge are not so loud and so proud.

    How is design detection beyond the limit of knowledge?

  32. It’s all sounding like angels on the head of a pin. Surely, evolution has described how information (or whatever you want to call it) can arise. It’s the generation of junk DNA that, with further random changes, happens to become useful. Bingo: creation of information. What’s so hard to understand?

    We can argue about the probability, but the mechanism is clear and doesn’t violate any laws.

    “We can argue about the probability, but the mechanism is clear and doesn’t violate any laws.”

    No one says it does.

    It is the probability that is the issue. The changes required quickly get outside any possibility in a trillion universes let alone the 3.5 billion years that life has been on the planet.

  34. Tom English–The environment is the physical universe. And there is no physical probability of the universe.

    But it isn’t the probability of the universe that is being calculated, but the probability of events that have occurred within it.

    And these calculations certainly should be possible assuming the universe is finite.

  35. Joseph:

    Remove the requirement for a designer and the design inference falls.

    But if Dembski and Marks’ paper is correct and if nature, operating freely, was found to create information, then this would be due to intelligence smuggling the information in? I.e. nature, operating freely did not create the information after all.

  36. TM,

    So, what are your objections to the epistemological presumptions of Dawkins?

    Never mind, pretend I didn’t ask.

    “I encourage people to think of scientific accounts as least-common-denominator knowledge.”

    I hope you’re not suggesting that I, as one of those people, should dumb myself down. For instance, it’s okay if I ask how a physically inert sequence of symbols was formulated by chance, isn’t it? It’s also okay to ask why chance and necessity are ruled the answer, when neither has a chance in hell of being right, right?

    “Explanation-by-committee does not yield…”

    I hear ya.

    “There are no proofs of the things most important to know.”

    So where do you suggest we stop looking?

    “Technology may extend your life and enhance your physical comfort, but it cannot save you from the most fundamental miseries.”

    I understand the warning, thanks. But that technology also created the means to know that Life flows from information, and it also shows that the script didn’t write itself.

    I’ll take that as one less misery.

  37. “It’s all sounding like angels on the head of a pin. Surely, evolution has described how information (or whatever you want to call it) can arise.”

    Sure it has!

    “It’s the generation of junk DNA that, with further random changes, happens to become useful.”

    Exactly!!

    “Bingo, creation of information.”

    POOF!!!

    “What’s so hard to understand?”

    I dont know!!!

  38. Graham,

    No one has demonstrated that information can arise without agency involvement.

  39. Hoki,

    See my response in comment 20

  40. Re #34

    “But it isn’t the probability of the universe that is being calculated, but the probability of events that have occurred within it”

    Are you sure?

    The claim of the LCI can be expressed this way:

    Pr(C) <= Pr(R) / Pr(R|C)

    where R is the result of interest, e.g. a configuration of DNA that leads to a viable organism, and C is a context such as natural selection. (I use “context” rather than “search” because in a real situation a search process requires a physical context in which to operate; e.g. natural selection is not just a method – it is chemistry operating in an environment.)

    Pr(C) is the probability of the context arising in the first place.

    Pr(R) is the probability of the result without a context, i.e. a blind search.

    Pr(R|C) is the probability of the result given the context. In the case at hand, R is “viable organism” and C is “natural selection” (many on this forum would argue that Pr(R|C) is negligibly small – but that is not the issue in this paper).

    For this claim to make sense there has to be some meaning to the idea that Pr(R) and Pr(C) are somehow free of context. (This is where the principle of indifference is summoned to do sterling work). But the universe is a context. It includes the laws of nature, and everything in it – including the presence of DNA. The universe greatly limits the range of possible contexts.
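Plugging hypothetical numbers into the inequality makes the claim concrete. The probabilities below are mine, chosen purely for illustration; nothing here comes from the paper.

```python
# Illustrative numbers for the bound Pr(C) <= Pr(R) / Pr(R|C).
pr_R = 1e-6          # blind-search probability of the result R
pr_R_given_C = 1e-2  # probability of R given the context C

max_pr_C = pr_R / pr_R_given_C   # the LCI's cap on Pr(C): about 1e-4

# The context boosts the odds of R by a factor of 10,000, so the bound
# says the context itself can be no more probable than about 1 in 10,000.
# Finding R "via the context" is then no better than blind search:
via_context = max_pr_C * pr_R_given_C
assert abs(via_context - pr_R) < 1e-15
```

In words: whatever probability the context adds to finding the result, the bound charges that same amount against the probability of the context itself arising.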

  41. Perhaps someone else (besides Joseph) can help shed some light on a point I brought up earlier.

    I wrote:

    I could be wrong here, but IF I read Dembski and Marks correctly, then if active information was found to arise “via nature, operating freely”, then this would, in fact, have been caused by intelligence smuggling the information in somehow.

    From Dembski’s post:

    Searches that operate by Darwinian selection, for instance, often significantly outperform blind search. But when they do, it is because they exploit information supplied by a fitness function—information that is unavailable to blind search. Searches that have a greater probability of success than blind search do not just magically materialize. They form by some process. According to LCI, any such search-forming process must build into the search at least as much information as the search displays in raising the probability of success.

    (Joseph did write a response, but I just fail to see how it addresses my point)

  42. Dembski also wrote above:

    Yes, the environment is pumping in information; so where did that information come from?

  43. David Springer sent this interesting comment to me via email, and since he’s no longer on UD I thought I’d share:

    Bill Dembski posted an article at Uncommon Descent discussing a science blog rant against the proposed law of conservation of information.

    I don’t understand the resistance to this. It is a commonly assumed tenet of science that in principle complete information about a system at one point in time is sufficient to determine its state at any other point in time. It’s the underlying principle behind materialism. It’s the refuting principle employed against mind/brain dualism i.e. that all thoughts and behaviors can be, in principle with complete information, reduced to brain chemistry and physics. Also, in principle, all the information in the universe was present at the instant of its creation (Big Bang) some 14 billion years ago.

    This principle is so widely held that when Stephen Hawking proposed that information might be permanently lost when matter falls into a black hole, it annoyed a great many physicists and resulted in a famous bet. In 1997 John Preskill bet Stephen Hawking and Kip Thorne that information isn’t lost even in a black hole. In 2005 Hawking published a paper describing how quantum perturbations at the event horizon of the black hole allow information to escape, and along with the paper he conceded the bet to Preskill and paid it off. Thorne refused to concede and didn’t contribute his share of the payment.

    Why an intelligent materialist would argue against the conservation of information is beyond me, except perhaps as a kneejerk, indefensible reaction when someone like Dembski asks the loaded question – what is the original source of the information in the universe? The answer for many great thinkers, such as Albert Einstein and a good fraction of the founders of the United States of America, minimally leads to deism, i.e. God created a clockwork universe where everything was predetermined at the instant of creation.

  44. David Springer:

    It is a commonly assumed tenet of science that in principle complete information about a system at one point in time is sufficient to determine its state at any other point in time. It’s the underlying principle behind materialism.

    Springer is confusing materialism with determinism.

    Why an intelligent materialist would argue against the conservation of information is beyond me, except perhaps as a kneejerk, indefensible reaction when someone like Dembski asks the loaded question – what is the original source of the information in the universe?

    Even if we assume determinism, there are plenty of problems with the CoI claims. Springer seems to have missed the fact that Dembski and Marks are using their own custom definition of information.

  45. Dr. Dembski:

    An environment with Karl Marx, paper, and pen in it will output Das Kapital. Environments, it seems, can be quite cozy and the information they contain and the sources from which they obtain it may be studied and assigned probabilities.

    I’m rather surprised at this comment, given your previous position regarding design hypotheses conferring probabilities on events.

    If Karl Marx is likely to produce Das Kapital, then that high probability constitutes active information, does it not? So Marx is not creating, but rather “reshuffling”, information as he writes, and the active info still needs to be accounted for. It appears that all active info needs to be regressed at least to the origin of the universe.

  46. BTW, I agree that Chu-Carroll is prone to read carelessly and jump to erroneous conclusions, and I agree that he has done so here.

    But I disagree with this:

    The Law of Conservation of Information, despite ScienceBlog’s caricatures, provides cogent grounds for thinking that the information had to come from somewhere, i.e., from an information source.

    To calculate an information cost, we assume that the information had to come from somewhere, except in the case of design, which for some reason is exempt from that assumption.

    There are at least two problems with the claim that designers are active information sources:

    1) There’s no evidence that designers have an ability to find small targets without problem-specific information. Can anyone give an example of a designer pulling something from an informational void? The security of devices like combination locks and passwords depends on humans’ inability to do so.

    2) As I said in 44, if designers have a high probability of successfully finding targets, then design is nothing more than reshuffling existing information. By definition of active information, it can only be produced by something that is unlikely to find targets. That is, it can only come about through luck.

  47. rvb8:

    This LAW, of the “conservation of information”, why exactly is it a LAW?

    I submit that law is a poor word choice. From the paper:

    Thus, instead of LCI constituting a theorem, it characterizes situations in which we may legitimately expect to prove a conservation of information theorem.

    We might be tempted to see this as a tacit admission that there are situations in which information is not conserved, but everywhere else in the paper, Marks and Dembski characterize this law as universal. They even present the challenge of finding cases in which the law doesn’t hold.

    Unfortunately for Marks and Dembski, coming up with such cases is trivially easy. As they say in the paper, the ways in which alternative searches can be instantiated is “endlessly varied”, and their statement of the LCI puts no constraints on our assumptions regarding that instantiation. We can always assume that it came from a higher-order search space that contains only efficient searches.

    And if anyone thinks that this space of efficient searches needs to be accounted for by an even higher-order search, consider that this logic demands an infinite regress of search spaces.

  48.

    “If Karl Marx is likely to produce Das Kapital, then that high probability constitutes active information, does it not? So Marx is not creating, but rather “reshuffling”, information as he writes, and the active info still needs to be accounted for. It appears that all active info needs to be regressed at least to the origin of the universe.”

    R0b, this is a good point. Do intelligent agents create new information (active or CSI) or do they simply shuffle around existing information? I think ID argues the latter, though others (Joseph for example) have told me that intelligent agents create really new information. How strong is the CoI law supposed to be?

  49.

    Dr. Dembski, Mark Chu-Carroll has now responded to this response.

  50. R0b wrote:

    Unfortunately for Marks and Dembski, coming up with such cases is trivially easy. As they say in the paper, the ways in which alternative searches can be instantiated is “endlessly varied”, and their statement of the LCI puts no constraints on our assumptions regarding that instantiation. We can always assume that it came from a higher-order search space that contains only efficient searches.

    No we can’t. You measure the fraction of “efficient” functions from the total number of elements in the next largest set inducing an average performance equal to blind search. (Indirectly, we’re measuring the number of elements we’d have to remove from a baseline set to get this “efficient” set.) See my reply on the other thread.

    Atom

  51. Hoki,

    Nature, operating freely is a blind search.

    As I said in comment 12:

    That said, in a non-ID scenario the word “search” does not belong, as there isn’t anything to search for. Only survival counts – well, that and the ability to reproduce.

  52. Joseph (#52):

    Nature, operating freely is a blind search.

    Again, this is from the abstract of the Dembski/Marks paper:

    Searches that operate by Darwinian selection, for instance, often significantly outperform blind search.

    So, nature, operating freely does not include Darwinian selection? What, then, is nature operating freely?

  53. I thought I’d go back to the start of the dialogue, so I went to Mark C. Chu-Carroll’s first article about Dembski.

    Dembski has been trying to apply the NFL theorems to evolution: his basic argument is that evolution (as a search) can’t possibly produce anything without being guided by a supernatural designer – because if there wasn’t some sort of cheating going on in the evolutionary search, according to NFL, evolution shouldn’t work any better than random walk – meaning that it’s as likely for humans to evolve as it is for them to spring fully formed out of the ether.

    This would have to be the most accurate description of ID by an anti-ID proponent that I have seen. (Which is kinda sad when you think about it.) The only thing that is wrong with it is that the word ‘supernatural’ should read ‘intelligent’.

    This doesn’t work for a very simple reason: evolution doesn’t have to work in all possible landscapes. Dembski always sidesteps that issue.

    So the search just happens to be one able to take advantage of the structure of the landscape? What an interesting coincidence.

    Let me pull out a metaphor to demonstrate the problem. You can view the generation of a notation for a real number as a search process. Suppose you’re given π. You first see that it’s close to 3. So the first guess is 3. Then you search further, and get closer – 3.14. That’s not quite right. So you look some more, and get 3.141593. You’ll get closer and closer to a notation that precisely represents π. Of course, for π, you’ll never get to an optimum value in decimal notation; but your search will get progressively closer and closer.

    Unfortunately, most real numbers are undescribable. There is no notation that accurately represents them. The numbers that we can represent in any notation are a miniscule subset of the set of all real numbers. In fact, you can prove this using NFL.

    Ok, I think I’m with you so far…

    If you took Dembski’s argument, and applied it to numbers, you’d be arguing that because most numbers can’t be represented by any notation, that means that you can’t write rational numbers without supernatural intervention.

    Erg. I was going to let your earlier slipup pass, but you’ve gone and based your counterargument on it. If you replace this with an accurate description you get:

    “If you took Dembski’s argument, and applied it to numbers, you’d be arguing that because most numbers can’t be represented by any notation, that means that you can’t write rational numbers without intelligent intervention.”

    After that, I was a bit too discouraged to keep reading.

  54.

    Bill Dembski (30),

    Sorry to respond slowly — it’s the end of the semester.

    I’m not sure why you equate environments with Nature writ large.

    Every environment has an environment, except for the universal environment. What is a closed environment but a thermodynamically closed system — a modeling fiction that is sometimes useful? When you say that information has entered a material system from a non-material source, a methodological naturalist must contend that your accounting is an artifact of your framing of the informed material system. We could play with all of our matryoshka dolls, but I suggest we go straight to the biggest.

    I’m really not picking on you here. I no longer believe that anyone can speak of the objective probability of the universe (perhaps multiverse) being what it is. Juergen Schmidhuber was a bit obnoxious in presenting his inductive bias as the Great Programmer religion — the idea that the universe unfolds as a dovetailing program runs. But I think he correctly indicates that our explanations of nature begin with unprovable assumptions about the nature of nature.

    An environment with Karl Marx, paper, and pen in it will output Das Kapital. Environments, it seems, can be quite cozy and the information they contain and the sources from which they obtain it may be studied and assigned probabilities.

    Yes, Marx must have found the reading room of the British Museum cozy. He spent much of his time there, over a period of 12 years, surveying the economics literature. There are historians of ideas who explain Marx as a product of his times, much as they do Darwin.

    Shall we next set up the 1866 holdings of the British Museum’s library as the target of a search?

  55.

    P.S.–I would ask you how to make Karl Marx into a repeatable experiment, but it brings to mind The Boys from Brazil.

  56.

    “After that, I was a bit too discouraged to keep reading.” You should have kept up, because Chu-Carroll points out that the paper implies that human intelligences don’t create information either: the original intelligent agent created all the information there is. We just shuffle it around.

  57. StephenA @54

    This doesn’t work for a very simple reason: evolution doesn’t have to work in all possible landscapes. Dembski always sidesteps that issue.

    So the search just happens to be one able to take advantage of the structure of the landscape? What an interesting coincidence.

    Not really. We see the search mechanisms that work in this environment working. We don’t see search mechanisms working that don’t work in this environment. Hardly surprising.

    For my own edification, is Dr. Dembski now admitting that the mechanisms identified by modern evolutionary theory do, in fact, result in the evolution we observe? I get the impression that he has stepped back (or up) a level and is now taking the position that the environment itself is intelligently designed. Is my understanding accurate?

    If so, is ID now a type of fine tuning argument?

    JJ

  58. Hoki:

    So, nature, operating freely does not include Darwinian selection?

    As I said there isn’t anything to search for.

    So Darwinian selection in a scenario without something to search for would be nature, operating freely.

    Darwinian selection with a target is not nature, operating freely.

  59. David Kellogg:

    the original intelligent agent created all the information there is. We just shuffle it around.

    We tap into it and use it.

    Geneticist Giuseppe Sermonti, in his book “Why is a Fly Not a Horse?” tells us in chapter VIII (“I Can Only Tell You What You Already Know”):

    An experiment was conducted on birds (blackcaps, in this case). These are diurnal Sylviidae that become nocturnal at migration time. When the moment for departure comes, they become agitated and must take off and fly in a south-south-westerly direction. In the experiment, individuals were raised in isolation from the time of hatching. In September or October the sky was revealed to them for the first time. Up there in splendid array were the stars of Cassiopeia, of Lyra (with Vega) and Cygnus (with Deneb). The blackcaps became agitated and, without hesitation, set off flying south-south-west. If the stars became hidden, the blackcaps calmed down and lost their impatience to fly off in the direction characteristic of their species. The experiment was repeated in the spring, with the new season’s stars, and the blackcaps left in the opposite direction: north-north-east! Were they then acquainted with the heavens when no one had taught them?

    The experiment was repeated in a planetarium, under an artificial sky, with the same results!

  60. Mr StephenA,

    Actually, Mr Chu-Carroll was being loose in his description of evolution as a search strategy. NFL would say that there are some spaces evolution does well in, some spaces where it equals a random walk, and some spaces where it does worse than a random walk, so that on average it equals the random walk in performance across all spaces.

    By accepting NFL, Dr Dembski and the rest of us have to accept that evolution works, full stop. Really, the only remaining issue is whether the universe we inhabit is a search space tuned to make evolution easy, or not.

    One approach to this question is to look at universes (fitness functions) where evolution fails to perform as well as a random walk. Dr David Goldberg at the University of Illinois studies deception in genetic algorithms.

    Imagine a fitness surface like a bowl, with one point sticking up from the lowest point to reach just a little bit above the rim. That is a deceptive fitness function. All the information points away from the optimum. By tuning the parameters of the fitness function, it is possible to force an evolutionary algorithm to perform worse than a random search.

    Is our universe deceptive? Or are its laws monotonic and regular over the scale of life in size, temperature and pressure? To the extent that the laws are regular, we should expect that we live in an evolution friendly universe. To the extent that the laws are deceptive, if we still saw evolution work, that would be evidence of some interference or assistance.

  61. Atom:

    No we can’t. You measure the fraction of “efficient” functions from the total number of elements in the next largest set inducing an average performance equal to blind search.

    Atom, four answers to this:

    1) I’m not sure what you mean by “next largest”. Next to what? A lot of sets of functions can have an average performance the same as the null search, and some of the sets can be very small. Consider the set that consists of a single function in which every point has a fitness of 0. For some algorithms, this will result in a performance equal to the null search.

    2) I don’t see anything in the paper that states or implies the bolded part above. If your idea remedies this problem, then Marks and Dembski need to add it as a condition to the LCI.

    3) Having said that, I don’t think it remedies the problem. Consider a case in which q=2*p. To falsify the LCI, we need to show that more than 1/2 of the higher-order space consists of searches that succeed with a probability of at least q. We can define our higher-order space so that, say, two-thirds of it consists of these “good” searches, and the other third consists of searches that are bad enough to offset them, so the average of the whole set is the same as the null search.

    4) Your condition doesn’t seem to be generally applicable. See my comment here.

  62. Good morning R0b.

    1) I already discussed this trivial set in a previous post and said we’re looking for the next largest set from the “reduced” set. Since Dembski/Marks’ paper begins with a set-up where someone shows an improved performance over blind search (such as by using a fitness function, f1), we begin with that set and add to the higher level space until we reach a null performance baseline. Then we measure the fraction of “good” functions (with efficiency at least as good as the first proposed function, f1) to this total set. According to the paper, the informational cost of this reduction will be at least the active information.

    2) You are completely correct, though I believe this is implied in the paper due to the way they set up the problem. It is a straightforward extension of their work and I agree they should probably state it explicitly.

    3) If you can do what you propose – begin with a higher-level search space whose performance averages to blind search on the lower-level search, then reduce that set to a good fitness function that increases performance such that the active information gained is greater than the informational cost incurred by your reduction – I will concede. If my ideas were not what Dembski and Marks had in mind with their paper, they may clarify and argue against your point, but I won’t. So you will have proved your point to me, at the very least.

    Atom

  63. Joseph:

    As I said there isn’t anything to search for.

    So Darwinian selection in a scenario without something to search for would be nature, operating freely.

    Darwinian selection with a target is not nature, operating freely.

    From page 8 of the Dembski/Marks paper:

    In other words, viability and functionality, by facilitating survival and reproduction, set the targets of evolutionary biology. Evolution, despite Dawkins’s denials, is therefore a targeted search after all.

    Do you think that you and Dembski agree with each other?

  64. R0b, continued,

    4) I posted a reply here. Although a fitness function method would not work well when we’re using a different search strategy, a similar way of setting a baseline could be used in other cases as well. But since I can’t enumerate all cases (there being an infinite number of them), I can explain the applicability on a case-by-case basis until it is clear to you that the problem you posed, while insightful and demonstrating a place where the paper could have been more explicit, does not represent an insurmountable obstacle.

    Atom

  65. Hoki,

    See comment 12

  66. Joseph,

    This is just going around in a circle and getting quite tedious.

    Dembski is arguing that Darwinian evolution is teleological and a targeted search. Agree?

    If not, why not?

  67. Atom, you were right and I was wrong. You’re a genius, man. (Not that it takes a genius to be right when I’m wrong.)

    Not only does the LCI follow from your condition, but you’ve also pointed the way to much easier proofs for the three CoI theorems in the paper.

    Here’s a way that the LCI can be derived from your condition.

    Definitions:
    p,q: Same as in the paper
    O2: Higher-order space
    Q: Set of “good” functions in O2
    sum(X): Sum of all probabilities in set X
    |X|: Cardinality of set X

    Derivation:
    1. Since the probabilities in Q are at least q:
      sum(Q) >= q*|Q|

    2. Since sum(O2) >= sum(Q)
      sum(O2) >= q*|Q|

    3. Divide both sides by |O2|:
      sum(O2)/|O2| >= q*|Q|/|O2|

    4. Your condition is sum(O2)/|O2| = p. So:
      p >= q*|Q|/|O2|

    5. So:
      p/q >= |Q|/|O2|

    And that’s the LCI.

    And since your condition obviously holds in the scenarios posited by the three CoI theorems, the above constitutes a simple proof for those theorems also.

    Unless I’m wrong again. Did I mess up somewhere?
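    The derivation can be checked numerically on a toy higher-order space. The numbers below are my own hypothetical choices, picked only so that Atom’s condition sum(O2)/|O2| = p holds exactly:

```python
from fractions import Fraction as F

# Toy numeric check of steps 1-5: a higher-order space O2 of twelve
# searches, each tagged with its probability of hitting the
# lower-order target.  The values are illustrative only.
p = F(1, 10)                              # null-search probability
q = F(1, 2)                               # "good" search threshold
O2 = [F(1, 2), F(1, 2), F(1, 5)] + [F(0)] * 9
assert sum(O2) / len(O2) == p             # Atom's condition holds
Q = [s for s in O2 if s >= q]             # the "good" searches
# The bound from step 5: p/q >= |Q|/|O2|
print(p / q, F(len(Q), len(O2)))          # 1/5 1/6
```

    Here p/q = 1/5 while |Q|/|O2| = 2/12 = 1/6, so the step-5 inequality holds with room to spare.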

  68. Re #68

    I apologise for being too lazy to trace back all the posts – where did Atom’s condition:

    sum(O2)/|O2| = p

    come from?

    Also, even if

    p/q >= |Q|/|O2|

    holds, it is not the LCI unless you assume all members of O2 are equally probable. D&M do assume this when they write of their “epistemic rights” to assume a uniform probability distribution. But there are massive problems with this assumption, and it is key to the whole paper.

  69. R0b,

    I’ve just gone through your proof step-by-step and you are in fact correct: it is a simpler method of proving the COI. You didn’t make any mistakes in your derivation (that I saw) and the final step is equivalent to Dembski’s function-theoretic derivation.

    I wish I could take credit for being a genius, but you’re the genius who built the proof. So let’s just say we’re both pretty smart guys. :) (Feel free to share any credit for that discovery, as your proof was elegant.)

    Mark Frank,

    The condition

    sum(O2)/|O2| = p

    is based on the definition of O2, which is the next largest set containing Q as a proper subset and having an average performance (on the lower-level search) equal to null, blind search. This is what I said was the logical definition of our higher-order search space and, as R0b and I have shown, is a sufficient condition for the LCI to hold.

    As for your second objection, you can assume that O2 has a non-uniform probability distribution on its elements that makes “good” functions more likely than bad, the same way that O2 induces a higher probability on “good” elements in the original search space, O. Since the probability distribution on O2 is only one of many possible, you now have to explain what the cost of choosing that probability distribution over the others was. So you have a search-for-a-search-for-a-search. Dembski has proven a measure-theoretic version for probability distributions and demonstrated that the LCI still holds. So your regress doesn’t solve the problem, it only exacerbates it.

    Atom

  70. Mark Frank,

    On further reflection I think you may not even need the uniformity assumption to get from step 5 (p/q >= |Q|/|O2|) to the LCI.

    I don’t believe we have made use of the uniformity assumption in the initial steps (steps 1-5), or in our definitions. (Though I could be wrong…) All that we’ve said is that p is some probability and that q is a greater probability than p, so it has been improved over p. Furthermore, sum(O2)/|O2| = p, so the O2 set has on average the same performance as the original search p and so can serve as an objective baseline. Those were the important definitions, and I don’t think we’d have to change anything if p differed from a uniform search probability, since we left p as a variable. The above will work for any value of p.

    From there, we do the following to get LCI:

    6. Rearrange, by multiplication and division
    |O2|/|Q| >= q/p

    7. Take the log (base 2) of both sides
    log(|O2|/|Q|) >= log(q/p)

    8. log(q/p) is the active information (I+), by definition
    log(|O2|/|Q|) >= I+

    9. Break up log, using quotient rule
    log(|O2|) – log(|Q|) >= I+

    10. Rearrange logs and factor out -1
    -[log(|Q|) - log(|O2|)] >= I+

    11. Combine, using quotient rule, we get
    -log(|Q|/|O2|) >= I+

    …which is the LCI.

    Atom

  71. Atom

    Loads of comments I could make – but to quickly address your last post. Taking logs makes little difference – except to confuse things slightly.

    The LCI is that:

    -log(probability(Q))>=I+

    But probability(Q) only equals |Q|/|O2| if you assume that all functions in O2 are equally likely i.e. a uniform probability distribution.

  72. Atom

    More on uniform probability distributions (UPDs)

    D&M’s measure-theoretic version assumes a UPD itself. It assumes that all pdfs across the search space are equally probable. So it can’t be used to prove that a UPD is justified.

    See Häggström 2007 (pp 6-7) for some of the problems with UPDs. One of them is that UPDs are not closed under non-linear transformations. In most real situations there is more than one UPD to choose from. Häggström uses the example of the size of a square. Do we say all lengths of the side are equally likely or all areas are equally likely? We can’t have both. Something similar applies to choosing algorithms. For example, M&D give three “definitions” of an algorithm. All three assume UPDs. However, in at least some cases, the UPD assumptions of the definitions are incompatible.

    I can illustrate with a simple example. Suppose:

    The space we are searching (Ω) is the digits 1, 2 and 3.

    The target (T) is the digit 1.

    So p=1/3

    Using the function-theoretic approach, let the other space (Ω′) be the two letters a and b.

    Then here is the set of all possible functions from Ω′ to Ω and the associated value of q:

    a b q
    1 1 1
    1 2 0.5
    1 3 0.5
    2 1 0.5
    2 2 0
    2 3 0
    3 1 0.5
    3 2 0
    3 3 0
    We could assume that each of these is equally likely. But each function is also associated with a probability distribution function on Ω. Thus (sorry about the formatting):

    a b -1- -2- -3-
    1 1: 1.0 0.0 0.0
    1 2: 0.5 0.5 0.0
    1 3: 0.5 0.0 0.5
    2 1: 0.5 0.5 0.0
    2 2: 0.0 1.0 0.0
    2 3: 0.0 0.5 0.5
    3 1: 0.5 0.0 0.5
    3 2: 0.0 0.5 0.5
    3 3: 0.0 0.0 1.0

    And you will see that there are only six unique pdfs (e.g. 1 2 and 2 1 give the same pdf).

    But in the measure-theoretic version M&D assume that all pdfs are equally probable. In which case the function 1 2 and the function 2 1 should count as one algorithm. Which UPD is it?
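    The enumeration above is easy to reproduce mechanically; this sketch (my own restatement of the example) lists all nine functions from Ω′ = {a, b} to Ω = {1, 2, 3}, computes q for the target T = {1}, and confirms that only six distinct pdfs arise:

```python
from itertools import product
from collections import Counter

# All maps from {a, b} to {1, 2, 3}, encoded as the pair (f(a), f(b)).
omega = (1, 2, 3)
funcs = list(product(omega, repeat=2))     # 9 functions in total

def q(f):
    # Probability of hitting the target {1} when a or b is chosen
    # uniformly and mapped through f.
    return sum(1 for y in f if y == 1) / 2

def pdf(f):
    # The probability distribution each function induces on omega.
    c = Counter(f)
    return tuple(c[x] / 2 for x in omega)

assert q((1, 1)) == 1.0 and q((1, 2)) == 0.5 and q((3, 3)) == 0.0
assert pdf((1, 2)) == pdf((2, 1))          # two functions, one pdf
print(len(set(pdf(f) for f in funcs)))     # 6 unique pdfs
```

    This makes the clash concrete: a uniform distribution over the nine functions assigns the pdf (0.5, 0.5, 0.0) probability 2/9, while a uniform distribution over the six pdfs assigns it 1/6, so the two UPD assumptions are incompatible.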

  73. I see that WordPress has turned my greek omegas to ?. I hope it still makes sense.

  74. Folks:

    Pardon a quick note:

    H = -SUM(pi log pi) does not at all assume a uniform probability distribution. (We do use info theory with, say, English text, which has a significant degree of redundancy, i.e., non-uniformity of probability. Also cf Bradley’s working out of ICSI for 110-aa Cytochrome-C here, which treats of the non-uniformity per Yockey et al.)

    [I would be most interested to find out that the laws of physics and chemistry had in effect written into them the DNA code, processing algorithms and associated molecular nanomachinery, and onward the integration of proteins to form the complex, interwoven systems of life in the cell! If that is the effective objection to the inference to design on seeing FSCI in DNA and its cognates, that looks a lot like jumping from the frying pan into the fire.]

    Also, that much-derided uniform probability distribution is saying that this is the maximum uncertainty case, where the symbols i are least constrained. (It is a generally accepted principle of probability that, absent reason to constrain otherwise, we default to equiprobable individual outcomes. Bernoulli and Laplace among others, if I recall. A classic and effective approach to statistical mechanics is based on just that.)

    We can then make shifts to account for non-uniformity; and H the average information per symbol is an application of that.
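    The point that H handles non-uniform distributions directly can be shown in a couple of lines (a minimal sketch; the two four-symbol distributions are made up for the illustration):

```python
from math import log2

# Shannon's average information per symbol, H = -sum(p_i * log2(p_i)).
# No uniformity assumption is built in: H accepts any distribution.
def H(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

uniform = [0.25] * 4                   # maximum-uncertainty case
skewed = [0.5, 0.25, 0.125, 0.125]     # redundant, English-like skew
print(H(uniform), H(skewed))           # 2.0 1.75
```

    The uniform case gives the maximum of 2 bits per symbol; the skew lowers the average information, which is exactly the shift for non-uniformity described above.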

    GEM of TKI

    PS: Atom et al — good stuff.

  75. Mark Frank:

    The LCI is that:

    -log(probability(Q))>=I+

    Marks and Dembski’s stated formulation of the LCI is vague on the condition of that probability; that is, probability(Q) given what? But their examples make it clear that they’re talking about a null higher-order search. Notice that each of their three CoI theorems ends with the statement, “Equivalently, the (higher-order) endogenous information … is bounded by the (lower-order) active information…”

  76. Mark Frank:

    D&M’s measure-theoretic version assumes a UPD itself. It assumes that all pdfs across the search space are equally probable. So it can’t be used to prove that a UPD is justified.

    Yes, as they regress probabilities up the hierarchy, they keep moving their assumption of uniformity to a higher level. Ultimately they justify that assumption by the principle of insufficient reason. As you point out, Häggström and others have explained why this justification doesn’t work.

    As you also point out, Marks and Dembski’s “information cost” is arbitrary, as it depends on how we define the higher-order space. Without Atom’s condition, the information cost can range from 0 to infinity. With Atom’s condition, the lower bound is at least log(q/p), but the upper bound is still infinity.

  77. Mark Frank:

    where did Atom’s condition:

    sum(O2)/|O2| = p

    come from?

    Atom’s position is that this condition is implied in Marks and Dembski’s work. Indeed, in each of their examples, they define the higher-order space with a symmetry that evenly distributes probabilities over the lower-order space, which satisfies Atom’s condition.

    This symmetry is how Marks and Dembski neutralize any deviation from uniformity. Here’s the game:

    1) They posit a completely unbiased search space.

    2) You counter with a fitness function (or search space translation, or probability distribution, etc.) that biases some points over others.

    3) They counter with a uniform space of fitness functions (or of search space translations, or of probability distributions, etc.) that again renders the original search space unbiased.

    4) etc.

    Without Atom’s condition, the LCI is easily falsified. With Atom’s condition, the LCI is easily proven. Interestingly, the paper says that the LCI is neither falsified nor provable.

  78. R0b

    I think this thread is pretty much dead, but for completeness.

    “With Atom’s condition, the LCI is easily proven.”

    I don’t think that is true unless you also assume whichever UPD fits your needs (see #72 above).

  79. Mark, yeah, the thread is mostly dead. Where’s Miracle Max when you need him?

    Yes, your function-theoretic higher-order space does not meet the conditions of the measure-theoretic CoI. A measure-theoretic higher-order space would look like this (the set should be infinite, but I’m setting the granularity to 1/3 to make it finite):

    -1- -2- -3-
     1   0   0
     0   1   0
     0   0   1
    2/3 1/3  0
    2/3  0  1/3
    1/3 2/3  0
     0  2/3 1/3
    1/3  0  2/3
     0  1/3 2/3
    1/3 1/3 1/3

    But this set has something in common with your function-theoretic set, namely that the average of the distributions is:

    1/3 1/3 1/3

    In each of Marks and Dembski’s three CoI theorems, the assumptions of the theorem entail a uniform average distribution on the lower-order space. This means that Atom’s condition is met, and the LCI conclusion follows.
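    The granularity-1/3 space and its average can be verified directly; this sketch simply enumerates the same ten distributions with exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import product

# All pdfs on {1, 2, 3} whose entries are multiples of 1/3 -- the
# finite measure-theoretic higher-order space at granularity 1/3.
grid = [F(k, 3) for k in range(4)]
space = [d for d in product(grid, repeat=3) if sum(d) == 1]
assert len(space) == 10                     # the ten rows listed above

# Column-wise average of the ten distributions:
avg = tuple(sum(col) / len(space) for col in zip(*space))
print(avg)  # (Fraction(1, 3), Fraction(1, 3), Fraction(1, 3))
```

    The average distribution is uniform, so Atom’s condition is met for this space.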

  80. R0b

    It may well be that we agree but I am reluctant to overstate Atom’s position.

    Atom’s condition is that the average value of column one is 1/3. However, this does not necessarily mean that the probability of finding 1 in the lower order set from this set of searches is 1/3. For that to be true you need an additional assumption that each row in the set of searches is equally probable. This is the UPD assumption. It is the combination of this (unreasonable) assumption and Atom’s condition that leads to LCI.

    Maybe that was what you were saying – I just wanted to be clear.

  81. Mark, you’re correct — both assumptions are needed. Marks and Dembski’s one-sentence statement of the LCI doesn’t explicitly state either of them, but elsewhere they state that the comparison is between the lower-order active information and the higher-order endogenous information, which entails the higher-order UPD assumption. Atom’s condition, on the other hand, isn’t stated anywhere, although all of their examples meet it.

  82. Wm. Dembski writes:
    An environment with Karl Marx, paper, and pen in it will output Das Kapital.

    Not necessarily.

    Yet this sort of thinking demonstrates one of my pet peeves with the ID movement’s claims in this area – they must know the outcome (in this case, that Marx wrote Das Kapital) prior to being able to give their equations/claims/analogies/filters a chance at success.

    Sort of like how biblical creation scientists KNOW that the biblical version of history is 100% true, then seek facts and evidence to support their conclusion.

Leave a Reply