
Yet Again an ID Opponent’s Argument is Built on a Foundation of Equivocations

Elizabeth Liddle quotes William Dembski when he writes:  “by intelligence I mean the power and facility to choose between options–this coincides with the Latin etymology of ‘intelligence,’ namely, ‘to choose between’”  The quotation comes from “Intelligent Design Coming Clean,” an essay Dembski published in 2000.

Liddle leaps from Dembski’s definition to the following dubious conclusion:  “Dembski has produced an operational definition of ‘intelligence’ that does not require ‘intention’ which he specifically excludes as a ‘question of science’.”

Liddle employs two rather clumsy equivocations to misrepresent Dembski’s work.  Let’s see how she does it.  First, she equivocates on the word “choose” from Dembski’s quote.  What does “choose” mean?  Let’s go to the dictionary (I’m using the World English Dictionary), and we find “choose” means:

1. to select (a person, thing, course of action, etc) from a number of alternatives

2. to consider it desirable or proper: I don’t choose to read that book

3.  to like; please: you may stand if you choose

What do all of these senses of the word have in common?  They all presuppose the prior existence of a “chooser,” i.e., an agent who chooses between alternatives, and in context this is exactly what Dembski meant in his essay.  I defy anyone to click on the link above, read the essay and come to any other conclusion.  He absolutely does NOT have a natural process in mind.

Here is where the equivocation comes in.  Liddle says a natural process can “choose” in the way a wire mesh “chooses” between small rocks that it allows to pass through and large ones that it does not (she later even calls this choosing a “filter”).  And this is what Liddle means when she writes:  “My point is that I think that IF we use Dembski’s definition (which excludes intention as a criterion) then he is correct – there is a characteristic signature of patterns that have emerged from a process that involves “choice between options”. Natural selection is one such process, and I think that’s why it’s products resemble in so many ways, the products of human design.”

Give me a break.  Does anyone with even the remotest familiarity with Dembski’s oeuvre really believe he would define intelligence in such a way as to include natural selection within the definition?  This is not a close question.  Liddle’s assertion beggars belief, because the entire thrust of Dembski’s work for the last 20 years is exactly the opposite of what Liddle attributes to him.

But wait.  There’s more.  One need not be familiar with Dembski’s other work to know that Liddle is misrepresenting him.  In the very essay from which Liddle quotes, Dembski writes:  “Intelligent design regards intelligence as an irreducible feature of reality.  Consequently it regards any attempt to subsume intelligent agency within natural causes as fundamentally misguided . . . “  In other words, in the very essay upon which Liddle relies, Dembski says her interpretation of his definition of “intelligence” is “fundamentally misguided.”

This brings us to Liddle’s second equivocation.  She is quite correct when she states that in his essay Dembski says that the “intentionality” of the designer is not a question for science.  Here, however, Liddle equivocates between “intention” in the sense of “ultimate purpose” and “intention” in the sense of “employing means to achieve a result.”

Perhaps an example will make this clearer.  Let’s say a car builder (John) decides to build a car.  At one level he has an “intention” to “employ means to achieve a result.”  In other words, John acquires the materials necessary to build the car and then assembles the materials into a car.  At a wholly different level, John might have an ultimate purpose for building the car.  Why did John build the car?  He built it so he could have transportation back and forth to his job.

When Dembski says that “intentionality” is excluded from science he means that an inquiry into the designer’s ultimate purpose is not a scientific question, and of course he is correct.  But Liddle equivocates and attributes to Dembski a statement that again runs counter to his entire oeuvre.  It is difficult to understand how anyone would expect us to believe that William Dembski thinks the process of designing a living thing does not require intentionality in the sense of “employing means to achieve an end.”

In summary, we see that Liddle has not been fair to Dembski’s work and has grossly misrepresented his ideas.


37 Responses to Yet Again an ID Opponent’s Argument is Built on a Foundation of Equivocations

  1. I’m not seeing the point of this dispute.

    There is no widespread agreement on what is meant by “intelligence.” There is a whole discipline of AI (artificial intelligence) that does not require intentionality as a requirement for intelligence. Many AI folk argue that intentionality is like phlogiston (i.e. doesn’t exist, is a term in a mistaken theory).

    When we use poorly defined terms such as “intelligence”, we should expect disagreement over the meanings of those terms.

    If by “Intelligent Design”, you really mean “Design by a conscious agent”, maybe you should change the name to that. Then we could all start disagreeing about the meaning of “conscious” instead of about the meaning of “intelligent.”

  2. NR:

Kindly provide a definition of “life,” as in “Biology is that science which studies life.”

    GEM of TKI

  3. “Life”: that which is studied by biologists.

    This goes well with:

    “Mathematics”: that which is studied by mathematicians;
    “Physics”: that which is studied by physicists.

    There is no problem here, since “life” is not an important technical term within biology, just as “mathematics” is not a technical term within mathematics and “physics” is not a technical term within physics.

    It does not matter to me whether “intelligent” is precisely defined, as long as it is not used as a technical term. However, the post on which we are commenting is arguing as if it is a technical term.

  4. Neil, I would say that you are speaking of AI (artificial intelligence) as a technical term. If that is the case, intelligence must have a clear definition, no? And if AI, as you say, stipulates that it isn’t necessary to be characterized by intentionality, then intentionality would be an implied part of actual intelligence, no?

    I think so.

  5. Neil, I would say that you are speaking of AI (artificial intelligence) as a technical term. If that is the case, intelligence must have a clear definition, no?

    No. People are not arguing about whether a suggested algorithm meets the definition of “intelligence.” So it is not a technical term.

  6. You suggested that intention was something that isn’t required for AI. Now, why the need for that distinction if, one, an algorithm cannot have it (of itself) anyway, and two, if it isn’t an important part of actual intelligence?

  7. Elizabeth Liddle quotes William Dembski when he writes: “by intelligence I mean the power and facility to choose between options–this coincides with the Latin etymology of ‘intelligence,’ namely, ‘to choose between’”

    And takes it out of context to boot.

  8. You suggested that intention was something that isn’t required for AI.

    If you carefully reread my comment, you will see that I did not suggest that. Rather, I stated that many AI proponents suggest that. Personally, I am a critic of AI.

  9. Well, thank you for this post. I had asked earlier if Elizabeth had taken the quote out of context. I’m disappointed in her, as she has recently scolded others for taking her own words out of context.

    I think the trend of this post and commentary support the comment made earlier today (Was it by BA?) that this whole Darwinian endeavor is a matter of word games.

  10. Neil @8,

    OK. But you are not addressing my point. If you are implying that intention isn’t a necessary element of intelligence, I think you’ve chosen an illustration that runs counter to that. Maybe you can explain further how you think your example counters the idea of intentionality as integral to intelligence, or give some other example or arguments.

  11. There is a whole discipline of AI (artificial intelligence) that does not require intentionality as a requirement for intelligence.

    So, AI researchers are not intentionally trying to develop artificial intelligence?

  12. I am not “misrepresenting” Dembski. I do not ascribe to Dembski my own conclusions from his premises and operational definitions.

    I know perfectly well that Dembski draws different conclusions from them. I think he is incorrect.

  13. This is a critique of Dembski’s paper that I posted here a few years back:

    http://www.uncommondescent.com.....ment-84107

    I think there is widespread misunderstanding in this community as to what “equivocate” means. To “equivocate” is to use a word in one sense to make an argument, and then apply your conclusions from that argument to another argument in which that word is used in another sense.

    I am not doing this. I am, at all stages, stating precisely how I am using any given term. And when I use Dembski’s operational definition of Intelligence, I am showing how that operational definition does not exclude evolutionary processes, whether or not Dembski intended it to. An operational definition has to lack ambiguity, which is why they are often difficult to devise.

    And my point, of course, is that Dembski’s argument does indeed hold water using his operational definition of Intelligence, but does not tell us anything about Intention because non-intentional processes, including evolutionary processes, are not excluded from his operational definition.

    Were he to revise it, and insist upon intention as part of his operational definition, his conclusion would no longer be valid.

    This is because we can account for biological patterns by a process that can select without a distal goal (intention).

    If you want to show that CSI is the signature of intention then you need to provide evidence that it can distinguish between processes with proximal and distal goals.

    None of Dembski’s papers do this.

  14. Here’s a shot at an operational definition of “intention”: the ability of a system to select a distal goal and then to select, from a range of options, courses of action that are most likely to achieve that goal.

    People can do this because we have the capacity to model possible futures, and can select our actions so as to maximise the probability of a particular future to come about.

    But that doesn’t mean that systems without that capacity cannot produce complex outputs that include solutions to proximal problems.
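    For what it’s worth, that operational definition is concrete enough to sketch in code. The following toy sketch is my own illustration, not Liddle’s (every name, probability, and goal string in it is hypothetical): the “intentional” system models possible futures for each available action and selects the action most likely to bring about its chosen distal goal.

```python
import random

# Hypothetical sketch of the operational definition above (all names and
# numbers are my own illustration): an "intentional" system selects a
# distal goal, models possible futures for each action, and picks the
# course of action most likely to bring that goal about.

def choose_action(actions, simulate, goal, n_samples=500, rng=None):
    """Return the action whose simulated futures most often reach `goal`."""
    rng = rng or random.Random(0)

    def success_rate(action):
        # Model possible futures by running the simulator many times.
        hits = sum(simulate(action, rng) == goal for _ in range(n_samples))
        return hits / n_samples

    return max(actions, key=success_rate)

# Toy world: "walk" reaches the goal 9 times in 10, "wander" 1 time in 10.
def simulate(action, rng):
    p = 0.9 if action == "walk" else 0.1
    return "other side of room" if rng.random() < p else "stuck"
```

    The capacity being modelled here is exactly the one described above: selecting actions so as to maximise the probability of a particular future.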

  15. Meleager:

    So, AI researchers are not intentionally trying to develop artificial intelligence?

    Yes, of course they are. Neil is saying that intention is not a criterion used by AI researchers when developing intelligence.

    At the heart of many AI systems are evolutionary algorithms, because they are excellent learning systems. However, building an intentional AI system is also in principle perfectly possible, and AI systems already do this. Watch the YouTube videos of ASIMO in various incarnations – one ASIMO can formulate a high-level “intention” – to cross a room without stepping on moving obstacles – and selects its actions in order to further that goal, constantly updating its proximal goals (e.g. “move to the right”) in the light of whether or not the action required to achieve that proximal goal will further the more distal goal (“get to the other side of the room”).

    The next stage will be getting ASIMO to formulate an increasingly deep hierarchy of goals from distal to proximal: “Make master happy” – “make master a cup of tea” – “go to kitchen” – “avoid furniture” – “step to the right”.

    Intention, like most things in this world, is a continuum.
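    The goal hierarchy described above can be sketched in a few lines (a hypothetical illustration of the idea, not anything resembling ASIMO’s actual software): goals are ordered from distal to proximal, and the current target is simply the most proximal sub-goal not yet achieved, re-selected as conditions change.

```python
# Hypothetical illustration (not ASIMO's real control code): a hierarchy
# of goals ordered from distal ("make master happy") to proximal
# ("step to the right").

goal_hierarchy = [
    "make master happy",
    "make master a cup of tea",
    "go to kitchen",
    "avoid furniture",
    "step to the right",
]

def current_target(hierarchy, achieved):
    """Return the most proximal goal not yet achieved, or None if done."""
    for goal in reversed(hierarchy):  # proximal goals come last in the list
        if goal not in achieved:
            return goal
    return None
```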

  16. avocationist:

    Well, thank you for this post. I had asked earlier if Elizabeth had taken the quote out of context. I’m disappointed in her, as she has recently scolded others for taking her own words out of context.

    I think the trend of this post and commentary support the comment made earlier today (Was it by BA?) that this whole Darwinian endeavor is a matter of word games.

    There is a huge disconnect here. I did not take the quote “out of context” – and in fact provided a link to the context.

    More to the point, I did not use the quote (as unfortunately many do, though not, in general, here) to ascribe to Dembski views he does not hold, but to make an alternative argument from his premise.

    This is quite different.

    An analogy:

    A poses a mathematical problem and provides an answer.

    B takes the same problem and provides a different answer.

    Do you accuse B of taking the “quote” of the problem “out of context”? No, you don’t. On the other hand if B then attributed her own answer to A, you certainly would.

  17. Elizabeth said, in the other thread:

    Dembski has produced an operational definition of “intelligence” that does not require “intention” which he specifically excludes as a “question of science”.

    Your argument is with Dembski, not with me.

    Then here, she says here:

    More to the point, I did not use the quote (as unfortunately many do, though not, in general, here) to ascribe to Dembski views he does not hold, but to make an alternative argument from his premise.

    So, is our argument with Dembski, who never intended to imply that intelligence could be divorced from intention (even if we could not discern the specifics of the intention), and flatly stated it could not be, or with you, who have for some reason decided that it is okay to take the semantics of what Dembski wrote, repurpose it to mean something he never meant, and then quote Dembski and refer to him as if it was somehow relevant to your argument for non-intentional intelligent agency, and then say our argument is with him (as if that is what he meant) and not you?

  18. Dembski refers to intelligence as necessarily being intentional, even if we don’t know the exact intention, and thus capable of producing design that cannot be achieved without said intention.

    Interpreting that to mean that Dembski left the door open (or else, why refer to Dembski and say our argument is with him?) for some kind of “unintentional” intelligent design would be at best a profound error.

    From dictionary.com:

    Equivocate: to use ambiguous or unclear expressions, usually to avoid commitment or in order to mislead; prevaricate or hedge.

  19. So, when Elizabeth says “our argument is with Dembski”, when our argument is clearly not with Dembski but rather her semantic diversion from any argument Dembski has made, and then when called on it say that she has used the same argument (when clearly she has not, since in Dembski’s argument intention was a necessary aspect of intelligence) but only come to different conclusions, is that an equivocation?

    Is it an equivocation to say one did not make a finding of best explanation for ID in the prime-number radio signal case because of recognition of the CSI of the information, but rather used a Bayesian method that relies upon the exact same kind of probability comparison?

    Is it an equivocation to use two words that fundamentally mean opposing concepts, like “unintentional design”, or to claim that intelligence can be an unintentional process?

  20. Barry:

    Elizabeth Liddle quotes William Dembski when he writes: “by intelligence I mean the power and facility to choose between options–this coincides with the Latin etymology of ‘intelligence,’ namely, ‘to choose between’” The quotation comes from “Intelligent Design Coming Clean,” an essay Dembski published in 2000.

    Liddle leaps from Dembski’s definition to the following dubious conclusion: “Dembski has produced an operational definition of ‘intelligence’ that does not require ‘intention’ which he specifically excludes as a ‘question of science’.”

    No, it is not a leap.

    Liddle employs two rather clumsy equivocations to misrepresent Dembski’s work. Let’s see how she does it. First, she equivocates on the word “choose” from Dembski’s quote. What does “choose” mean? Let’s go to the dictionary (I’m using the World English Dictionary), and we find “choose” means:

    1. to select (a person, thing, course of action, etc) from a number of alternatives

    2. to consider it desirable or proper: I don’t choose to read that book

    3. to like; please: you may stand if you choose

    What do all of these senses of the word have in common? They all presuppose the prior existence of a “chooser,” i.e., an agent who chooses between alternatives, and in context this is exactly what Dembski meant in his essay. I defy anyone to click on the link above, read the essay and come to any other conclusion. He absolutely does NOT have a natural process in mind.

    No, they do not “presuppose” a “chooser” who is an intentional agent. Look at your own first definition, which gives, as a synonym, “select”. Select is frequently used to describe non-intentional processes, not least in the phrase “natural selection”. More to the point, Dembski’s entire argument does not depend on the “chooser” or “selector” being an intentional agent. That he assumes it is, is his own equivocation, not mine.

    Here is where the equivocation comes in. Liddle says a natural process can “choose” in the way a wire mesh “chooses” between small rocks that it allows to pass through and large ones that it does not (she later even calls this choosing a “filter”). And this is what Liddle means when she writes: “My point is that I think that IF we use Dembski’s definition (which excludes intention as a criterion) then he is correct – there is a characteristic signature of patterns that have emerged from a process that involves “choice between options”. Natural selection is one such process, and I think that’s why it’s products resemble in so many ways, the products of human design.”

    Give me a break. Does anyone with even the remotest familiarity with Dembski’s oeuvre really believe he would define intelligence in such a way as to include natural selection within the definition? This is not a close question. Liddle’s assertion beggars belief, because the entire thrust of Dembski’s work for the last 20 years is exactly the opposite of what Liddle attributes to him.

    Of course I don’t think he intended to “define intelligence in such a way as to include natural selection within the definition”. Unfortunately he did. And he based his argument on it. Then he equivocates (not me, Dembski) and infers intention on the part of his “chooser”.

    Here is Dembski, from that paper (Intelligent Design as a Theory of Information):

    The principal characteristic of intelligent causation is directed contingency, or what we call choice. Whenever an intelligent cause acts, it chooses from a range of competing possibilities. This is true not just of humans, but of animals as well as extra-terrestrial intelligences. A rat navigating a maze must choose whether to go right or left at various points in the maze.

    Precisely. An “intelligent cause”, by Dembski’s own definition, is one that “chooses from a range of competing possibilities”. Just as “natural selection” does – from a range of “competing” genotypes it “chooses” (by virtue of differential reproduction) those that thrive best in the current environment. Sure, it’s trial-and-error choice – each generation throws up some novel options, and is faced by the Great Selector, environmental hazard. But Dembski clearly does not exclude trial-and-error choosing from “intelligent causation” because, right there, he gives the example of a “rat in a maze”. The rat in the maze has no foresight – it has to “choose” at random. If it takes an option that leads to a blind alley, it deletes that option from its repertoire. We say that the rat has “learned” the maze when all the blind options have been deleted. This is exactly analogous to what happens in Darwinian selection: a population has “adapted” to an environment (you could even say “learned to thrive in” an environment) when the blind alleys have been deleted (the genotypes that lead to death or sterility) and the alleys that lead to successful reproduction are left behind.

    And he goes on:

    Where in this scheme does complexity figure in?

    The answer is that it is there implicitly. To see this, consider again a rat traversing a maze, but now take a very simple maze in which two right turns conduct the rat out of the maze. How will a psychologist studying the rat determine whether it has learned to exit the maze. Just putting the rat in the maze will not be enough. Because the maze is so simple, the rat could by chance just happen to take two right turns, and thereby exit the maze. The psychologist will therefore be uncertain whether the rat actually learned to exit this maze, or whether the rat just got lucky. But contrast this now with a complicated maze in which a rat must take just the right sequence of left and right turns to exit the maze. Suppose the rat must take one hundred appropriate right and left turns, and that any mistake will prevent the rat from exiting the maze. A psychologist who sees the rat take no erroneous turns and in short order exit the maze will be convinced that the rat has indeed learned how to exit the maze, and that this was not dumb luck. With the simple maze there is a substantial probability that the rat will exit the maze by chance; with the complicated maze this is exceedingly improbable. The role of complexity in detecting design is now clear since improbability is precisely what we mean by complexity.

    See what he did there? He infers CSI in the pattern of correct turns exhibited by the rat. And how did that CSI get there? Because the rat, by trial and error, eliminated from its repertoire of options all those that did not lead to the exit from the maze. We can also deduce CSI from the pattern of “correct turns” exhibited by a population that adapts to its environment, and the CSI got there by exactly the same means – trial and error learning, or perhaps “learning”, although the only difference the scare quotes denote is that we do not have a nice discrete entity like a rat to attribute the learning to, but rather an entire system – self-replication with heritable variation in the capacity to replicate in the current environment. Which happens also to be exactly the system possessed by the rat herself: the capacity to repeat a sequence of operations with random variance (replication with random variance), rejecting those sequences that fail to result in exit from the maze, and retaining those that do (selection).
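    The trial-and-error scheme described here is easy to sketch. The following toy code is my own illustration (nothing from Dembski’s essay; the maze paths and numbers are made up): it contrasts blind whole-sequence luck with trial-and-error plus retention of successful turns – essentially the difference between the simple and the complicated maze.

```python
import random

# Hypothetical sketch (my own illustration): a maze whose exit requires one
# specific sequence of Left/Right turns. We compare blind whole-sequence
# chance with trial-and-error where successful turns are retained.

def chance_trials(path, rng):
    """Random whole-sequence attempts until one matches `path` exactly."""
    trials = 0
    while True:
        trials += 1
        if [rng.choice("LR") for _ in path] == list(path):
            return trials

def selective_trials(path, rng):
    """Try turns at random, but retain each turn that works (feedback),
    re-trying only the turn that failed."""
    trials = 0
    for correct in path:
        while True:
            trials += 1
            if rng.choice("LR") == correct:
                break
    return trials

rng = random.Random(1)
# A two-turn maze can be exited by dumb luck in a handful of attempts; a
# twenty-turn maze needs on the order of 2**20 blind attempts, but
# trial-and-error with retention needs only about two attempts per turn.
print(chance_trials("RR", rng))
print(selective_trials("LRRLLRLRRL" * 2, rng))
```

    The point of the sketch is only that selection-with-retention, with no foresight anywhere in the loop, rapidly produces the same low-probability “pattern of correct turns” that blind chance essentially never does.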

    As I said – his argument works just fine, using his definition, and without invoking intention. What he should not do, and you should not, is then equivocate, and say: ah, well, you see, intelligence is always intentional, and I meant that all along, so now you can infer that biological systems were intentionally designed. You can’t. You can’t change the definition that led to the proof. That is equivocation.

    But wait. There’s more. One need not be familiar with Dembski’s other work to know that Liddle is misrepresenting him. In the very essay from which Liddle quotes, Dembski writes: “Intelligent design regards intelligence as an irreducible feature of reality. Consequently it regards any attempt to subsume intelligent agency within natural causes as fundamentally misguided . . . “ In other words, in the very essay upon which Liddle relies, Dembski says her interpretation of his definition of “intelligence” is “fundamentally misguided.”

    Sure. But unfortunately he forgets that part when he makes his argument. That’s what’s wrong with his argument. He equivocates – he takes a definition of “intelligence” (an odd one, admittedly, but it’s the one he chose), uses it to demonstrate that CSI is always the result of intelligence by that definition, then turns round and claims intelligence can’t be subsumed within natural causes.

    Although what he has done throughout the rest of the article is prove that it can.

    This brings us to Liddle’s second equivocation. She is quite correct when she states that in his essay Dembski says that the “intentionality” of the designer is not a question for science. Here, however, Liddle equivocates between “intention” in the sense of “ultimate purpose” and “intention” in the sense of “employing means to achieve a result.”

    Perhaps an example will make this clearer. Let’s say a car builder (John) decides to build a car. At one level he has an “intention” to “employ means to achieve a result.” In other words, John acquires the materials necessary to build the car and then assembles the materials into a car. At a wholly different level, John might have an ultimate purpose for building the car. Why did John build the car? He built it so he could have transportation back and forth to his job.

    Right. Goals can be both proximal and distal. CSI, unfortunately, does not distinguish between them, and Dembski has not shown that it can.

    CSI only requires proximal goals to appear. So we cannot infer a distal-goal setting agent from the presence of CSI. We might well infer it from some other evidence, but not from CSI.

    When Dembski says that “intentionality” is excluded from science he means that an inquiry into the designer’s ultimate purpose is not a scientific question, and of course he is correct. But Liddle equivocates and attributes to Dembski a statement that again runs counter to his entire oeuvre. It is difficult to understand how anyone would expect us to believe that William Dembski thinks the process of designing a living thing does not require intentionality in the sense of “employing means to achieve an end.”

    No, he isn’t correct. It is perfectly scientific to inquire into a designer’s ultimate purpose. That’s why we have a difference between manslaughter and murder charges. Come on, Barry, you are a lawyer! Don’t you consider that lawyers use scientific reasoning? Don’t you use forensic scientists to shed light on that very question?

    But to clarify: “Liddle” does not “expect [you] to believe that William Dembski does not believe that the process of designing a living thing does not require intentionality in the sense of ‘employing means to achieve an end.’” I don’t believe it myself – I’m quite sure that Dembski believes that designing a living thing requires intention. It’s pretty obvious that he does. He doesn’t seem to have noticed the hole in his own argument, i.e. his own equivocation.

    I am not equivocating, Barry; you have mistaken my exposure of Dembski’s equivocation for equivocation on my part. That is a mistake.

    In summary, we see that Liddle has not been fair to Dembski’s work and has grossly misrepresented his ideas.

    I rest my rebuttal :)

  21. Meleager:

    So, is our argument with Dembski, who never intended to imply that intelligence could be divorced from intention (even if we could not discern the specifics of the intention), and flatly stated it could not be, or with you, who have for some reason decided that it is okay to take the semantics of what Dembski wrote, repurpose it to mean something he never meant, and then quote Dembski and refer to him as if it was somehow relevant to your argument for non-intentional intelligent agency, and then say our argument is with him (as if that is what he meant) and not you?

    Your argument should be with Dembski, who failed to notice that his argument hung on a definition of intelligence from which he had inadvertently failed to exclude intention, and who failed to notice the resulting equivocation when later claiming that therefore things exhibiting CSI must be intentionally designed.

    Note, please, that I am not simply referencing Dembski’s definition; I am referencing the arguments he makes from that definition and noting that they do not require us to insert an implied “intention” into the definition to work as arguments.

    It’s as though he originally defined x as 3, proved correctly that 2x = 6, then turned round and claimed that of course he really meant that x = 3 + 1, and that therefore 6 = 2*(3+1).

    Sure, he might not have meant to say that x = 3, but if x is not 3, then his conclusion is no longer correct.

  22. Neil is saying that intention is not a criterion used by AI researchers when developing intelligence.

    But! This argument does no favor for your, or Neil’s, stance. In making clear that intention is not a criterion for AI, they are actually asserting that they believe intention is integral to actual intelligence. Else, why the need to make that clear? Does anyone really think that systems that employ AI, whatever they may be, can actually have intention? No. They are often made to appear to have intention, which is what makes them seem actually intelligent. But everyone knows that the algorithm was put there ahead of time, and that the “intentionality” was predetermined.

  23. The typical AI view is that if their AI system behaves in the “right” way, then we will attribute intention to it. The claim is that there is nothing more to intention than the attribution of intention. Take a look at Dennett’s “The Intentional Stance.”

  24. Elizabeth says:

    Your argument should be with Dembski who failed to notice that his argument hung on a definition of intelligence from which he had inadvertently failed to exclude intention,

    Perhaps you mean, failed to “include”, but that is not true, because he stated it was necessary to the definition. If you are arguing he failed to exclude “intention”, you’ve contradicted yourself, because you claim he excluded it; if you argue that he failed to include it, that’s a flat-out lie, because Dembski said intention was a necessary part of the intelligence he was referring to and that anyone saying it could be “unintentional” was misrepresenting him.

    Can there really be any doubt about what Elizabeth is doing here?

  25. Sorry, yes, I meant “inadvertently excluded intention”.

    Good catch. Note to self: avoid double negatives.

    No, he did not state that it was necessary to the definition, and the argument that he builds on the definition that he gives is not dependent on the choosing agent being an intentional agent. For instance, the rat in the maze selects the openings in the maze at random, then, through trial and error, rejects sequences that do not lead quickly to the exit and retains those that do. This is exactly analogous to Darwinian processes in which mutations are generated at random, followed by the rejection (by low rate of replication) of those sequences that do not promote successful replication and retention of those that do. Rat learning and Darwinian evolution are both trial (choice) and error (feedback) learning paradigms and they both generate CSI.

    But Dembski later equivocated (inadvertently I’m sure) by generalising his conclusion, based on an argument that does not hinge on the “intelligent” agent being an “intentional” agent only to intentional agents. This is a logical flaw in Dembski’s argument.

    You are mistaking my drawing attention to this logical flaw namely equivocation, in Dembski’s argument for equivocation on my part!

    What Dembski’s argument shows us is that for CSI to be generated you need trials; result (success/error); selective repeat of of correct trials. That’s how the rat learns the maze and the resulting pattern of behaviour has CSI. That’s how Darwinian evolution works and the resulting adapted population has CSI.

    No intention is therefore required to produce CSI, and so Dembski’s definition and argument are entirely correct (his definition excluding, as it does, intention as a criterion).

    To conclude otherwise is to equivocate between the with-intention definition used to generalise the conclusion and the without-intention definition used to make the argument.

  26. “For instance, the rat in the maze selects the openings in the maze at random, then, through trial and error, rejects sequences that do not lead quickly to the exit and retains those that do.”

    I don’t think a rat in a maze is a good example, since while the actions of the rat might seem random, the rat may have an intention involved (to get at the cheese, or some other goal). Trial and error is not indicative of a random selection but of a goal-directed selection, which implies intent.

    If you lost something of value and began looking for it, you would look everywhere possible even though what you lost is in one specific place and not everywhere you look. You still intend to find something even though where you look is not necessarily where you find it. It’s the same situation with a rat in a maze. Every random act is intended towards the goal. The reason a rat’s actions may look more random is that a rat has fewer intellectual abilities than, say, a human in a similar maze, and has less ability to use reasoning to find the most logical pathway through the maze. This does not indicate, however, that the rat has no intention simply because its actions seem random.

  27. The claim is that there is nothing more to intention than the attribution of intention.

    Then we may as well erase the word intention from our dictionaries. Attribution of anything is meaningless, however, if the thing attributed isn’t itself real. Intention, then, must have some meaning, even for those who claim it is only present by way of attribution.

  28. Intention is a perfectly good concept.

    It just needs a decent definition. I’ve offered one somewhere, can’t remember whether it’s on this thread.

  29. Elizabeth,

    Why does the rat, as you say, “reject sequences that do not lead quickly to the exit and retain those that do” if it is not the rat’s intent to get out of the maze?

  30. OK, CK, you raise a good point (as ever), but not a saving one for Dembski, IMO.

    The pattern of left and right turns we observe in the rat’s behaviour after it has learned the maze has CSI. It has CSI because the possible routes through the maze sum, we assume, to a very large number, so each individual route has a low probability (and thus each is complex), yet the rat reliably picks the one (i.e. specified) path that leads directly to the cheese. (I’m leaving out the bit of CSI that compares that to the number of events in the universe because I think that’s a bit silly and way too conservative.)
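    To make “complex” concrete: if the maze has N independent left/right junctions, a coin-tossing rat has one chance in 2^N of producing any given route, which corresponds to N bits of Dembski-style complexity (−log₂ of the probability). A quick back-of-envelope check, with an invented toy junction count:

    ```python
    from math import log2

    junctions = 15                  # binary left/right choices in a toy maze
    routes = 2 ** junctions         # all possible routes
    p_one_route = 1 / routes        # chance of any single route by coin-tossing
    bits = -log2(p_one_route)       # Dembski-style complexity in bits

    print(routes)  # 32768 possible routes
    print(bits)    # 15.0 bits: each route is improbable, hence 'complex'
    ```

    The “specified” part is then the observation that, of those 32768 routes, the rat reliably produces the single one that leads directly to the cheese.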

    The desire for cheese, I would argue, is simply the force that results in the rat preferentially replicating the behaviour that led to the cheese.

    Just as, in the case of an evolving population, self-replication results in the preferential repetition of genotypes that lead to more successful babies.

    What produces the replication is not germane to the generation of CSI; what is germane is the preferential “selection”, over successive replications, of successful “choices” – called “selection” in both cases.

  31. Elizabeth says: “The desire for cheese, I would argue, is simply the force that results in the rat preferentially replicating the behaviour that led to the cheese.”

    What does “preferentially” mean here, other than “intending one thing over another” or “making an intentional choice”?

    Avoiding using the term “intention” by instead inserting “the force that results in the rat preferentially” choosing one thing over another is equivocation, Elizabeth.

    What is a desire for cheese if not the intention to go find and eat some cheese?

  32. On the one hand we have: “intention to find the cheese”.

    On the other: “The desire for cheese, I would argue, is simply the force that results in the rat preferentially replicating the behaviour that led to the cheese.”

    The Darwinian Dance of Equivocation continues.

  33. But scientists go around bragging that they don’t yet know what life means. Does this mean that biologists study and research what they know nothing about?

  34. You haven’t really engaged my point, Meleager.

    Let me try again:

    CSI is evidenced in the learned behaviour of the rat – there are many possible routes through the maze, meaning that any one is highly improbable if the rat is making purely chance (coin-toss) decisions. So the behaviour is complex. It is also highly specified – there is only one route through the maze that leads directly to the cheese.

    At the beginning we have non-CSI behaviour; at the end we have CSI behaviour.

    From the CSI behaviour we can infer that the rat has learned the maze.

    And, to go back to Dembski’s definition, we can infer that the rat has “the power and facility to choose between options”. Which indeed it does. The issue at stake is whether “the power and facility to choose between options” necessarily requires “intention” in order to result in CSI, or whether the sufficient factor is simply a system in which there is feedback between the results of a series of options and future selection of those options. In the case of the rat that last set of criteria is present, but we can, in addition, say that the rat “intends” to get at the cheese.

    So is the intention necessary? Or is repetition with feedback sufficient?

    So now take a system in which the outcome of a pattern of behaviour feeds back to affect the probability of future patterns of behaviour.

    The obvious example is a population in a changing environment. The behaviour in question is the “behaviour” of the phenotype within that environment. Let’s say we start with a warm environment in which we have a furry population. In that population we have several genes for fur thickness, each with several alleles, and there is quite a range of fur thickness within the population, as the climate is temperate and fur thickness doesn’t matter much.

    Now we suddenly change the climate to a much colder one. The population exhibits its usual range of fur-thickness-producing “behaviours”. However, babies born with thinner fur now tend to die before maturity. The surviving babies tend to be those bearing the alleles for thicker fur. The net result is that the next population exhibits “thicker-fur-bearing behaviour” than the previous one, with the thicker-fur-promoting alleles being much more prevalent. In the next generation again, more babies will survive, but particularly the ones with the thickest fur. So the third generation exhibits “even-thicker-fur-bearing behaviour”, with an even higher concentration of thicker-fur-promoting alleles. Repeat a few more times, until most of the population has very thick fur and some of the babies actually die of heat stroke. At this point the fur thickness in the population stabilises and we have an adapted population with optimally thick fur, and an optimal set of alleles for survival in the colder climate.
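    The allele-frequency dynamics described above can be sketched in a few lines of Python. This is a deliberately simplified, single-locus haploid toy of my own; the `simulate` helper, population size, selection strength, and starting frequency are all invented for illustration:

    ```python
    import random

    def simulate(generations=30, pop=1000, s=0.3, p0=0.2,
                 rng=random.Random(1)):
        """One 'thick fur' allele at frequency p; after the climate cools,
        thin-furred individuals survive to breed with probability 1 - s."""
        p = p0
        history = [p]
        for _ in range(generations):
            survivors = []
            for _ in range(pop):
                thick = rng.random() < p           # draw this individual's allele
                if thick or rng.random() < 1 - s:  # thin-furred die more often
                    survivors.append(thick)
            p = sum(survivors) / len(survivors)    # frequency among breeders
            history.append(p)
        return history

    freqs = simulate()
    print(round(freqs[0], 2), round(freqs[-1], 2))  # frequency rises toward fixation
    ```

    No generation “intends” anything: the differential survival feedback alone drives the thick-fur allele from a minority to near fixation, which is the population-level analogue of the rat retaining successful turns.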

    Now, map this on to Dembski’s definition, and the rat-in-the-maze:

    The population starts off with a set of fur-producing behaviours that are random with respect to the climate – some will be too thin, a few too thick. This is the rat trying tunnels at random. It has no idea which ones lead to the cheese. But just as a particular set of choices does get the rat to the cheese, so organisms with a particular set of alleles get to breed successfully, and those are the ones, by definition, that get repeated. Just as, when the rat repeats the maze, it only repeats the behaviours that got it reasonably quickly to the cheese.

    Then in the next generation, again, the set of alleles that get to breeding most successfully are those, by definition, that get repeated. Just as, when the rat repeats the maze, it only repeats the behaviours that got it most quickly to the cheese.

    After a few more repetitions, or goes at the maze, both systems, population and rat, reliably exhibit behaviours with a specified result – consistent survival of babies in the population, consistent rapid cheese-eating in the rat.

    In both cases we have CSI – we have a set of alleles, out of a large number of possible selections, that consistently specify optimally thick fur, and we have a set of left-and-right turns, out of a large number of possible left-and-right turns, that specify maximally efficient cheese-eating.

    In other words, it’s the feedback and repetition that generates CSI, not intention, thus validating Dembski’s intention-free definition of intelligence (the power and facility to choose between options) as stating conditions both necessary and sufficient to generate CSI. The Darwinian system has “chosen” a set of alleles that promote survival in a cold environment; the rat has chosen (no scare quotes) a set of left-right turns that promote cheese-eating. Both have the “power and facility to choose between options” and choose, moreover, the options that lead to a specified result.

  35. “preferentially”, in this context, means taking the option that leads to a specified result.

    Not randomly choosing any old option, and thus to any old result.

    Generating CSI in other words.

  36. Frame it a little differently – fur does not exist – now apply differential whatever and see what that creates.

    Choose fits the rat example, but lose would be more appropriate for the ‘population behaviour’: it loses much in each generation.

    Your population already contains massive CSI! And then it reacts in a nested repetitive feedback whatever and then it has less CSI. This is not the origin of any information.

    That self-replication with differential survival exists and produces “changes” is self-evident, and it sounds silly at best every time it’s repeated.

    Does the series of R L turns ‘have’ CSI or is it ‘evidence’ of CSI? Where is the CSI?

    Does thicker fur have CSI …

  37. I was joking a few weeks back when I said that eventually intelligent design would be co-opted as a natural cause so that evolution would finally encompass every explanation possible. Perhaps the idea caught on?
