
Darwin’s Big Mistake – Gradualism

The big mistake in Origin that Darwinists won’t admit is gradualism. Darwin explained that, according to his theory, we should expect to observe a continuum of living species, each separated from the next by only the slightest of variations. He postulated that we don’t observe this because the fittest variants take over and the insensibly slight intermediates die off, leaving species that are fully characteristic of their kind, which in turn makes possible taxonomic classification by those characters. It’s in the latter half of the full title: “The Preservation of Favoured Races.”

That left Darwin with explaining the fossil record, which is indisputably a record of saltation. Species in the fossil record appear abruptly, fully characteristic of their kind, persist unchanged for an average of about 10 million years, then disappear as abruptly as they appeared. Darwin explained this away by saying the fossil record was incomplete and that, when it was more fully explored, the insensibly small variations that cumulatively led to the emergence of new species would become apparent. A hundred and fifty years of fossil hunting later, the record has not revealed what Darwin thought it would. Some still say the fossil record is incomplete. Stephen Jay Gould’s candid admission (“the trade secret of paleontology” is that it fails to support the very theory it is based upon) and his formulation of the theory of punctuated equilibrium is perhaps the most famous attempt to salvage gradualism.

No Darwinists I know or read give saltation any credence. The reason is that saltation implies front loading. How would one species change in just a few generations into something taxonomically different? All the new characters that distinguish the new species must have been present in the predecessor if they were expressed that quickly. Random mutation and natural selection, through a tedious trial-and-error process, take a very long time to generate novel characters. Indeed, this insufficiency is at the very core of Intelligent Design. Haldane’s Dilemma is alive and well. Only an intelligent agent has the capacity to plan for the future. Intelligent agency is proactive, and that proactivity is what distinguishes it from RM+NS. RM+NS is reactive in that it can “learn” from past experience, but it can’t plan for future contingencies which have not been experienced in the past.

My position, which has remained unchanged for several years, is that phylogenesis was a planned sequence. Common descent from one or a few ancestors beginning a few billion years ago has overwhelming evidence in support of it. Gradualism, however, does not. Gradualism in evolution survives to this day because the only alternative to it is intelligent design. It doesn’t survive by the weight of the evidence but rather by the tightly held belief in philosophic naturalism held by an overwhelming number of the practitioners of evolutionary biology. As Richard Dawkins famously wrote, “Although atheism might have been logically tenable before Darwin, Darwin made it possible to be an intellectually fulfilled atheist.” These people cling to gradualism like religious dogma because to say it’s wrong is tantamount to giving up their religion.


50 Responses to Darwin’s Big Mistake – Gradualism

  1. “The reason is that saltation implies front loading.”

    There are other possibilities for saltation besides front loading. If someone from MIT develops a new species in the lab and then places it in an environment, that would be an example of saltation. If some intelligence in the past placed a population with a diverse gene pool into an earthly environment, that would also be a saltation.

  2. One question that should be clarified is whether by “gradualism” you mean ‘proceeding by small, incremental steps’ or ‘proceeding at a constant rate’.

    As I’m sure most here know, Darwin wrote in Origin of Species:

    Although each species must have passed through numerous transitional stages, it is probable that the periods, during which each underwent modifications, though many and long as measured by years, have been short in comparison with the periods which each remained in an unchanged condition. These causes, taken conjointly, will to a large extent explain why—though we do not find many links—we do not find interminable varieties connecting together all extinct and existing forms by the finest graduated steps.

    In other words, he recognized that evolution need not proceed at a constant rate, that there could be periods of stasis or near-stasis punctuated by periods of relatively rapid change.

    The difference between this and the Punctuated Equilibria proposed by Gould and Eldredge may be more a matter of emphasis than a real difference. Darwin stressed the gradual nature of evolution because he needed to impress upon his readers how large changes could emerge from the accumulation of small, incremental steps over vast spans of time. Gould and Eldredge stressed the episodic nature of evolution because they needed to reconcile the theory with what is revealed by the fossil record.

    Both, however, would have agreed that “rapid” is a relative term, since even the periods of rapid change took place over anything from hundreds of thousands to tens of millions of years. The apparently abrupt appearance of some species could simply be an artefact of the very coarse-grained image provided by the fossil and geological records. It simply doesn’t have the resolution to show the fine detail of the gradual changes that may have occurred.

  3. Severesky,

    I don’t care one iota about the semantics of the word “gradualism” or the word “rapid”. The point is whether the original Darwinist model of RM+NS fits the data.

    I fully admit I do not study the data enough myself to have an opinion from first principles. But I always have to ask myself: why did Stephen J. Gould even postulate “punctuated equilibrium” if the fossil record confirms gradualism?

    It’s like the proposal of the “multiverse”. The very fact that scientists who study these things feel the need to propose the multiverse in order to answer the amazingly fortunate coincidences of the privileged planet makes me believe that there is abundant evidence for design — even as there is abundant evidence that RM+NS just does not quite answer the questions of life.

  4. Someone has observed that no matter how slow and gradual evolution appears to us, speed up the film by a certain amount and it looks like life appearing instantaneously. So which is the correct perspective – ours, or the speeded-up one?

  5. Gradualism always fails for me with the supposed evolution of the bat from a rodent-like mammal. Why would webbed digits confer an advantage on a rodent, and why would that particular deformity repeat itself time and again, resulting in larger webs and longer digits? Not to mention that it would have to be a number of genetic errors working in conjunction, providing the correct musculature, blood vessels, etc. And then, to top it all off, the creature would finally, miraculously, be aerodynamically sound. Have you ever noticed that no one has attempted to depict a half-bat, half-rat creature hanging on a tree trunk? Nor has anyone depicted what the ancestors of the flying reptiles looked like; it’s only birds. Also, in both cases, the fossil record says no.

  6. The main problem with gradualism, besides the fact that almost nothing appears in the fossil record leading from one species to a new one with novel complex capabilities, is that each step along the way must be viable. You will be shown a suite of fossils from a forest animal to a whale, but where along the way is there a gradual presentation of the unique characteristics of the whale? After all, the whale is a little different from a forest animal.

    Also, each step is theoretically a waypoint for more than one new path. So the progression from one species to a completely new one would theoretically leave a trail of thousands of intermediaries and lots of dead ends, as many of the paths branched off and came to an end with no new progression. So theoretically some of these should be available in the fossil record, and I assume that some of them may be. But are any available with the novel complex capabilities?

    There also appear to be none present in the current world. Where are all the dead ends? They don’t have to be extinct. And where are all the intermediaries? They do not have to be extinct either. We just cannot assume that for every viable species that exists, all its predecessors are now extinct. That is just too convenient.

    In truth there are tens of thousands of examples of progressions in the current world of different species, but they are all micro-evolution, and most likely devolution, or the generation of smaller gene pools from larger gene pools in the past. And this is not what Darwin envisioned. He envisioned an upward evolution of species, not a downward devolution to more restricted gene pools caused by environmental changes and migrations.

    There are no upward examples in the current world, or else we would never hear the end of it from the Darwinists. I can just hear the panting at Panda’s Thumb: “Tell them about this, tell them about this.” Well, someone who goes to Panda’s Thumb, come tell us.

    There appears to be only devolution in our current world. This is what Darwin saw on the Beagle but mistakenly extrapolated the wrong way, and that is why we are forever in this needless debate. By the way, such a phenomenon as devolution is predicted by evolutionary biology. It is only assumed that along with this devolution there is real upward evolution and the creation of new characteristics.

  7. Here is a timely column in Newsweek about Lamarckism making a comeback as an alternative to Darwinism.

    http://www.newsweek.com/id/180103

    Some water fleas sport a spiny helmet that deters predators; others, with identical DNA sequences, have bare heads. What differs between the two is not their genes but their mothers’ experiences. If mom had a run-in with predators, her offspring have helmets, an effect one wag called “bite the mother, fight the daughter.” If mom lived her life unthreatened, her offspring have no helmets. Same DNA, different traits. Somehow, the experience of the mother, not only her DNA sequences, has been transmitted to her offspring.

    Interesting stuff.

  8. Seversky

    Darwinism already doesn’t have enough time without you consigning all of the evolutionary change to relatively brief periods of time.

  9. William J. Murray

    There’s a recent article in Nature about how plants use a historically retroactive process of quantum collapse to “decide” what scaffolding pathway to use for an energy conduit during photosynthesis, giving the process a 95% energy efficiency rating (as opposed to an 80% energy efficiency rating for designed power cables).

    There was another observational collapse experiment that demonstrated retroactively altered results could be achieved.

    Julian Barbour, in “The End of Time”, proposed that the universe is a “full set” of everything that could exist in a potential state, with frame sequences activated by consciousness, and historical pathways determined by the nature of the observer “collapsing” quantum potential.

    It all sounded very esoteric at the time, but these recent experiments lend some support to his ideas.

    If humans exist at a certain point X in space-time potential, then what humans observe is a collapsed set of potentials that supports, contextualizes, or “explains” their existence.

    Just like the above experimental results, this would “choose” a history that is efficient for our existence, regardless of how wildly unlikely it is for that history to have sequentially occurred, because it didn’t have to **actually** sequentially occur.

    This would also explain what we see in the evolutionary record, and what we see as a finely-tuned universe.

  10. Common descent from one or a few ancestors beginning a few billion years ago has overwhelming evidence in support of it.

    However ALL evidence for said common descent (UCD) can also be used as evidence for common design.

    And until someone can demonstrate that the physiological and anatomical differences can be accounted for via genetic changes, universal common descent does not deserve to be called “science”, as it is nothing more than speculation built on that assumption.

 11. William J. Murray

    Off-topic: I ran across this paper, and it seemed to me that an information-system/code-based algorithm had outperformed (by a large margin) gene ontology predictions based on supposed evolutionary histories and similarities.

    http://www.ncbi.nlm.nih.gov/si.....s=17646340

  12. DaveScot: Only an intelligent agent has the capacity to plan for the future.

    I’ll ask again the question that I asked in a previous thread. If Gil’s checkers app can anticipate future game situations and choose its moves accordingly, is it an intelligent agent?

    And yes, I know that the app was designed to do this, but that doesn’t change the fact that it does it. If a product of design can’t be an intelligent agent, then where does that leave us?

  13. If Gil’s checkers app can anticipate future game situations and choose its moves accordingly, is it an intelligent agent?

    The point you’re missing is that the app could ONLY have been designed by an intelligent agent.

  14. tribune7:

    The point you’re missing is that the app could ONLY have been designed by an intelligent agent.

    Pardon my thick skull, but I don’t understand what that has to do with the question of whether the app itself is an intelligent agent.

  15. R0b,

    If Gil’s checkers app can anticipate future game situations and choose its moves accordingly, is it an intelligent agent?

    Yes. However, a program cannot anticipate anything. (Imagine that the program’s code has been printed out and that you were to play a game of checkers according to the code.) This becomes particularly obvious when a flaw in the programmed strategy is discovered that allows you to program your victory. Surely after losing a hundred times in a row to the same set of moves, the program ought to be able to anticipate and counter them.

  16. Pardon my thick skull, but I don’t understand what that has to do with the question of whether the app itself is an intelligent agent.

    OK, sorry.

    My direct answer is no.

    Intelligence — as per the dictionary definition — requires the ability to acquire knowledge. That app is never going to know more than what the programmer gives it, nor does it have the freedom to go beyond what the programmer allows.

  17. See that, ROb. There is all sorts of consensus on the question :-)

  18. Thanks for your answer, tribune. I’m still in search of that consensus, or at least a common denominator.

  19. tribune,

    If a checkers program can anticipate future situations and react accordingly, surely it is acquiring knowledge. Of course, to apply such mystical attributes to a procedure–a mere list of instructions–is sheer silliness.

  20. Timothy,

    If a checkers program could anticipate future moves and figure out the Pythagorean theorem and how to cook a steak dinner, then it would actually be acquiring knowledge that it didn’t have before. The programmer has already programmed it to play checkers.

  21. Clive,

    The programmer has already programmed it to play checkers.

    Precisely. And the ability to anticipate and/or choose cannot, by definition, be programmed. A checkers program cannot itself be intelligent, any more than a cookbook can be intelligent. I am merely suggesting that anything which is able to anticipate and make choices ought to be considered intelligent.

  22. I consider the process by which engineers create a new prototype to be in line with saltation change.

    Bicycle x 2 = quadricycle, now add engine, now add wings, now add another line of engineering, say a computer.

    Is it fair to say we can reverse engineer how the original Designer did it?

    Can we not modify organisms genetically now? All we need now is to find the cosmic laboratory.

  23. A few questions for tribune, Timothy, and Clive:

    Do you see the limitations of computer programs as simply a matter of limited current technology, or are these limitations inherent to all computer programs, regardless of technology?

    If computational processes cannot, even in principle:
    - anticipate future situations
    - react accordingly
    - acquire knowledge
    then how do you define the phrases “anticipate future situations,” “react accordingly,” and “acquire knowledge” such that humans can do them but computational processes cannot?

  24. And you can add:
    - make choices
    to the above list.

    Thanks.

  25. This video I just loaded fits this topic very well;

    Ancient Fossils That Evolutionists Don’t Want You To See

    http://www.godtube.com/view_vi.....b09c6eb2e1

  26. Hello Timothy (#21),

    Doesn’t a checkers program choose to make a move based on the logic provided for it by its programmer, in combination with its opponent’s move? Do you choose to do things based on a whim, or do you also choose based on logic provided by your programmer, in conjunction with the “moves” that happen around you? Furthermore, if you do something based on a whim (no reason that you can tell), how do you know that there isn’t an extremely complex set of causes operating in combination with the logic behind your choice? IOW, if the idea to go to the beach just pops into your head and you choose to go, how do you know that there aren’t subconscious (even random) processes which brought that thought to your mind, on which you then make a choice based on some reasons (logic)?

    Furthermore, isn’t anticipation merely the ability to combine memory with the ability to process information from your environment? What if a robot had specific interactions with its environment stored in its memory and, upon seeing the beginning of a situation it had seen before, was programmed to begin reacting in advance to that previous situation? If the situation was indeed the same, the robot would react appropriately to the rest of the situation before it was completed. We could say that the robot anticipated the scenario. Indeed, isn’t this how we anticipate things, remembering previous situations (even subconsciously) and responding accordingly to our environment, albeit on a more complex level?

  27. If a checkers program can anticipate future situations and react accordingly, surely it is acquiring knowledge

    If its anticipations are correct it already had the knowledge.

  28. ROb –Do you see the limitations of computer programs as simply a matter of limited current technology, or are these limitations inherent to all computer programs, regardless of technology?

    My view is that a computer can do no more than what you put into it, regardless of the number of CPUs and the amount of RAM.

    But there are real experts — said without hyperbole — on this board such as Dave and Gil who could address that with greater authority.

    If computational processes cannot, even in principle:
    - anticipate future situations
    - react accordingly

    They can do these things if programmed for them.

    - acquire knowledge

    You can use them to data-mine, but gathering facts isn’t the same as knowledge.

  29. tribune:

    If its anticipations are correct it already had the knowledge.

    Should I take that as a vote that computer programs have anticipation capability? If so, I’m counting 2 votes for and 2 votes against.

    This may just be a semantic issue, but I’m curious whether anyone can come up with definitions that resolve it.

  30. R0b,

    Don’t try to mystify things. :) A program is a set of instructions, a series of rules. Do instructions anticipate? Do they make choices? Do they acquire knowledge? Of course not. They are just rules. To say otherwise is nonsense. Choices, in principle, might be made by whatever is executing the instructions, but not so in the case of the computer. An algorithm cannot tell the computer to make a “choice.” There would have to be another algorithm governing which “choice” the computer “made.” Moreover, regardless of how complicated and sophisticated the algorithm, it remains an algorithm.

    Imagine that you are to play checkers using the checkers algorithm. Do you make any choices? Of course not. The rules tell you what to do in such-and-such a situation. That is all. Do the rules choose? No, they just are. The only choice involved is whether you obey the rules.

    Are these limitations inherent to all computer programs?

    Precisely.

    How do you define the phrases “anticipate future situations,” “react accordingly,” and “acquire knowledge” such that humans can do them but computational processes cannot?

    Begging the question is really not very fair, is it? Please discuss why you think that rules can “make choices.” At any rate, whether or not humans are simply sophisticated computers simply following a sophisticated rulebook (this seems to be a point of some controversy), the fact remains that this is indisputably true of computers.

  31. Based on my posting #26, it is at least possible that humans are in part sophisticated computers following a sophisticated rulebook. The only place this breaks down is when it comes to explaining consciousness.

  32. tribune:

    They can do these things if programmed for them.

    Yes, it goes without saying that computers can’t do much of anything without some initial programming. So the question “Can computers do X?” should be interpreted “Can an appropriately programmed computer do X?”

    Interestingly, the answer to that question is actually pretty well-established. The list of things that computer programs can’t do includes solving the halting problem, producing an arbitrary number of digits of Chaitin’s Number, etc. Of course, humans can’t do those things either, as far as we know. According to computing theory, computational processes have all of the known abilities that humans have.
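    The non-computability of the halting problem mentioned here follows from a short diagonalization argument, which can be sketched in Python. The `halts` oracle below is hypothetical; the whole point of the sketch is that no correct implementation of it can exist.

```python
# Hypothetical halting oracle: returns True iff program(arg) eventually stops.
# It cannot actually be implemented, which is what the argument shows.
def halts(program, arg):
    raise NotImplementedError("no such oracle is possible")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:    # loop forever if the oracle says "halts"
            pass
    return "halted"    # halt if the oracle says "loops forever"

# contrary(contrary) halts exactly when halts(contrary, contrary) says it
# doesn't, so any answer the oracle gives is wrong: `halts` cannot exist.
```

    The same construction carries over to every problem equivalent to halting, which is why it anchors the short list of things provably beyond any program.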

  33. Timothy:

    Begging the question is really not very fair, is it?

    My question was sincere, and was based only on the assumption that you consider humans to be capable of X, Y, and Z, while computers are not. I have no idea what question you think I’m begging.

    I really would like an answer to the question. In AI and machine learning circles, saying that a computer anticipates and learns and gains knowledge is unobjectionable. I’m curious whether ID proponents define those terms differently.

  34. More than 50 years of pop culture have conditioned us to associate computer programs with conscious agency rather than with machinery.

    Computer programs have a direct analogue to mechanical devices. I suppose that because a program’s interaction with programmable microcircuitry is hidden, producing no visual or auditory cues behind the scenes, we tend to associate spooky characteristics with it.

    I doubt that anyone links old-school adding machines with agency, nor player pianos with decision making. However computer programs are essentially glorified versions of these, with abstractions that allow for greater complexity of design. There are no characteristics of computer programs that could not theoretically be produced with an intricate mechanical device, albeit with great difficulty.

    A 1960s-era truck motor doesn’t ‘decide’ when to produce combustion — it doesn’t decide when to send spark down a plug wire or how to mix fuel. Each element of this harmony is carefully tuned, and the machine runs according to its ‘programming.’ The milling of the camshaft determines the timings of valves and cylinders. The distributor sends the spark when its points make contact.

    There are no choices exercised by machines, nor computer programs; their behaviors are determined strictly by their programming. Computer programs are nothing more than elaborate decision trees, which react to input with slavish reliability, bereft of any notion one way or another.

    When we use words like anticipating or decision in regard to computer programs, it’s a projection of the programmer’s design intent; this agency does not belong to the program in any way, shape, or form. The program’s behavior is not its own. It is not responsible for its actions, nor can it rectify a bad design decision made by its engineer.

    A computer program is an extension of both the will and capability of the designer, in much the same way a backhoe bucket is an extension of the operator. In both cases, there’s an actual decision maker in the driver’s seat.

  35. R0b,

    There are inferences that are specific to humans that cannot be accounted for by computational processes such as we see in programmed computers. How do you frame a problem? How do you form the judgment? How do you bring the relevant information to bear? What are the relevant considerations when considering a problem? Take an illustration: a man walks into a bar, orders a glass of water, and the bartender pulls out a gun, and the man says thank you.

    It’s an interesting situation, what would explain it from a computer’s point of view?

    The answer is that the man had the hiccups.

    When looking at a picture, what are the salient points or aspects of it? These aren’t things that fall under computational approaches.

  36. R0b (#33),

    The anthropomorphization of programs doesn’t bother me; often it is very instructive. The problem is when people get caught up in the metaphors and start making arguments based on them.

    Yes, for the purpose of explanation, we can say that computers anticipate or choose or even think. Nevertheless, we know that computers are simply deterministic machines that execute instructions.

    CJYman (#26),

    Doesn’t a checkers program choose to make a move based on the logic provided for it by its programmer in combination with its opponent’s move.

    No, it follows the programmed rules. That is what it does.

    Do you choose to do things based on a whim, or do you also choose based on logic provided by your programmer in conjunction with the “moves” that happen around you. Furthermore, if you do something based on a whim (no reason that you can tell), how do you know that there isn’t an extremely complex set of causes which operate in combination with the logic behind your choice?

    I don’t know.

    We could say that the robot anticipated the scenario.

    One could easily write an algorithm that fits a curve to sampled data and then outputs the curve’s value for some arbitrary point in the future. Can you call this prediction or anticipation? Yes. But to say then that the algorithm has the ability to anticipate is deeply misleading, because this implies a very generalized ability. The program simply does what it was programmed to do, nothing more.

    Even if you were to collect every such algorithm and mass them into one master prediction program, it would still be completely deceptive to say that the program had the ability to anticipate, because it would still be doing nothing more than what it was programmed to do.

    You and I, as humans, have an extremely generalized ability to anticipate the future. We also have the ability to be unpredictable. Well, perhaps it’s all an illusion. I cite Clarke’s Third Law.
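    The curve-fitting kind of “anticipation” described above is, concretely, just arithmetic. A minimal sketch (the function names are illustrative, not taken from any actual program): fit a line to sampled data, then read the line’s value at a future point.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to the sampled points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def anticipate(xs, ys, future_x):
    """'Predict' the value at future_x by extrapolating the fitted line."""
    a, b = fit_line(xs, ys)
    return a * future_x + b

# Samples lying on y = 2x + 1; the "anticipated" value at x = 10 is 21.
print(anticipate([0, 1, 2, 3], [1, 3, 5, 7], 10))  # prints 21.0
```

    The program mechanically extrapolates whatever pattern the fitting rule was written to find, which is the sense in which calling this “anticipation” is a metaphor.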

  37. I see many posited limitations of computational processes, but I don’t understand how these claims are supported. Empirical evidence, e.g. “Nobody has ever made a machine that does X”, may just identify limitations of current technology. So it seems that logical arguments are needed, but I’ve never seen any formal logic that shows anything other than the non-computability of the halting problem and its equivalents.

  38. Timothy:

    Nevertheless, we know that computers are simply deterministic machines that execute instructions.

    Under Bohmian mechanics, everything is fully deterministic. And deterministic computational processes can be made to look as non-deterministic as you like (e.g. pseudo-RNGs), no matter what method you use to test for determinacy. Or, if you consider quantum phenomena to be truly non-deterministic, you can throw a quantum RNG into a computer if you’d like. So if humans aren’t computers, I don’t see how determinacy separates us.

    We also have the ability to be unpredictable.

    Predictability doesn’t separate us either. It’s not hard to write a computer program for which no human on earth can predict its output.
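    A minimal sketch of the pseudo-RNG point (the `sequence` helper is illustrative): the stream below is fully deterministic given its seed, yet to anyone who doesn’t know the seed the output is practically unpredictable.

```python
import random

def sequence(seed, n=5):
    """Return the first n draws from a PRNG started at the given seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 10**9) for _ in range(n)]

# Replaying the same seed replays the identical stream.
print(sequence(42) == sequence(42))  # prints True
# The numbers themselves look random, but are fully determined by the seed.
print(sequence(42))
```

    Determinism and apparent unpredictability coexist here without any tension, which is the point being made about predictability above.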

  39. ROb–Should I take that as a vote that computer programs have anticipation capability?

    It’s safe to say a program can be written to choose between possibilities.

  40. ROb–According to computing theory, computational processes have all of the known abilities that humans have.

    Computational processes can’t create (cause to exist, bring into being). People can.

  41. R0b (#37, #38),

    What you forget is that if humans are computers, we don’t actually make choices either. That is precisely the point. Perhaps humans are merely programs. Perhaps we’re not. Regardless, computers are merely programs. No amount of technological sophistication can change this.

    I see many posited limitations of computational processes, but I don’t understand how these claims are supported.

    Yes, and I find that very strange. Thus it would be instructive for everyone if you attempted to support your claim that computational processes can make choices.

    It’s not hard to write a computer program for which no human on earth can predict its output.

    Oh really? 1) Execute program. 2) Record output. 3) Execute program w/same input. 4) Confirm prediction. See? This is the sort of creative thinking that separates us from machines.

  42. How did this thread get into computers? Whatever…

    It’s safe to say a program can be written to choose between possibilities.

    Yes and no. Depends on how you define choose.

    A coded “if-then-else” construct is not real choosing, imo. A choice is more than binary logic. Any program merely follows through its code according to preset conditional statements.

    Can a program choose which user it likes best?

    Example – an oversimplified program to calculate who won an election:

    Let’s say that variable Jack is a 32-bit integer and variable John is a 32-bit integer – the only two candidates. Initial values assigned = 0.

    - each variable will represent the number of votes received in the election

    Now assume that Jack and John have been assigned values that indicate how many votes have been entered for each somewhere else in the program, where the votes were counted.

    We also have a variable Winner that is of type string with initial value = 'draw'

    So a statement like

    if Jack > John then
      Winner = 'Jack'
    else if John > Jack then
      Winner = 'John'
    end if

    If Winner = 'draw' then
      // do a recount ... etc.

    There are other, better ways to do it, but this is just for illustration.

    Now, did this little program make a decision as to who won the election? Well, yes and no. But it was guided by very limited, preconceived answers.

    Now say you want to write a program in which the computer can choose its favorite color. It’s obvious that no such program could be written, since computers don’t have personality, and thus favorites; color to a computer is just so many bits coding combinations of red, green and blue (RGB).

    Can a computer choose a favorite color? No.
Ya, I know – you could simulate a color choice by coding for random combinations of RGB and then having the program randomly select one of those combinations. But that isn’t making a choice the way an intelligent agent would make the same choice. Not at all.
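To make the point concrete, here is a minimal Python sketch of exactly that kind of simulated “choice.” Everything in it (the function name, the candidate count of 10) is purely illustrative:

```python
import random

def pick_color():
    """Simulate a color 'choice': generate some random RGB combinations
    and pick one at random. The selection follows the rules of the
    pseudo-random generator, not any preference."""
    candidates = [(random.randint(0, 255),
                   random.randint(0, 255),
                   random.randint(0, 255)) for _ in range(10)]
    return random.choice(candidates)

favorite = pick_color()
print(favorite)  # an arbitrary (R, G, B) triple, not a preference
```

Run it twice and you get two different “favorites” – which is the giveaway that nothing was actually preferred.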

    And as far as computers gaining knowledge, that’s another myth.

    Computers do not gain knowledge per se – only information. Coded information is not knowledge in a machine. The computer doesn’t really ‘know’ anything. It merely contains so many bits of coded information and algorithms that process it.

Nor, as in the case of Gil’s checkers program, does the program really anticipate. Not the way humans do. A program follows strict rules coded into it. In AI you would probably have a rule base, maybe a neural network, etc., and a lot of search-and-select algorithms that, after going through millions of iterations in loops, will come up with possible moves, one of which will be “chosen” by the algorithm according to its most probable outcome in subsequent moves by some final if-then-else construct.
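That loop-and-compare process can be sketched in a few lines of Python. This is a generic two-ply lookahead, not Gil’s actual program; the parameters (`legal_moves`, `apply_move`, `score`) are stand-ins for whatever game representation a real engine would use:

```python
def best_move(state, legal_moves, apply_move, score):
    """Minimal two-ply lookahead: for each of our legal moves, assume
    the opponent replies with the move that is worst for us, then keep
    the move with the best projected score. The 'choice' is nothing
    but the comparison inside the loop."""
    chosen, best = None, float('-inf')
    for move in legal_moves(state):
        after = apply_move(state, move)
        replies = legal_moves(after)
        # projected value: the opponent minimizes our score
        value = min((score(apply_move(after, r)) for r in replies),
                    default=score(after))
        if value > best:   # the coded if-then-else that does the "choosing"
            chosen, best = move, value
    return chosen
```

Every step is deterministic arithmetic over projected positions – calculation all the way down.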

But that still is not anticipation as per intelligent agents. That, imo, is better described as calculation than anticipation. The strict laws of Boolean logic, hardwired into the CPU’s logic gates, are still what is being used.

    But there’s nothing wrong with using such terms – we intelligent agents ‘know’ what we mean when applying them to a program.

    Another example: does your OS ‘know’ what time it is?
Or is it just reading its CPU’s internal ‘clock ticks’ and formatting that in terms of a human timeframe? Obviously the latter.
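That tick-to-timestamp conversion is just arithmetic plus a formatting table. A sketch, assuming an illustrative tick frequency and ticks counted from the Unix epoch (real hardware counters differ):

```python
import time

TICKS_PER_SECOND = 1000  # illustrative frequency, not any real CPU's

def format_ticks(ticks):
    """Convert a raw tick count (assumed to start at the Unix epoch)
    into a human-readable UTC timestamp. The machine only divides and
    looks up calendar rules; the 'knowing what time it is' is ours."""
    seconds = ticks // TICKS_PER_SECOND
    return time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(seconds))

print(format_ticks(0))  # 1970-01-01 00:00:00
```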

    Hope this helps.

  43. Joseph

    ALL evidence for said common descent (UCD) can also be used as evidence for common design.

    A genome is sort of like a tin can that’s been kicked around. It accumulates dings and nicks in it that are visible but don’t have any practical effect. But they make it uniquely identifiable. If common design then the designer began the new designs from an existing genome and copied all the dings and nicks into the new one. For all practical purposes that’s common descent. This is basically why Dembski writes that the explanatory filter can generate false negatives – a designer can make a design look like an accident. If common design then the designer made it look like common descent in every detail.

  44. Rob

    If Gil’s checkers app can anticipate future game situations and choose its moves accordingly, is it an intelligent agent?

    Under an inflexible set of very simple rules, yes it is. It also has all of Gil’s knowledge of checkers incorporated into it.

The take-home point for you, though, should be a realization that Gil’s checkers program doesn’t operate via random mutation and natural selection. Natural selection can only evaluate moves after the move is made. Gil’s program evaluates potential moves before they’re made and intelligently selects the one with the best projected result.

  45. Something for all to consider:

    One can say that humans are programmed for various things. If something smells a certain way you want to put it in your mouth. If something tastes a certain way you want to swallow it.

    A huge difference between man and computer is that man can disobey/rebel/ignore his programming. It’s impossible for a computer to do that.

  46. ALL evidence for said common descent (UCD) can also be used as evidence for common design.

    A genome is sort of like a tin can that’s been kicked around.

    Except that it isn’t made out of tin and hasn’t been kicked.

    It accumulates dings and nicks in it that are visible but don’t have any practical effect. But they make it uniquely identifiable.

    How do you/ we know that these identifying marks are dings and nicks? And why would something that doesn’t have any practical effect stay around intact enough to be used as an identifying mark?

    We are talking some thousands, if not millions of generations in which dings and nicks not only occurred but became fixed!

    That must have been some strong selection effect that allowed that to happen.

    If common design then the designer began the new designs from an existing genome and copied all the dings and nicks into the new one.

Or convergent evolution could also explain those alleged dings and nicks. That is, certain regions of DNA are more susceptible to mutations – those “hot spots” – and as such, alleged markers can form just by chance.

Common mechanism – that is, similar sequences of DNA are subject to similar mutations (or identical sequences of DNA are subject to identical mutations).

    For all practical purposes that’s common descent.

    Only because we don’t know any better. And what happens when biologists finally figure out the transformations required cannot be obtained via genetic alterations?

    This is basically why Dembski writes that the explanatory filter can generate false negatives – a designer can make a design look like an accident.

    Or these marks are accidents. But accidents that can only occur in specific regions.

    If common design then the designer made it look like common descent in every detail.

    I say only to people who cannot fathom something else. And until universal common descent can start explaining the physiological and anatomical differences it is not a scientific inference. Rather it is an inference based on a world-view.

Ya see Dave, we have data that indicates what gives us our eye color. We know what gives us sickle-cell anemia.

    We know what causes many traits. Is “human” a trait? I don’t think so.

    However we do not know what makes an organism what it is other than a human baby is born when a human male successfully mates with a human female and a kitten is born when a Tom-cat successfully mates with a she-cat.

    Is the final form really just a sum of the genome? I don’t think so. At least no one has been able to make that link.

  47. Btw, great 2 c u back at UD Dave. I thought u had retired.

  48. vjtorley (#48),

    That was fascinating. The following two paragraphs are especially interesting:

    Hybridisation isn’t the only force undermining the multicellular tree: it is becoming increasingly apparent that HGT plays an unexpectedly big role in animals too. As ever more multicellular genomes are sequenced, ever more incongruous bits of DNA are turning up. Last year, for example, a team at the University of Texas at Arlington found a peculiar chunk of DNA in the genomes of eight animals – the mouse, rat, bushbaby, little brown bat, tenrec, opossum, anole lizard and African clawed frog – but not in 25 others, including humans, elephants, chickens and fish. This patchy distribution suggests that the sequence must have entered each genome independently by horizontal transfer (Proceedings of the National Academy of Sciences, vol 105, p 17023).

    Other cases of HGT in multicellular organisms are coming in thick and fast. HGT has been documented in insects, fish and plants, and a few years ago a piece of snake DNA was found in cows. The most likely agents of this genetic shuffling are viruses, which constantly cut and paste DNA from one genome into another, often across great taxonomic distances. In fact, by some reckonings, 40 to 50 per cent of the human genome consists of DNA imported horizontally by viruses, some of which has taken on vital biological functions (New Scientist, 27 August 2008, p 38). The same is probably true of the genomes of other big animals. “The number of horizontal transfers in animals is not as high as in microbes, but it can be evolutionarily significant,” says Bapteste.

    This is particularly fascinating when one considers that most of the genetic manipulation done by humans (a subset of ID) is in fact guided horizontal gene transfer. That bit about “The most likely agents of this genetic shuffling are viruses” is, barring studies on the probability of viruses accounting for such massive HGT in a reasonable timeframe, simply whistling past the graveyard. HGT by an intelligent agent would be expected to be much more efficient than unguided HGT, as we have seen during the last 20 years. It remains to be seen how efficient natural viral or other unguided HGT is, but it is a reasonable bet that it is orders of magnitude too small to reasonably account for the facts as known.

    This would make excellent research material for someone wishing to do ID-related research.

    It always amuses me when an opponent of ID asks why the Intelligent Designer (they always mean God in this context) didn’t re-use successful components. The evidence seems to be more and more that He (or he, she, they, etc.) did. The opponents’ rhetoric is coming around to bite them.

  49. TheisticEvolutionist

Eugene Koonin, “The Origin at 150: is a new evolutionary synthesis in sight?”, Trends in Genetics, 25(11), November 2009, pp. 473-475, writes:

    The discovery of pervasive HGT and the overall dynamics of the genetic universe destroys not only the tree of life as we knew it but also another central tenet of the modern synthesis inherited from Darwin, namely gradualism. In a world dominated by HGT, gene duplication, gene loss and such momentous events as endosymbiosis, the idea of evolution being driven primarily by infinitesimal heritable changes in the Darwinian tradition has become untenable.

Leave a Reply