Uncommon Descent Serving The Intelligent Design Community

Darwin’s Big Mistake – Gradualism


The big mistake in Origin that Darwinists won’t admit is gradualism. Darwin explained that according to his theory we should expect to observe a continuum of living species, each separated from the next by only the slightest of variations. He postulated that we don’t observe this because the fittest species take over and the insensibly slight variants die off, leaving species fully characteristic of their kind, which in turn makes taxonomic classification by those characters possible. It’s right there in the latter half of the full title: “The Preservation of Favored Races”.

That left Darwin with explaining the fossil record, which is indisputably a record of saltation. Species in the fossil record appear abruptly, fully characteristic of their kind, persist unchanged for an average of about 10 million years, then disappear as abruptly as they appeared. Darwin explained this away by saying the fossil record was incomplete and that, when it was more fully explored, the insensibly small variations that cumulatively led to the emergence of new species would become apparent. One hundred fifty years of fossil hunting later have not revealed what Darwin thought they would. Some still say the fossil record is incomplete. Stephen Jay Gould’s candid admission (“the trade secret of paleontology” is that it fails to support the very theory it is based upon) and his formulation of the theory of punctuated equilibrium is perhaps the most famous attempt to salvage gradualism.

No Darwinists I know or read give saltation any credence. The reason is that saltation implies front-loading. How would one species change in just a few generations into something taxonomically different? All the new characters that distinguish the new species must have been present in the predecessor if they were expressed that quickly. Random mutation and natural selection, through a tedious trial-and-error process, take a very long time to generate novel characters. Indeed, this insufficiency is at the very core of Intelligent Design. Haldane’s Dilemma is alive and well. Only an intelligent agent has the capacity to plan for the future. Intelligent agency is proactive, and that proactivity is what distinguishes it from RM+NS. RM+NS is reactive: it can “learn” from past experience, but it can’t plan for future contingencies that have not been experienced in the past.

My position, which has remained unchanged for several years, is that phylogenesis was a planned sequence. Common descent from one or a few ancestors beginning a few billion years ago has overwhelming evidence in support of it. Gradualism, however, does not. Gradualism in evolution survives to this day because the only alternative to it is intelligent design. It survives not by the weight of the evidence but by the tightly held belief in philosophic naturalism shared by the overwhelming majority of practitioners of evolutionary biology. As Richard Dawkins famously wrote, “Although atheism might have been logically tenable before Darwin, Darwin made it possible to be an intellectually fulfilled atheist.” These people cling to gradualism like religious dogma because to say it is wrong is tantamount to giving up their religion.

Comments
Eugene Koonin, “The Origin at 150: Is a new evolutionary synthesis in sight?”, Trends in Genetics, 25(11), November 2009, pp. 473-475, writes:
The discovery of pervasive HGT and the overall dynamics of the genetic universe destroys not only the tree of life as we knew it but also another central tenet of the modern synthesis inherited from Darwin, namely gradualism. In a world dominated by HGT, gene duplication, gene loss and such momentous events as endosymbiosis, the idea of evolution being driven primarily by infinitesimal heritable changes in the Darwinian tradition has become untenable.
TheisticEvolutionist
September 25, 2013 at 08:15 AM PDT
vjtorley (#48), That was fascinating. The following two paragraphs are especially interesting:
Hybridisation isn't the only force undermining the multicellular tree: it is becoming increasingly apparent that HGT plays an unexpectedly big role in animals too. As ever more multicellular genomes are sequenced, ever more incongruous bits of DNA are turning up. Last year, for example, a team at the University of Texas at Arlington found a peculiar chunk of DNA in the genomes of eight animals - the mouse, rat, bushbaby, little brown bat, tenrec, opossum, anole lizard and African clawed frog - but not in 25 others, including humans, elephants, chickens and fish. This patchy distribution suggests that the sequence must have entered each genome independently by horizontal transfer (Proceedings of the National Academy of Sciences, vol 105, p 17023). Other cases of HGT in multicellular organisms are coming in thick and fast. HGT has been documented in insects, fish and plants, and a few years ago a piece of snake DNA was found in cows. The most likely agents of this genetic shuffling are viruses, which constantly cut and paste DNA from one genome into another, often across great taxonomic distances. In fact, by some reckonings, 40 to 50 per cent of the human genome consists of DNA imported horizontally by viruses, some of which has taken on vital biological functions (New Scientist, 27 August 2008, p 38). The same is probably true of the genomes of other big animals. "The number of horizontal transfers in animals is not as high as in microbes, but it can be evolutionarily significant," says Bapteste.
This is particularly fascinating when one considers that most of the genetic manipulation done by humans (a subset of ID) is in fact guided horizontal gene transfer. That bit about “The most likely agents of this genetic shuffling are viruses” is, barring studies on the probability of viruses accounting for such massive HGT in a reasonable timeframe, simply whistling past the graveyard. HGT by an intelligent agent would be expected to be much more efficient than unguided HGT, as we have seen during the last 20 years. It remains to be seen how efficient natural viral or other unguided HGT is, but it is a reasonable bet that it is orders of magnitude too small to reasonably account for the facts as known. This would make excellent research material for someone wishing to do ID-related research.

It always amuses me when an opponent of ID asks why the Intelligent Designer (they always mean God in this context) didn’t re-use successful components. The evidence seems to be more and more that He (or he, she, they, etc.) did. The opponents’ rhetoric is coming around to bite them.
Paul Giem
January 22, 2009 at 12:46 PM PDT
vjtorley FYI everyone: http://www.newscientist.com/article/mg20126921.600-why-darwin-was-wrong-about-the-tree-of-life.html
vjtorley
January 21, 2009 at 11:25 PM PDT
Btw, great 2 c u back at UD Dave. I thought u had retired.
NZer
January 21, 2009 at 08:37 PM PDT
ALL evidence for said common descent (UCD) can also be used as evidence for common design.
A genome is sort of like a tin can that’s been kicked around.
Except that it isn’t made out of tin and hasn’t been kicked.
It accumulates dings and nicks in it that are visible but don’t have any practical effect. But they make it uniquely identifiable.
How do you/we know that these identifying marks are dings and nicks? And why would something that doesn’t have any practical effect stay around intact enough to be used as an identifying mark? We are talking thousands, if not millions, of generations in which dings and nicks not only occurred but became fixed! That must have been some strong selection effect that allowed that to happen.
If common design then the designer began the new designs from an existing genome and copied all the dings and nicks into the new one.
Or convergent evolution could also explain those alleged dings and nicks. That is, certain regions of DNA are more susceptible to mutations - those “hot spots” - and as such, alleged markers can form just by chance. Common mechanism - that is, similar sequences of DNA are subject to similar mutations (or identical sequences of DNA are subject to identical mutations).
For all practical purposes that’s common descent.
Only because we don’t know any better. And what happens when biologists finally figure out that the transformations required cannot be obtained via genetic alterations?
This is basically why Dembski writes that the explanatory filter can generate false negatives - a designer can make a design look like an accident.
Or these marks are accidents. But accidents that can only occur in specific regions.
If common design then the designer made it look like common descent in every detail.
I say that only to people who cannot fathom anything else. And until universal common descent can start explaining the physiological and anatomical differences, it is not a scientific inference. Rather it is an inference based on a world-view. Ya see Dave, we have data that indicates what gives us our eye color. We know what gives us sickle-cell anemia. We know what causes many traits. Is “human” a trait? I don’t think so. However, we do not know what makes an organism what it is, other than that a human baby is born when a human male successfully mates with a human female, and a kitten is born when a Tom-cat successfully mates with a she-cat. Is the final form really just the sum of the genome? I don’t think so. At least no one has been able to make that link.
Joseph
January 21, 2009 at 05:28 AM PDT
Something for all to consider: One can say that humans are programmed for various things. If something smells a certain way you want to put it in your mouth. If something tastes a certain way you want to swallow it. A huge difference between man and computer is that man can disobey/rebel/ignore his programming. It’s impossible for a computer to do that.
tribune7
January 20, 2009 at 11:05 PM PDT
Rob
If Gil’s checkers app can anticipate future game situations and choose its moves accordingly, is it an intelligent agent?
Under an inflexible set of very simple rules, yes it is. It also has all of Gil’s knowledge of checkers incorporated into it. The take-home point for you, though, should be a realization that Gil’s checkers program doesn’t operate via random mutation and natural selection. Natural selection can only evaluate moves after a move is made. Gil’s program evaluates potential moves before they’re made and intelligently selects the one with the best projected result.
DaveScot
January 20, 2009 at 05:40 PM PDT
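DaveScot’s distinction - evaluating candidate moves before they are made, rather than selecting among outcomes afterward - is the essence of minimax lookahead. Here is a minimal sketch over a hand-built toy game tree (purely illustrative; this is not Gil’s actual program):

```python
# Minimax lookahead: score candidate moves *before* committing to one.
# Leaves carry static evaluations; interior nodes alternate max/min turns.

def minimax(node, depth, maximizing):
    """Best achievable score from `node`, looking `depth` plies ahead."""
    if depth == 0 or not node.get("children"):
        return node["score"]
    scores = [minimax(c, depth - 1, not maximizing) for c in node["children"]]
    return max(scores) if maximizing else min(scores)

def best_move(node, depth=3):
    """Pick the child whose projected (not yet played) outcome is best."""
    return max(node["children"], key=lambda c: minimax(c, depth - 1, False))

# Tiny hand-built tree: after our move, the opponent picks the worst leaf for us.
tree = {"score": 0, "children": [
    {"score": 0, "children": [{"score": 3}, {"score": 5}]},  # opponent answers with 3
    {"score": 0, "children": [{"score": 6}, {"score": 2}]},  # opponent answers with 2
]}
print(minimax(tree, 2, True))  # -> 3: the move is chosen for its projected value
```

Nothing here is tried “in the world” and then culled after the fact; every candidate is scored by projection first, which is exactly the contrast DaveScot draws with natural selection.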
Joseph
ALL evidence for said common descent (UCD) can also be used as evidence for common design.
A genome is sort of like a tin can that’s been kicked around. It accumulates dings and nicks in it that are visible but don’t have any practical effect. But they make it uniquely identifiable. If common design then the designer began the new designs from an existing genome and copied all the dings and nicks into the new one. For all practical purposes that’s common descent. This is basically why Dembski writes that the explanatory filter can generate false negatives - a designer can make a design look like an accident. If common design then the designer made it look like common descent in every detail.
DaveScot
January 20, 2009 at 05:27 PM PDT
How did this thread get into computers? Whatever...
It’s safe to say a program can be written to choose between possibilities.
Yes and no. Depends on how you define choose. A coded “if-then-else” construct is not real choosing, imo. A choice is more than binary logic. Any program merely follows through its code according to preset conditional statements. Can a program choose which user it likes best?

Example - an oversimplified program to calculate who won an election. Let’s say that Jack and John are 32-bit integer variables, the only two candidates, each with initial value 0, and each representing the number of votes received in the election. Now assume that Jack and John have been assigned values indicating how many votes were entered for each, somewhere else in the program where votes were counted. We also have a variable Winner of type string with initial value 'draw'. So a statement like:

if Jack > John then
  Winner = 'Jack'
else if John > Jack then
  Winner = 'John'
end if
if Winner = 'draw' then
  // do a recount
end if

There are other, better ways to do it, but this is just for illustration. Now, did this little program make a decision as to who won the election? Well, yes and no. But it was guided by very limited, preconceived answers.

Now say you want to write a program in which the computer can choose its favorite color. It’s obvious that no computer program could be written to do such, since computers don’t have personality and thus favorites, and color to a computer is just so many bits coding combinations of red, green and blue (RGB). Can a computer choose a favorite color? No. Ya, I know - you could simulate a color choice by coding for random combinations of RGB and then having the program randomly select a combination, etc. But that isn’t making a choice the way intelligent agents would make the same choice. Not at all.

And as far as computers gaining knowledge, that’s another myth. Computers do not gain knowledge per se - only information. Coded information is not knowledge in a machine. The computer doesn’t really ‘know’ anything. It merely contains so many bits of coded information and algorithms that process it. Nor, as in the case of Gil’s checkers program, does the program really anticipate. Not the way humans do. A program follows strict rules coded into it. In AI you would probably have a rule base, maybe a neural network, etc., and a lot of search-and-select algorithms that, after going through millions of iterations in loops, will come up with possible moves, one of which will be “chosen” by the algorithm according to its most probable outcome in subsequent moves, by some final if-then-else construct. But that still is not anticipation as per intelligent agents. That, imo, is better described as calculation than anticipation. The strict laws of boolean logic, hardwired into the CPU’s logic gates, are still in use.

But there’s nothing wrong with using such terms - we intelligent agents ‘know’ what we mean when applying them to a program. Another example: does your OS ‘know’ what time it is? Or is it just reading its CPU’s internal ‘clock ticks’ and formatting that in a human timeframe? Obviously the latter. Hope this helps.
Borne
January 20, 2009 at 05:02 PM PDT
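Borne’s election illustration, written out as a short runnable Python version (variable names taken from the comment; the recount branch is just a stub):

```python
def election_winner(jack, john):
    """Return who won: the program 'decides' only by following
    preset if-then-else branches, exactly as Borne describes."""
    winner = "draw"
    if jack > john:
        winner = "Jack"
    elif john > jack:
        winner = "John"
    if winner == "draw":
        pass  # a recount would be triggered here
    return winner

print(election_winner(1047, 998))  # -> Jack
```

Every path through the function was laid down in advance by the programmer; the “decision” is exhausted by the branch conditions.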
R0b (#37, #38), What you forget is that if humans are computers, we don't actually make choices either. That is precisely the point. Perhaps humans are merely programs. Perhaps we're not. Regardless, computers are merely programs. No amount of technological sophistication can change this.
I see many posited limitations of computational processes, but I don’t understand how these claims are supported.
Yes, and I find that very strange. Thus it would be instructive for everyone if you attempted to support your claim that computational processes can make choices.
It’s not hard to write a computer program for which no human on earth can predict its output.
Oh really? 1) Execute program. 2) Record output. 3) Execute program w/same input. 4) Confirm prediction. See? This is the sort of creative thinking that separates us from machines.
Timothy
January 20, 2009 at 04:35 PM PDT
ROb --
According to computing theory, computational processes have all of the known abilities that humans have.
Computational processes can’t create (cause to exist, bring into being). People can.
tribune7
January 20, 2009 at 03:46 PM PDT
ROb --
Should I take that as a vote that computer programs have anticipation capability?
It’s safe to say a program can be written to choose between possibilities.
tribune7
January 20, 2009 at 03:39 PM PDT
Timothy:
Nevertheless, we know that computers are simply deterministic machines that execute instructions.
Under Bohmian mechanics, everything is fully deterministic. And deterministic computational processes can be made to look as non-deterministic as you like (e.g. pseudo-RNGs), no matter what method you use to determine determinacy. Or, if you consider quantum phenomena to be truly non-deterministic, you can throw a quantum RNG into a computer if you'd like. So if humans aren't computers, I don't see how determinacy separates us.
We also have the ability to be unpredictable.
Predictability doesn't separate us either. It's not hard to write a computer program for which no human on earth can predict its output.
R0b
January 20, 2009 at 03:01 PM PDT
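R0b’s point that determinism and apparent randomness are compatible is easy to demonstrate with a linear congruential generator: the stream looks random, yet the same seed reproduces it exactly (the multiplier and increment are the common Numerical Recipes constants):

```python
def lcg(seed):
    """Linear congruential pseudo-RNG: fully deterministic,
    yet the output stream looks random to casual inspection."""
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32  # scaled into [0, 1)

a, b = lcg(42), lcg(42)
run1 = [next(a) for _ in range(5)]
run2 = [next(b) for _ in range(5)]
print(run1 == run2)  # -> True: identical 'random' sequences from the same seed
```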
I see many posited limitations of computational processes, but I don't understand how these claims are supported. Empirical evidence, e.g. "Nobody has ever made a machine that does X", may just identify limitations of current technology. So it seems that logical arguments are needed, but I've never seen any formal logic that shows anything other than the non-computability of the halting problem and its equivalents.
R0b
January 20, 2009 at 02:44 PM PDT
R0b (#33), The anthropomorphization of programs doesn't bother me; often it is very instructive. The problem is when people get caught up in the metaphors and start making arguments based on them. Yes, for the purpose of explanation, we can say that computers anticipate or choose or even think. Nevertheless, we know that computers are simply deterministic machines that execute instructions. CJYman (#26),
Doesn’t a checkers program choose to make a move based on the logic provided for it by its programmer in combination with its opponent’s move?
No, it follows the programmed rules. That is what it does.
Do you choose to do things based on a whim, or do you also choose based on logic provided by your programmer in conjunction with the “moves” that happen around you. Furthermore, if you do something based on a whim (no reason that you can tell), how do you know that there isn’t an extremely complex set of causes which operate in combination with the logic behind your choice?
I don't know.
We could say that the robot anticipated the scenario.
One could easily write an algorithm that fits a curve to sampled data and then outputs the curve's value for some arbitrary point in the future. Can you call this prediction or anticipation? Yes. But to say then that the algorithm has the ability to anticipate is deeply misleading, because this implies a very generalized ability. The program simply does what it was programmed to do, nothing more. Even if you were to collect every such algorithm and mass them into one master prediction program, it would still be completely deceptive to say that the program had the ability to anticipate, because it would still be doing nothing more than what it was programmed to do. You and I, as humans, have an extremely generalized ability to anticipate the future. We also have the ability to be unpredictable. Well, perhaps it's all an illusion. I cite Clarke's Third Law.
Timothy
January 20, 2009 at 02:29 PM PDT
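Timothy’s curve-fitting example can be made concrete in a few lines: fit a line to sampled data and read off its value at a future point. Calling this “anticipation” is exactly the usage he questions (toy data, stdlib only):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]            # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a * 10 + b)               # 'anticipates' y at x = 10 -> 21.0
```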
R0b, There are inferences that are specific to humans that cannot be accounted for by computational processes such as we see in programmed computers. How do you frame a problem? How do you form the judgment? How do you bring the relevant information to bear? What are the relevant considerations when considering a problem? Take an illustration: a man walks into a bar and orders a glass of water; the bartender pulls out a gun, and the man says thank you. It's an interesting situation - what would explain it from a computer's point of view? The answer is that the man had the hiccups. When looking at a picture, what are the salient points or aspects of it? These aren't things that fall under computational approaches.
Clive Hayden
January 20, 2009 at 02:20 PM PDT
More than 50 years of pop culture have conditioned us to associate computer programs with conscious agency rather than with machinery. Computer programs have a direct analogue in mechanical devices. I suppose that because a program's interaction with programmable microcircuitry is hidden behind the scenes, producing no visual or auditory cues, we tend to associate spooky characteristics with it. I doubt that anyone links old-school adding machines with agency, or player pianos with decision making. However, computer programs are essentially glorified versions of these, with abstractions that allow for greater complexity of design. There are no characteristics of computer programs that could not theoretically be produced with an intricate mechanical device, albeit with great difficulty.

A 1960s-era truck motor doesn't 'decide' when to produce combustion - it doesn't decide when to send spark down a plug wire or how to mix fuel. Each element of this harmony is carefully tuned, and the machine runs according to its 'programming.' The milling of the cam shaft determines the timing of valves and cylinders. The distributor sends the spark when its points make contact. There are no choices exercised by machines, nor by computer programs; their behaviors are determined strictly by their programming.

Computer programs are nothing more than elaborate decision trees, which react to input with slavish reliability, bereft of any notion one way or another. When we use words like anticipating or deciding in regard to computer programs, it's a projection of the programmer's design intent; this agency does not belong to the program in any way, shape, or form. The program's behavior is not its own. It is not responsible for its actions, nor can it rectify a bad design decision made by its engineer. A computer program is an extension of both the will and capability of the designer, in much the same way a backhoe bucket is an extension of the operator. In both cases, there's an actual decision maker in the driver's seat.
Apollos
January 20, 2009 at 02:09 PM PDT
Timothy:
Begging the question is really not very fair, is it?
My question was sincere, and was based only on the assumption that you consider humans to be capable of X, Y, and Z, while computers are not. I have no idea what question you think I'm begging. I really would like an answer to the question. In AI and machine learning circles, saying that a computer anticipates and learns and gains knowledge is unobjectionable. I'm curious whether ID proponents define those terms differently.
R0b
January 20, 2009 at 02:07 PM PDT
tribune:
They can do these things if programmed for them.
Yes, it goes without saying that computers can't do much of anything without some initial programming. So the question "Can computers do X?" should be interpreted as "Can an appropriately programmed computer do X?" Interestingly, the answer to that question is actually pretty well-established. The list of things that computer programs can't do includes solving the halting problem, producing an arbitrary number of digits of Chaitin's number, etc. Of course, humans can't do those things either, as far as we know. According to computing theory, computational processes have all of the known abilities that humans have.
R0b
January 20, 2009 at 01:56 PM PDT
Based on my posting #26, it is at least possible that humans are in part sophisticated computers following a sophisticated rulebook. The only place this breaks down is when it comes to explaining consciousness.
CJYman
January 20, 2009 at 01:51 PM PDT
R0b, Don't try to mystify things. :) A program is a set of instructions, a series of rules. Do instructions anticipate? Do they make choices? Do they acquire knowledge? Of course not. They are just rules. To say otherwise is nonsense. Choices, in principle, might be made by whatever is executing the instructions, but not so in the case of the computer. An algorithm cannot tell the computer to make a "choice." There would have to be another algorithm governing which "choice" the computer "made." Moreover, regardless of how complicated and sophisticated the algorithm, it remains an algorithm. Imagine that you are to play checkers using the checkers algorithm. Do you make any choices? Of course not. The rules tell you what to do in such-and-such a situation. That is all. Do the rules choose? No, they just are. The only choice involved is whether you obey the rules.
Are these limitations inherent to all computer programs?
Precisely.
How do you define the phrases “anticipate future situations,” “react accordingly,” and “acquire knowledge” such that humans can do them but computational processes cannot?
Begging the question is really not very fair, is it? Please discuss why you think that rules can "make choices." At any rate, whether or not humans are simply sophisticated computers following a sophisticated rulebook (this seems to be a point of some controversy), the fact remains that this is indisputably true of computers.
Timothy
January 20, 2009 at 01:43 PM PDT
tribune:
If its anticipations are correct it already had the knowledge.
Should I take that as a vote that computer programs have anticipation capability? If so, I'm counting 2 votes for and 2 votes against. This may just be a semantic issue, but I'm curious whether anyone can come up with definitions that solve it.
R0b
January 20, 2009 at 01:41 PM PDT
ROb --
Do you see the limitations of computer programs as simply a matter of limited current technology, or are these limitations inherent to all computer programs, regardless of technology?
My view is that a computer can do no more than what you put into it, regardless of the number of CPUs and the amount of RAM. But there are real experts - said without hyperbole - on this board, such as Dave and Gil, who could address that with greater authority.
If computational processes cannot, even in principle: - anticipate future situations - react accordingly
They can do these things if programmed for them.
- acquire knowledge
You can use them to data mine, but gathering facts isn't the same as knowledge.
tribune7
January 20, 2009 at 01:33 PM PDT
If a checkers program can anticipate future situations and react accordingly, surely it is acquiring knowledge
If its anticipations are correct, it already had the knowledge.
tribune7
January 20, 2009 at 01:21 PM PDT
Hello Timothy (#21), Doesn't a checkers program choose to make a move based on the logic provided for it by its programmer in combination with its opponent's move? Do you choose to do things based on a whim, or do you also choose based on logic provided by your programmer in conjunction with the "moves" that happen around you? Furthermore, if you do something based on a whim (no reason that you can tell), how do you know that there isn't an extremely complex set of causes which operate in combination with the logic behind your choice? IOW, if the idea to go to the beach just pops into your head and you choose to go, how do you know that there aren't subconscious (even random) processes which brought that thought to your mind, on which you will then make a choice based on some reasons (logic)?

Furthermore, isn't anticipation merely the ability to combine memory with the ability to process information from your environment? What if a robot had specific interactions with its environment stored in its memory, and upon seeing the beginning of a situation it had seen before, it was programmed to begin to react in advance to that previous situation? If the situation was indeed the same, the robot would react appropriately to the rest of the situation before it was completed. We could say that the robot anticipated the scenario. Indeed, isn't this how we anticipate things - remembering previous situations (even sub-consciously) and responding accordingly to our environment, albeit on a more complex level?
CJYman
January 20, 2009 at 01:05 PM PDT
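CJYman’s robot can be sketched as a lookup over remembered episodes: on recognizing the opening cue of a stored situation, it emits the remembered response before the rest of the situation unfolds (the episodes below are invented purely for illustration):

```python
# 'Anticipation' as memory plus matching: react to the start of a
# remembered situation before the situation completes.

memory = {
    "dark clouds gather": "close the hatch",   # hypothetical stored episodes
    "ball rolls into road": "brake hard",
}

def anticipate(opening_cue):
    """Return the stored reaction whose episode begins with this cue, if any."""
    for episode, reaction in memory.items():
        if episode.startswith(opening_cue):
            return reaction  # responding in advance of the full situation
    return None

print(anticipate("dark clouds"))  # -> close the hatch
```

On this sketch the robot’s “anticipation” is nothing over and above stored memory plus prefix matching, which is exactly CJYman’s point.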
This video I just uploaded fits this topic very well: Ancient Fossils That Evolutionists Don't Want You To See http://www.godtube.com/view_video.php?viewkey=c968e9cb1fb09c6eb2e1
bornagain77
January 20, 2009 at 01:02 PM PDT
And you can add:
- make choices
to the above list. Thanks.
R0b
January 20, 2009 at 12:51 PM PDT
A few questions for tribune, Timothy, and Clive: Do you see the limitations of computer programs as simply a matter of limited current technology, or are these limitations inherent to all computer programs, regardless of technology? If computational processes cannot, even in principle:
- anticipate future situations
- react accordingly
- acquire knowledge
then how do you define the phrases "anticipate future situations," "react accordingly," and "acquire knowledge" such that humans can do them but computational processes cannot?
R0b
January 20, 2009 at 12:49 PM PDT
I consider the process by which engineers create a new prototype to be in line with saltational change. Bicycle x 2 = quadricycle; now add an engine, now add wings, now add another line of engineering, say a computer. Is it fair to say we can reverse-engineer how the original Designer did it? Can we not modify organisms genetically now? All we need now is to find the cosmic laboratory.
the wonderer
January 20, 2009 at 12:36 PM PDT
Clive,
The programmer has already programmed it to play chess.
Precisely. And the ability to anticipate and/or choose cannot, by definition, be programmed. A checkers program cannot itself be intelligent, any more than a cookbook can be intelligent. I am merely suggesting that anything which is able to anticipate and make choices ought to be considered intelligent.
Timothy
January 20, 2009 at 12:16 PM PDT