Put a Sock In It

Arguments we’ve heard many times before and don’t want to hear again.

If you insist on boring us with them you won’t be with us for long.

Many of these can be found in the Pandamonium game. If you want to wage battles with these arguments, go there and fight the pandas instead of us.

Who Designed the Designer

This argument points out that, by inferring a designer from complexity in machines, the designer must also be complex. Why? Well, just because it seems like he/she/it would be. This then plunges into an infinite regress of who designed the designer, and the regress is supposed to make Intelligent Design somehow impossible. The really weird part is that the argument is broadcast to us using a computer that was itself the result of intelligent design. Intelligent Design does not speak to the nature of designers any more than Darwin’s theory speaks to the origin of matter.

Intelligent Design is Creationism in a Cheap Tuxedo

No, it isn’t. Are you capable of comprehending that a theory’s being consistent with a philosophical, religious, or metaphysical belief is distinct from its being that belief, or being founded on it? ID is consistent with religion but is not itself a religion, nor is it founded on religion. ID is also consistent with non-religious beliefs like panspermia. Creationism is an attempt to take the biblical account in Genesis and find scientific evidence for it. Religious groups are always likely to be involved to a certain extent, since ID gives epistemic support, in the form of greater explanatory power, for their theology.

Intelligent Design is no more and no less than detecting patterns that can be independently specified and whose probability of occurring by the chance interplay of matter and energy is too low to be reasonable. It meets the challenge in Darwin’s Origin of Species that if any structure in a living creature cannot be constructed by small steps, where the structure at each step is useful to the creature, then the theory of natural selection is falsified. ID is a modern scientific offshoot of philosophic arguments from design such as Aristotle’s first cause and Paley’s watchmaker, which predate unconstitutional creation science by thousands and hundreds of years respectively. The only cheap thing here is the cheap shot of censoring a valid scientific hypothesis by conflating it with a religion, so that a court will find it violates the establishment clause of the First Amendment.

Since Intelligent Design Proponents Believe in a “Designer” or “Creator” They Can Be Called “Creationists”

By that measure Darwin was a Creationist and the Theory of Evolution a Creationist theory:

There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone circling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved. –Charles Darwin

Anyone who thinks a design inference is warranted is in some sense a “creationist”. The argument hinges on conflating “creationist” with biblical creationist. One can be the former without being the latter. ID proponents may also believe in God, a Creator, Genesis, or they may be agnostic. However, ID is the effort to form scientific theories based on empirical evidence, rather than on religious texts (whether true or not).

In principle many ID proponents would not mind being labeled “creationists” in a general sense. The problem is that it causes confusion, since it doesn’t recognize the significant distinctions. Mankind has always been interested in investigating the relationship between God and nature. At times, philosophy defined the debate; at other times, science seemed to have the upper hand. What has always mattered in this discussion is in which DIRECTION the investigation proceeds. Does it move forward? That is, does it assume something about God and then interpret nature in that context? Or does it move backward? That is, does it observe something interesting in nature and then speculate about how that might have come to be? If the investigation moves forward, as does Creationism, it is faith based. If it moves backward, as does ID, it is empirically based.

Each approach has a pedigree that goes back over two thousand years. We notice the forward approach in Tertullian, Augustine, Bonaventure, and Anselm. Anselm described it best with the phrase, “faith seeking understanding.” With these thinkers, the investigation was faith based. By contrast, we discover the “backward” orientation in Aristotle, Aquinas, Paley, and others. Aristotle’s argument, which begins with “motion in nature” and reasons BACK to a “prime mover,” is obviously empirically based.

To say, then, that Tertullian, Augustine, and Anselm (Creationism) are similar to Aristotle, Aquinas, and Paley (ID) is equivalent to saying forward equals backward. It has nothing to do with subjective interpretation.

Intelligent Design is an Attempt by the Religious Right to Establish a Theocracy

Hyperbolic nonsense. This is as valid as saying Darwinian evolution is an attempt by the atheist left to eliminate religion. Now, some ID proponents may assert that certain people don’t WANT Darwin to be wrong; they simply don’t WANT there to be a designer of any type, because that threatens their world view.

There is some truth to this statement, but only to a certain extent. For example, Richard Lewontin, the eminent author and longtime Professor of Biology at Harvard, is famous for his quotes. He states: “We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but on the contrary, that we are forced by our a priori adherence to material causes….Moreover, materialism is absolute, for we cannot allow a Divine Foot in the door.”

But to be fair, Darwinists are a diverse bunch and many are religious. BUT they have their own personal philosophical views on their religion that they believe conflict with design (like “God would not have done it that way”).

Bad Design Means No Design

This argument claims that, by pointing out imperfections in living things, it somehow becomes apparent that there can’t be an intelligent agency behind them. This is really odd, as it is basically a religious argument being made against Intelligent Design. The proponent of this argument is making a faith-based assertion that God is perfect and hence incapable of bad design. ID makes no claim that the source of complexity is a perfect God incapable of imperfection. Write that down.

No Real Scientists Take Intelligent Design Seriously

Yes, they do. And even if they didn’t, this would be a logical fallacy called an appeal to authority. Ideas stand or fall on their own merits. Take a class in logic from a real logician at your nearest institute of higher learning and come back to us with a report card showing a passing grade.

“Evolution” Proves that Intelligent Design is Wrong

The meanings of evolution, from Darwinism, Design and Public Education:

1. Change over time; history of nature; any sequence of events in nature
2. Changes in the frequencies of alleles in the gene pool of a population
3. Limited common descent: the idea that particular groups of organisms have descended from a common ancestor or multiple LUCAs.
4. The mechanisms responsible for the change required to produce limited descent with modification, chiefly natural selection acting on random variations or mutations.
5. Universal common descent: the idea that all organisms have descended from a single common ancestor (LUCA).
6. Blind watchmaker thesis: the idea that all organisms have descended from common ancestors solely through an unguided, unintelligent, purposeless, material processes such as natural selection acting on random variations or mutations; that the mechanisms of natural selection, random variation and mutation, and perhaps other similarly naturalistic mechanisms, such as lateral gene transfer or endosymbiosis, are completely sufficient to account for the appearance of design in living organisms.

Creationists go with 1-4, with the change in 4 being built-in responses to environmental cues or organism direction as the primary mechanism for allele frequency change, culled by various selection processes (as well as random effects/events, choosing not to mate, or being unable to find a mate). The secondary mechanism would be random variations or mutations culled by similar processes. In other words, life’s diversity evolved from the originally Created Kinds, humans included. Science should therefore be the tool/process with which we determine what those kinds were.

Some but not ALL ID proponents go with 1-5, with the Creationist change to 4 plus the following caveat in 5: life’s diversity was brought about via the intent of a design. The initial conditions, parameters, resources, and goal were pre-programmed as part of an evolutionary algorithm designed to bring forth complex metazoans, as well as leave behind the more “simple” viruses, prokaryotes, and single-celled eukaryotes. This could be called “intelligent evolution.”

How are you defining evolution? The blind watchmaker thesis? In that case there is no positive evidence for that type of evolution based upon actual observation. What failed prediction would you consider adequate for falsification?

Real Scientists Do Not Use Terms Like Microevolution or Macroevolution

This is an urban legend; such terms have been used regularly in the scientific literature.

Campbell’s Biology (4th Ed.) states: “macroevolution: Evolutionary change on a grand scale, encompassing the origin of novel designs, evolutionary trends, adaptive radiation, and mass extinction.”

Futuyma’s Evolutionary Biology, a text I used for an upper-division evolutionary biology course, states, “In Chapters 23 through 25, we will analyze the principles of MACROEVOLUTION, that is, the origin and diversification of higher taxa.” (pg. 447, emphasis in original).

These textbooks respectively define “microevolution” as “a change in the gene pool of a population over a succession of generations” and “slight, short-term evolutionary changes within species.”

In his 1989 McGraw Hill textbook, Macroevolutionary Dynamics, Niles Eldredge admits that “[m]ost families, orders, classes, and phyla appear rather suddenly in the fossil record, often without anatomically intermediate forms smoothly interlinking evolutionarily derived descendant taxa with their presumed ancestors.” (pg. 22) Macroevolution: Pattern and Process (Steven M. Stanley, The Johns Hopkins University Press, 1998 version), notes that, “[t]he known fossil record fails to document a single example of phyletic evolution accomplishing a major morphological transition and hence offers no evidence that the gradualistic model can be valid.” (pg. 39)

The scientific journal literature also uses the terms “macroevolution” or “microevolution.” In 1980, Roger Lewin reported in Science on a major meeting at the University of Chicago that sought to reconcile biologists’ understandings of evolution with the findings of paleontology. Lewin reported, “The central question of the Chicago conference was whether the mechanisms underlying microevolution can be extrapolated to explain the phenomena of macroevolution. At the risk of doing violence to the positions of some of the people at the meeting, the answer can be given as a clear, No.” (Roger Lewin, “Evolutionary Theory Under Fire,” Science, Vol. 210:883-887, Nov. 1980.)

Two years earlier, Robert E. Ricklefs had written in an article in Science entitled “Paleontologists confronting macroevolution,” contending: “The punctuated equilibrium model has been widely accepted, not because it has a compelling theoretical basis but because it appears to resolve a dilemma. … apart from its intrinsic circularity (one could argue that speciation can occur only when phyletic change is rapid, not vice versa), the model is more ad hoc explanation than theory, and it rests on shaky ground.” (Science, Vol. 199:58-60, Jan. 6, 1978.)

Intelligent Design Tries To Claim That Everything is Designed Where We Obviously See Necessity and Chance

Behe had this to say:

Intelligent design is a good explanation for a number of biochemical systems, but I should insert a word of caution. Intelligent design theory has to be seen in context: it does not try to explain everything. We live in a complex world where lots of different things can happen. When deciding how various rocks came to be shaped the way they are a geologist might consider a whole range of factors: rain, wind, the movement of glaciers, the activity of moss and lichens, volcanic action, nuclear explosions, asteroid impact, or the hand of a sculptor. The shape of one rock might have been determined primarily by one mechanism, the shape of another rock by another mechanism.

Similarly, evolutionary biologists have recognized that a number of factors might have affected the development of life: common descent, natural selection, migration, population size, founder effects (effects that may be due to the limited number of organisms that begin a new species), genetic drift (spread of “neutral,” nonselective mutations), gene flow (the incorporation of genes into a population from a separate population), linkage (occurrence of two genes on the same chromosome), and much more. The fact that some biochemical systems were designed by an intelligent agent does not mean that any of the other factors are not operative, common, or important.

and

I think a lot of folks get confused because they think that all events have to be assigned en masse to either the category of chance or to that of design. I disagree. We live in a universe containing both real chance and real design. Chance events do happen (and can be useful historical markers of common descent), but they don’t explain the background elegance and functional complexity of nature. That required design.

The Explanatory Filter Implies that a Snowflake is Designed by an Intelligent Agent. This Proves that the Design Inference is Not Reliable!

This argument has been made various times. Here’s a recent one: Reinstating the Explanatory Filter
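For readers unfamiliar with it, the filter is a three-node decision procedure: law first, then chance, then design. A minimal sketch follows (the boolean inputs and the 500-bit cutoff here are illustrative assumptions, not the filter’s formal definition); it shows why a snowflake does not land in the design node: its hexagonal order is a product of law-like crystallization, so it is assigned to necessity at the first node and the design node is never reached.

```python
import math

# Bit threshold corresponding to the Universal Probability Bound
# (assumed here for illustration; see the discussion of 500 bits below).
UPB_BITS = 500

def explanatory_filter(explained_by_law, probability, specified):
    """Assign an event to necessity, chance, or design (sketch only)."""
    # Node 1: high-probability, law-governed regularities stop here.
    # A snowflake's hexagonal symmetry is explained by crystallization
    # physics, so it is attributed to necessity, not design.
    if explained_by_law:
        return "necessity"
    # Node 2: events whose improbability stays under the bound, or that
    # match no independent specification, are ascribed to chance.
    bits = -math.log2(probability)
    if bits <= UPB_BITS or not specified:
        return "chance"
    # Node 3: only specified events beyond the bound trigger a design
    # inference.
    return "design"

print(explanatory_filter(True, 1.0, True))       # snowflake: necessity
print(explanatory_filter(False, 2**-40, False))  # modest odds: chance
print(explanatory_filter(False, 2**-600, True))  # specified + improbable: design
```

The point of the sketch is simply that the filter is eliminative: design is inferred only after both law and chance have been ruled out, so a law-governed snowflake never reaches the design node.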

What About the spreading of antibiotic resistance?

Micro-evolution. No “special ID explanation” required. Why, do you hold the misconception that ID proponents consider everything in evolutionary biology to be false?

Also, the existence of “superbugs” confirms yet another ID prediction. Mutations are generally considered “beneficial” if they provide a benefit to an organism in a particular environment, meaning that the majority of these “beneficial” mutations are beneficial only in a limited sense: they are destructive (deleterious) modifications that provide survival benefits only in a limited environment, much as blowing up a bridge in a war is beneficial in a limited sense. Darwinism requires not merely beneficial mutations but constructive beneficial mutations in order to be feasible: mutations that are not simply a reshuffling of existing genes via sexual recombination. Most of these superbugs are fortunately of the limited-benefit type and will quickly be eradicated when exposed to normal conditions outside hospitals.


What Do You Mean by “Constructive” Beneficial Mutations Exactly?

We are looking for examples of mutations that are beneficial not only in relation to fitness but also in relation to the progressive/positive creation or significant (> UPB) modification of existing CSI. That is a different thing from the “beneficial mutations” generally discussed in the scientific literature. If there is a generally accepted term that encapsulates what we are looking for, I’m not aware of it. It’s not CSI in general, since that could be negative in relation to fitness. For example, if I were to tack a spoiler (like on a vehicle) and a retractable anchor onto a bird, I think that would not be too beneficial…

In Behe’s new book, the majority of the examples he discussed involved destructive albeit positively selected mutations, but not all. Behe also discussed the antifreeze glycoprotein gene in Antarctic notothenioid fish. In short, he says that it looks reasonably convincing as an example of Darwinian evolution, but that it’s a relatively minor development, and probably marks the limit of what Darwinian processes can reasonably be expected to do in vertebrate populations. So what we’re primarily looking for are the limitations on “constructive” positively selected beneficial mutations.

The Edge of Evolution is an estimate, and it was derived from the limited positive evidence for Darwinian processes that we do possess. This estimate would of course be adjusted when new evidence comes into play, or abandoned altogether if there is positive evidence that Darwinian processes are capable of large-scale constructive positive evolution (or at least put in another category if it’s ID-based [foresighted mechanisms]). The bulk of the best examples of Darwinian evolution are destructive modifications like passive leaky pores (a foreign protein degrading the integrity of HIV’s membrane) and a leaky digestive system (P. falciparum self-destructs when its system cannot properly dispose of toxins it is ingesting, so a leak apparently helps) that have a net positive effect under limited/temporary conditions (Behe terms this trench warfare). I personally believe that, given a system intelligently constructed in a modular fashion (designed for self-modification via the influence of external triggers), Darwinian processes may be capable of more than this, but we do not have positive evidence for this concept yet. And that would be foresighted non-Darwinian evolution in any case; even if there are foresighted mechanisms for macroevolution, they might be limited in scope.

Intelligent Design proponents deny, without having a reason, that randomness can produce an effect, and then go make something up to fill the void.

ID proponents do not outright deny that “randomness can produce an effect”. Random variation is “guided” by environmental variables, which are pseudo-random in a sense: once the environmental variables are in place, the funneling effects are predictable (deterministic to a point), but which variables occur in nature can be semi-random. ID proponents look to the relevant limitations of Genetic Algorithms and other computer simulations, where Active Information via intelligent input is required. They also analyze the data for real-world examples of Darwinian evolution and justifiably note that these processes are very limited in capability. In order to defeat our claim, a thorough stepwise indirect genetic pathway must be extrapolated from the data. The biological object in question must not only contain an Irreducibly Complex core set of components, but its informational content must be higher than 500 informational bits.
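Where does the 500-bit figure come from? A back-of-the-envelope sketch of Dembski’s Universal Probability Bound, using his published inputs (10^80 elementary particles, 10^45 Planck-scale events per second, 10^25 seconds of cosmic history), shows the arithmetic:

```python
import math

# Dembski's inputs for the Universal Probability Bound (UPB):
particles = 10**80          # elementary particles in the observable universe
events_per_second = 10**45  # maximum state transitions per second (Planck scale)
seconds = 10**25            # generous upper bound on the age of the universe

# Total possible specification-generating events in cosmic history.
total_events = particles * events_per_second * seconds
print(total_events == 10**150)  # True: 10^80 * 10^45 * 10^25 = 10^150

# Any specified event with probability below 10^-150 is taken to be
# beyond the reach of chance; in bits, that probability is about:
print(round(math.log2(10**150), 1))  # 498.3, conventionally rounded up to 500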

In the past Darwinists have attempted to discuss this topic but generally their imaginations fail to produce when their ideas conflict with basic engineering concepts:

Chance, Law, Agency, or Other?

Intelligent Design is Not a Valid Theory Since it Does Not Make Predictions

Predictions of non-functionality of “junk DNA” were made by Susumu Ohno (1972), Richard Dawkins (1976), Crick and Orgel (1980), Pagel and Johnstone (1992), and Ken Miller (1994), based on evolutionary presuppositions.

By contrast, predictions of functionality of “junk DNA” were made based on teleological bases by Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004).

These Intelligent Design predictions are being confirmed; e.g., ENCODE’s June 2007 results show substantial functionality across the genome in such “junk DNA” regions, including pseudogenes.

These predictions are further detailed in Junk DNA at Research Intelligent Design.

Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project.
The ENCODE Project Consortium, Nature 447, 799-816 (14 June 2007), doi:10.1038/nature05874

There are other predictions, but the majority of them are within the scope of ID-compatible hypotheses.

The Evidence for Common Descent is Incompatible with Intelligent Design

Darwinists will sometimes phrase this objection as: “please explain where common descent ceases to occur and design takes over.” Design IS the cause of common descent. Common descent is not really a process; rather, it is a pattern imputed to the observations of nature made by observers from the outside. It has functioned more as an abstract heuristic, akin to the practice of making sense of groups (who is in, who is out) and then relating those groups by a process of elimination. It should also be noted that ID is compatible with Universal Common Descent, Common Descent through multiple LUCAs, and other scenarios.

It is certainly true that evolution predicts only minor changes from generation to generation – but when you look at the cumulative effect of hundreds of millions or billions of replications then those many, many changes can incrementally lead to large changes.

You cannot simply stack small changes and, boom, get something complex. That is not how it works in reality. A series of small changes has to come about independently, each having positive selective pressure, and then indirectly come together to form a new whole. This is called an Indirect Darwinian pathway. The reason a Direct Darwinian pathway is not an option is Irreducible Complexity, since a Direct Pathway requires that a component have positive selective pressure for its function at every increment. Darwinists do not like to admit that IC is a factor, but that is why all current research is now directed at Indirect Darwinian pathways. Unfortunately for Darwinists, that type of scenario essentially relies on serendipity.

Macro-evolution *is* nothing but lots and lots of microevolution!

Actually, no, it’s not. That was the mainstream view years ago, when the single-gene-single-function paradigm reigned. If that concept were true and did represent biological reality accurately then, yes, we would say that Darwinism is feasible. Discoveries during the last couple of decades have made this concept very unlikely.

Quite frankly, the beliefs of many Darwinists are based upon an outdated simplistic viewpoint that is now known to be false. We believe this version of Darwinism is still taught and perpetuated because it is easier for people to swallow. In fact, the majority of ID proponents on this site used to believe it, including William Dembski and all the admins. And then we found out what modern science was telling us.

One of your fellow Darwinists was nice enough to admit this is not the case here on UD:

One of the central tenets of the modern synthesis of evolutionary biology as celebrated in 1959 was the idea that macroevolution and microevolution were essentially the same process. That is, macroevolution was simply microevolution extrapolated over deep evolutionary time, using the same mechanisms and with essentially the same effects. A half century of research into macroevolution has shown that this is probably not the case. -MacNeill

But so as to avoid being accused of unfairly quoting MacNeill out of context, we must add that he personally believes that an “evolving holistic synthesis”, as he calls it, will emerge as a consensus in the coming years and will resolve the problems facing Darwinism. So now the real question is whether ID holds true in regard to this “evolving holistic synthesis” that is yet to be finalized. I don’t think anyone could say for certain at this point; it’s too early. It’s a different question with a potentially different answer.

Nothing is Wrong with the Modern Synthesis!

The ideas that came from the modern synthesis concerning microevolution are as valid and as settled as any scientific ideas, although there may remain some tweaking to be done. Still, I find it interesting that many a Darwinist’s initial reaction is to defend the modern synthesis despite also claiming it has been superseded. Why defend what you know to be wrong? Although, to be fair, I also find it weird that ID proponents are often preoccupied with personally attacking Darwin himself and/or outdated ideas.

Personally, I think discussing RM+NS is like beating a dead horse, even though there are still many Darwinists who support it as the primary method for producing macroevolution. I’d rather move on to discussing these supposed “engines of variation”.

On “random mutations” and Allen’s claim that ID is using a strawman of evolutionary biology: ID proponents use the term to cover everything. For example, in Behe’s new book he lists all the mechanisms on one page, but in general he uses “random mutations” unless a distinction needs to be made. Given that definition, these “engines of variation” would all be encapsulated under “random mutation”. But I agree that a better term should be adopted, since “random mutation” is often conflated with the over-simplification of the modern synthesis. One could also make this distinction with “non-foresighted mechanisms”.

The Information in Complex Specified Information (CSI) Cannot Be Quantified

There are several ways of providing a quantitative number for CSI. First off, CSI references something else, or it specifies something else that is functional. The objective functionality, independent of any viewer, is the Specification.

Now, DNA is like a written and spoken language in that it has meaning only by referring to something else. We do not understand what 95% of DNA does, but we definitely understand a large subset of it: it sets up a process to produce proteins. The DNA language has 64 letters, called codons, which are combinations of the DNA nucleotides. Since codons come in threes and there are 4 possible nucleotides, there is a total of 4^3 = 64 combinations.

Each codon specifies one amino acid in a protein. This is all basic nowadays. A typical protein is 300 amino acids long, which requires 1200 nucleotides. It is possible to count the information content in such a protein or nucleotide string, and perhaps some adjustments can be made for each protein, because most amino acids have more than one codon specifying them. Also, some amino acids at different parts of the protein are interchangeable. So let’s just say that the actual information content is less than the 1200 nucleotides in the gene specifying the protein. The total number of possible combinations in a series of 1200 nucleotides is 4^1200. As I said, the actual number will be less.

What types of life are Irreducibly Complex? Or which life is not Irreducibly Complex?

It’s not life as a whole. All life contains mechanical components that are Irreducibly Complex (IC), but not all components are IC, nor do all qualify as Complex Specified Information (CSI). The question is whether unguided Darwinian processes (RM+NS, lateral gene transfer, symbiogenesis, reliance on hox genes, whatever) can produce IC and/or CSI components via Indirect Pathways. Unguided Darwinian processes are perhaps capable of producing components composed of 3-6 parts, but for comparison the flagellum is composed of 41 parts, and the most we have ever heard of being produced under observation is 2 or 3. Again, part of ID research is determining the limits of unguided Darwinian processes. Agreeing that there are beneficial mutations and limited instances of small changes is in no way a threat to ID or an admission of some sort; we have been saying this for years to deaf ears.

An IC machine cannot, by definition, be the result of a direct Darwinian pathway. Direct means that the steps are selected for the improvement of the same function we find in the final machine. IC makes a direct Darwinian pathway impossible. So only two possibilities are left: either the sudden appearance of the complete machine (practically impossible on statistical grounds), or step-by-step selection for different functions, with the target function COMPLETELY INACTIVE, and thus invisible, to natural selection. This is a point that Darwinists tend to bypass. Darwinists may believe in indirect Darwinian pathways, because it’s the only possible belief left for them, but it’s easy to see that it really means believing in impossibilities. There is no reason in the world, either logical or statistical, why a complex function should emerge from the sum of simpler, completely different functions. And even granted that, by incredible luck, that could happen once, how can one believe that it happened millions of times, for the millions (yes, I mean it!) of different IC machines we observe in living beings? The simple fact that Darwinists have to adopt arguments like co-option and indirect pathways to salvage their beliefs is a clear demonstration of how desperate they are.

In the Flagellum Behe Ignores that this Organization of Proteins has Verifiable Functions when Particular Proteins are Omitted, i.e. in its simplest form, an ion pump.

I note you refer to the ion pump. An IC machine cannot, by definition, be the result of a direct Darwinian pathway. Direct means that the steps are selected for the improvement of the same function we find in the final machine. The very fact that you even attempt to make this argument showcases that your comprehension of IC is in error!

Behe discusses this common argument in more detail here:

http://www.discovery.org/scripts/viewDB/index.php?command=view&id=1831

Here’s how Ken Miller phrased this poor argument:

“The very existence of the Type III Secretory System shows that the bacterial flagellum is not irreducibly complex.”

Reference my last comment to see why he completely misunderstands ID and why finding homologs doesn’t threaten ID. Finding the reuse of code is a prediction of some ID-compatible hypotheses. Years ago, Darwinists were SURPRISED (as usual) when they found the code to be this way.

Now an Indirect Darwinian pathway for the flagellum would not only require that the code for various components come together (be co-opted), but that the code regulating that code be modified, the location/orientation be precise, modifications be made to the original code for these components, and that new code be generated. The reason new code is needed is because not all the components in the total system may have homologs or functions separate from the whole. For the flagellum there are currently 17 unique proteins with no known homologs (by the way, the T3SS and the subsystem in the flagellum are similar but not exactly the same; Behe is researching protein binding sites to see if there are limitations that may make indirect pathways not just unlikely but impossible). Then of course there are the external systems for controlling the usage of the flagellum…kinda useless to have an outboard motor but no way of using it. Never mind overcoming the pleiotropic nature of this code, since making these changes can and will often have adverse effects. As in, in order to have positive selection the changes being made not only have to pull together a functional flagellum but they can’t have a negative effect that is worse than the positive of having the functional flagellum. Invoking exaptation like a magic wand won’t help you here.

Now Behe certainly wouldn’t deny exaptation in general since he accepts universal common descent and this deals with a system being modified to deal with different environments. An example would be bird feathers, which are said to have evolved for temperature regulation and then later evolved for flight. But that still doesn’t provide a mechanism for this evolution.

Finally, we have been discussing the supersystem of the flagellum this whole time but we have neglected to focus on the subsystems. In its own right, the T3SS is fairly complicated, comprising 11 proteins. The problem Darwinists face is that Darwinian mechanisms have never been shown capable of producing even a system like the T3SS, never mind the full flagellum.

Darwinian Evolution is a Vastly Simpler Argument than Intelligent Design

No, it isn’t. The hypothesis of a designer for obviously designed things is extremely simple, natural, and based on factual observation (the constant causal link between intelligent designers and their products in reality). I am really surprised by the inconsistent use of the concept of simplicity in discussions of Occam’s razor and the like. Everyone seems to have his own unjustified ideas of what is simple and what is not. The Darwinian theory of evolution is not simple. It is a very complex and artificial attempt to justify something which appears designed, and has appeared designed for centuries to most rational beings, without admitting the existence of a designer. In fact, it’s expanded way beyond Darwin’s original mechanism of natural selection to incorporate a whole slew of potential mechanisms. That’s not simple at all…it keeps getting more and more complicated.

The Designer Must be Complex and Thus Could Never Have Existed

This is obviously a philosophical argument, not a scientific argument, and the main thrust is at theists. So I will let a theist answer this question: “[M]any materialists seem to think (Dawkins included) that a hypothetical divine designer should by definition be complex. That’s not true, or at least it’s not true for most concepts of God which have been entertained for centuries by most thinkers and philosophers. God, in the measure that He is thought as an explanation of complexity, is usually conceived as simple. That concept is inherent in the important notion of transcendence. A transcendent cause is a simple fundamental reality which can explain the phenomenal complexity we observe in reality. So, Darwinists are perfectly free not to believe God exists, but I cannot understand why they have to argue that, if God exists, He must be complex. If God exists, He is simple, He is transcendent, He is not the sum of parts, He is rather the creator of parts, of complexity, of external reality. So, if God exists, and He is the designer of reality, there is a very simple explanation for the designed complexity we observe.”

Intelligent Design is Completely Out of Date! It’s arguing against old ideas and not modern evolutionary theory.

Not all Darwinists agree on the principal mechanism for unguided evolution. When an ID proponent refers to “Darwinism” we are referring to all proponents in all “camps” who support unguided evolution. If we refer to you as a “Darwinist” it is because the term encompasses all advances in modern evolutionary theory and/or we’re not sure what camp you consider yourself a part of. We are not referring to the original hypothesis as advocated by Darwin himself, which has been termed selectionism. Advocates of the “modern synthesis” formed in the 1930s are commonly referred to as Neo-Darwinists, and this is to this day the largest camp by far. ID proponents will often shorten the name of this mechanism to “RM+NS”. But as the USA-based National Research Council recently published, “[n]atural selection based solely on mutation is probably not an adequate mechanism for evolving complexity.” Instead, the newer camps propose that lateral gene transfer, endosymbiosis, and other potential mechanisms may be the mechanisms for creating complex genomes.

The Neo-Darwinist camp is still probably the biggest of all the Darwinist camps, but people are likely to start abandoning “Neo-Darwinism as the primary mechanism” in droves. The Neo-Darwinist camp being so large is probably primarily due to its being the only major version of Darwinism mentioned in higher education unless your degree program focuses on evolution. And of course the media almost never differentiate between the various camps. So once education and media catch up we should see the Neo-Darwinist camp shrinking even more rapidly.

Also, we differentiate by camp based upon where the individual Darwinist puts their “Darwinian mechanism/process” emphasis. This means that ID proponents are not claiming that all these Darwinian mechanisms cannot “help” each other: where one is weak/limited another may not be. Unfortunately, this can be confusing since a good (short) name that encapsulates all these other camps’ ideas has not been formulated. Some Darwinists just call it “modern evolutionary theory” but that is too long for common usage and it does not make known the distinctions.

A quote about Lynn Margulis may make this clearer:

The acknowledged star of the weekend [was] Lynn Margulis, famous for her pioneering research on symbiogenesis. Margulis began graciously by acknowledging the conference hosts and saying, “This is the most wonderful conference I’ve ever been to, and I’ve been to a lot of conferences.” She then got to work, pronouncing the death of neo-Darwinism. Echoing Darwin, she said, “It was like confessing a murder when I discovered I was not a neo-Darwinist.” But, she quickly added, “I am definitely a Darwinist though. I think we are missing important information about the origins of variation. I differ from the neo-Darwinian bullies on this point.”

Thus, even though Margulis repudiates Neo-Darwinism, she is still a Darwinist. And if she is, so is just about every other biologist who holds that teleology ought to play no substantive role in evolutionary theory.

Intelligent Design Does Not Do Research

Scientific research takes money and institutional support. Most people do not have the money to go out and build a multi-million dollar lab they can run themselves. The Discovery Institute has been funding a little research, and the CRS and ICR have been funding Creationist research, but other than that there is not much money. The Discovery Institute’s Center for Science and Culture (its program on intelligent design and evolution) only spent $1.2 million in 2003. In 2004 it spent the same, and in 2005 it spent $1.6 million. Indeed, the budget for the entire Discovery Institute, including expenditures on non-intelligent design programs on transportation, technology, and other topics, has never reached $5 million. In 2003, the Institute as a whole spent $2.5 million. In 2004, it spent $3.5 million, and in 2005 it spent $3.9 million. These facts are publicly available for anyone to check on the Institute’s Form 990s posted at www.guidestar.com.

The latest research by ID proponents is attempting to ascertain the exact limits of unguided Darwinian processes and convenient scenarios. It should be noted that Creationists can conduct research that advances Intelligent Design without directly supporting the preferred hypotheses of ID proponents which may contradict Creationism. In return, Intelligent Design proponents, who may not advocate Creationism, can produce research that may be supportive of Creationism. While there can and will be overlap the issues should not be conflated.

Here is a brief summary of a handful of major ID proponents:

Jeffrey Schwartz has been researching the mind/brain problem, which is fundamental to ID.

Douglas Axe has been researching issues in large-scale amino acid changes in proteins.

Bill Dembski has been researching further mathematical methods of design detection and evolutionary algorithms.

Behe, Snoke, Minnich, and Meyer have been researching irreducible complexity.

Lonnig and Wood have both done research on plant transposons and front-loaded evolution. Lonnig even published a peer-reviewed book called Dynamical Genetics about genetics as dynamical systems.

Walter Remine has been working on the cost of natural selection.

John Davison has been publishing on his prescribed evolutionary hypothesis.

Jonathan Wells has been working on showing how design principles can better explain the mechanism of living systems than historic principles.

Cavanaugh is working on empirical methods for non-evolutionary taxonomy.

Paul Nelson has been working on ontogenic depth.

Again, this is just a sample of some research currently being undertaken and does not represent a complete survey.

You must also take into account persecution of ID proponents. For example, Dembski and Marks were forced by Baylor to return a research grant due to the implications of the research possibly being in favor of ID. ID proponents desire to increase the amount of research being done but Darwinists usually block the way. If you are making this argument then how can you not see the hypocrisy in demanding that ID proponents do research while blocking every attempt to do so?

Intelligent Design Cannot Be Falsified

Now Dembski has written a paper which shows that evolutionary search can never escape the CSI problem (even if, say, the flagellum was built by a selection-variation mechanism, CSI still had to be fed in).

If I may interpret what I think he’s saying: even if an Indirect Stepwise Pathway were found to be capable, ID would not be falsified completely, as the problem would then be shifted to the active information injected at OOL. Essentially, this would be a combination of Design and a Chance Hypothesis.

At this point, Necessity would need to be shown to be capable of creating CSI in order to falsify ID. Since this instance of Necessity is not known, what is being looked for is an “unknown law”.

Let’s say we found a 2001-style monolith on the moon and all the planets. Design would likely be inferred. But suppose later on we discover an unknown process (a Law) that is observed to create these monoliths in space as an emergent property of an interplay of processes. ID theory would be revised to take this Law into account.

Similarly, formalized design detection in regards to biology is open to falsification based upon new observations. It’s possible there is an unknown Law operating upon biology. If evidence of this unknown Law were found, ID theory would need to be revised. The limits of this Law would be analyzed. For example, this Law may only operate under limited circumstances and be capable of producing limited forms of complex specified information. Now this is only in regards to self-replicating life; obviously a separate unknown Law or event would need to be found for OOL. But if positive evidence is uncovered that these Laws are capable of operating uniformly then the entire ID scientific program in regards to biology is kaput.

William Dembski “Dispensed” with the Explanatory Filter (EF) and thus Intelligent Design Cannot Work

This quote by Dembski is probably what you are referring to:

I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.

Bill made a quick off-the-cuff remark that was immediately grossly distorted by Darwinists, who claimed that the “EF does not work” and that “it is a zombie still being pushed by ID proponents despite Bill disavowing it years ago”.

Before he wrote it I, Patrick, had expressed to him via email my belief that the original formulation of the EF from 1998 was too simplistic (which was also pointed out here). This is not to say that it does not work in practical applications but that it’s limited in its usefulness since it implicitly rejects the possibility of some scenarios since “[i]t suggests that chance, necessity[law], and design are mutually exclusive.” For example, the EF in its original binary flowchart form would conflict with the nature of GAs, which could be called a combination of chance, necessity, and design. To clarify, I’m referring to scenarios which have a combination of these effects, not whether necessity equates to design or some nonsense like that.

In regards to biology, when the EF detects design why should it arbitrarily reject the potential for the limited involvement of chance and necessity? For example, in a front-loading scenario a trigger for object instantiation might be partially controlled by chance. Dog breeding might be called a combination of chance, necessity, and design as well. This does not mean the old EF is “wrong” but that it’s not accurate in its description for ALL scenarios. The old EF works quite well in regards to watermarks in biology since I don’t see how chance and necessity would be involved and thus they are in fact “mutually exclusive”. I’d add SETI as well, presuming they received something other than a simplistic signal.

Now Bill did expound on the practical usage of the Explanatory Filter in his book The Design Inference (read Part II). For a carefully nuanced exposition of the EF and how chance, necessity, and design relate, see especially ch. 11. The problem is that the 1998 formulation does not adequately reflect the latest work on the subject. So the EF either needs to be updated (made more detailed) in order to reflect these realities or be discarded (at least in terms of usage in regards to scenarios where chance, necessity, and design are NOT mutually exclusive).
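The 1998 formulation is literally a three-decision flowchart, which makes the “mutually exclusive” limitation easy to see when the cascade is written out. Here is a schematic sketch (my own illustration; the attribute names are placeholders, not Dembski’s formal probability bounds or specification criteria):

```python
from collections import namedtuple

# Placeholder attributes standing in for the filter's three questions.
Event = namedtuple("Event", ["high_probability", "small_probability", "specified"])

def explanatory_filter(event):
    """Schematic of the original 1998 EF: a strict either/or cascade."""
    if event.high_probability:       # a regularity/law accounts for it
        return "necessity"
    if not event.small_probability:  # intermediate probability: chance suffices
        return "chance"
    if event.specified:              # small probability AND independently specified
        return "design"
    return "chance"                  # small probability but unspecified

# The cascade returns exactly one verdict per event, so a scenario that mixes
# all three -- e.g. a GA whose output combines a designed fitness function,
# chance mutation, and lawlike selection rules -- cannot be labeled as a mix.
print(explanatory_filter(Event(False, True, True)))   # design
print(explanatory_filter(Event(True, False, False)))  # necessity
```

A reworked filter of the kind described above would presumably have to return a set of contributing causes rather than a single label.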

Personally I believe that the EF as a flowchart could be reworked to take into account more complicated scenarios and this is a project I’ve been pondering for quite a while. kairosfocus has already produced a better version on his website.

But Bill also responded thus:

In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.

I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation.

I’m hoping that the visual representation will be updated to reflect his latest work.

ID Proponents Wrongly Claim that Natural Selection Does Not Work

Allen MacNeill (and other biologists) actually tries to break natural selection into several different “neat” categories to fit the evidence that is consistently found in the fossil record (very sudden appearance, then rapid diversity, then slow decline in diversity over periods of time). Let me explain.

All naturally occurring populations exhibit what Fisher called continuous variation. That is, a range of variation in various traits that, when plotted in Cartesian coordinates, approximates a normal distribution (bell-shaped curve). In a trait that exhibits continuous variation, most of the individuals exhibit the trait at a value reasonably close to the mean, with a relatively small number of individuals exhibiting relatively large or relatively small values for that trait.

An example is height in humans, which is often illustrated in introductory biology texts with a group of students arranged along a football sideline in order of height. There are a few very short and a few very tall people, but the vast majority form a bulge in the middle of the curve. As noted by Eric, the major problem is with the “engines of variation”.

Given a population that exhibits continuous variation for a trait, it is claimed that there are three different patterns of natural selection that can result:

Stabilizing selection, in which individuals from both extreme “tails” of the normal distribution are not preserved over time (i.e. they do not have as many offspring that survive to reproduction), compared with those in the middle bulge of the curve. Under such conditions, the mean value for the trait does not change over time (hence the term “stabilizing selection”). In our example of height, stabilizing selection would be the result if individuals of average height had the most surviving and reproducing offspring (assuming that height is heritable from parents to offspring, of course).

Directional selection, in which individuals from one (but not the other) extreme “tail” of the normal distribution are not preserved over time (i.e. they do not have as many offspring that survive to reproduction), compared with those in the middle bulge of the curve. Under such conditions, the mean value for the trait changes over time, shifting toward the tail of the curve that includes the surviving individuals (hence the term “directional selection”). An example would be a population on an island all becoming pygmies over time.

Diversifying selection, in which individuals from the middle of the range (but not either extreme “tail”) of the normal distribution are not preserved over time (i.e. they do not have as many offspring that survive to reproduction), compared with those at the extreme tails of the curve. Under such conditions, the mean value splits and becomes bimodal, with two new mean values increasing in frequency, and the old mean value disappearing. This process would eventually (depending on the intensity of selection) produce two phenotypically different populations where one had previously existed (hence the term “diversifying selection”).
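The three patterns above can be illustrated with a toy simulation (an illustrative sketch only; the population size, quartile cutoffs, and inheritance model are arbitrary assumptions of mine, not taken from MacNeill):

```python
import random
import statistics

def select(pop, mode):
    """Apply one of the three selection patterns by truncating the distribution."""
    ranked = sorted(pop)
    lo, hi = ranked[len(pop) // 4], ranked[3 * len(pop) // 4]
    if mode == "stabilizing":   # cull both extreme tails
        return [x for x in pop if lo <= x <= hi]
    if mode == "directional":   # cull only the lower tail
        return [x for x in pop if x >= lo]
    return [x for x in pop if x < lo or x > hi]  # diversifying: cull the middle

def breed(survivors, n, noise=1.0):
    """Each offspring inherits a random survivor's trait value plus variation."""
    return [random.gauss(random.choice(survivors), noise) for _ in range(n)]

random.seed(1)
for mode in ("stabilizing", "directional", "diversifying"):
    pop = [random.gauss(100, 10) for _ in range(1000)]  # continuous variation
    for _ in range(20):
        pop = breed(select(pop, mode), 1000)
    print(f"{mode}: mean={statistics.mean(pop):.1f} sd={statistics.stdev(pop):.1f}")
```

Run for a few dozen generations, stabilizing selection keeps the mean put and narrows the spread, directional selection shifts the mean toward the surviving tail, and diversifying selection drives the spread outward toward a bimodal distribution.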

Thus, with these different versions of natural selection, Darwinism becomes unfalsifiable, for they can explain both the rapid diversification of a fossil type and the lack of diversity in that fossil type thereafter. A Darwinist can explain anything he wants in the fossil record whenever he wants his theory to fit the evidence; i.e., his “new” theory has greater weight than the evidence has to falsify it.

I agree that some ID proponents may go too far, and many forget that Natural Selection is the non-random component of Darwinism. I know some ID proponents have argued in the past for an intelligent mechanism for the finch beaks, but we can look at GAs and see that fitness functions do work when properly balanced (which is active information).

The problem is that Darwinists presume this balancing act and thus that natural selection is capable of operating uniformly. As in, for ALL targets in a search space there exist environmental factors capable of creating diversifying or directional selection to the extent that features become fixated within a population. I have no problem with the assertion that this works for SOME cases, just not ALL.

The reason I think this is an issue is that selection usually relies on environmental factors (I say usually since there is artificial selection, as with dogs). While some factors are generalized, some factors must be very specific in order for the funneling effect to work. What if, as with these peacock feathers, the factors are very rare or don’t even exist? That means that in order for Darwinism to work, not only does Functional Complexity have to emerge, it must be paired with a rare event that offers selective pressure.

Now ID proponents don’t dispute the notion of stabilizing selection. They dispute the notion that there’s a kind of selection other than stabilizing selection that can operate successfully to the point of macro-evolution. This does not mean that selection in general does not happen (think finch beaks, blind cavefish, malaria, ice fish, etc.), but Berlinski would probably say it’s not special enough to warrant a separate categorization. Or at least that directional selection is exceedingly rare and can only operate under limited conditions/environments and thus for a very short amount of time (or at least it had better be short-lived…directional selection tends to decimate a population, as was seen with the finches). Personally I’m fine with people making these categorical distinctions since they’ve only been shown to be capable of trivial changes.

Now as I’ve pointed out before, the major issue is that natural selection is essentially a funnel, and it must be balanced in order to produce results. For example, a while back I ran an experiment with a GA that performed word searches. I’m going from memory here, so the short version is that there were multiple versions of the fitness function: a) pseudo-random search; b) a function that attempted to emulate Darwinism; c) a function that incorporated some active information about the target; d) explicit directed front-loading. The target was less than 200 informational bits, but only (c) and (d) were capable of finding it. The most difficult target, at 360 informational bits, required (d).
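The original experiment’s code isn’t reproduced here, so the following is a from-scratch sketch of the underlying point: a blind fitness function gives selection nothing to funnel, while a fitness function that leaks per-letter information about the target (active information, essentially Dawkins’ “weasel” setup) converges quickly. The target string, population size, and mutation rate are all illustrative assumptions:

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "

def evolve(target, fitness, generations=1000, pop_size=100, mut_rate=0.05):
    """Keep-the-best GA: each generation mutates copies of the current best."""
    best = "".join(random.choice(ALPHABET) for _ in target)
    for _ in range(generations):
        if best == target:
            return True
        children = ["".join(c if random.random() > mut_rate
                            else random.choice(ALPHABET) for c in best)
                    for _ in range(pop_size)]
        best = max(children + [best], key=fitness)
    return best == target

target = "methinks it is like a weasel"  # 28 chars * log2(27) ≈ 133 bits

def blind(s):   # no information about the target leaks into the search
    return 1 if s == target else 0

def guided(s):  # active information: per-letter matches are rewarded
    return sum(a == b for a, b in zip(s, target))

random.seed(0)
print("blind search found target:", evolve(target, blind))
print("guided search found target:", evolve(target, guided))
```

The guided function succeeds precisely because it embeds information about the target into the funnel, which is what the active-information point above is about.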

The point is that selection must be constrained and balanced long enough that the trait becomes fixated. The problem with the finch example is that once the environment changes back to normal the finch population also reverts back to being a mixed population based upon continuous variation. As in, the changes purportedly funneled by directional selection don’t stick (they are not fixated). Some Darwinists like to say that in order for such changes to fixate the environment must be permanently altered as well. Well…in the finches’ case it’s apparent from their dwindling numbers that this would likely cause extinction of that population within that environment. Even if they did survive and the trait did fixate within the population, it’s unknown whether the finches would permanently lose the ability to produce beaks of different sizes if the environment changed once again far off into the future.

A Darwinist put it this way: “sufficient conditions for long-term improvement [and fixation, I might add] to be likely are quite complicated.” Tell me about it… Here’s an example with flying squirrels, which have numerous balanced morphological changes in order to properly glide.

Dawkins speculated that falling from trees provided the environmental funnel. How many squirrels died jumping out of trees before some of them found out that they were lucky enough to have mutant extra skin along with modifications to the spine and ligaments in order to allow them to glide? How many squirrels have to fall to their deaths for such a change to become fixated in the population? Do we have any data at all on deaths caused by falls or is it all speculation? The automatic tendon locking mechanisms of such creatures should keep most of the corpses of natural deaths up in the trees I would imagine. What environment would provide this selective pressure? Unfortunately for such speculations, ordinary squirrels have been observed to fall from great heights with little or no injury. So are we now forced to hypothesize a limited set of environments which may include trees that would regularly cause death by falling?

The reason I ask all this is because evolutionary biology claims to have all this predictive power, so answering these questions should be easy. If this particular hypothesis (death by falling providing the environmental pressure) does not match reality, what scenario is plausible? After all, there needs to be some sort of plausible scenario since these traits are shared in divergent species and are supposed to be the result of convergent evolution.

The recent article HOW TO MAKE A FLYING SQUIRREL: GLAUCOMYS ANATOMY IN PHYLOGENETIC PERSPECTIVE (2007) suggests that, since leaping distance scales with size, a smaller species would benefit more from gliding. So perhaps the selective pressure would be a smaller species competing with a larger species? Unfortunately, no data is provided for this hypothesis so we cannot evaluate whether this would provide enough selective pressure. It might be another peahen story-telling session.

They also briefly mention that evolving from a ground-based ancestor would be unlikely, presumably because of the low positive selective pressure for gliding. But again, we’re back to the problem of needing regular directional selection in order to fixate these changes in the population. Also, in order for these changes to be beneficial in the first place they have to be balanced (look up that squirrel article to see just how balanced). And if they’re not balanced they’re unlikely to provide much benefit (it’s neutral) and thus will be lost.

Having said all that, in general I don’t see an issue with unguided Darwinian mechanisms being capable of making these particular changes considering their “relative” simplicity and apparent modularity (then again, it may be front-loading) which “I” think “might” allow for a stepwise pathway. I just think it disconcerting that the focus of that recent article–which should represent the latest findings on this subject–seemed to be on making comparisons between samples. Darwinian mechanisms as the source of evolution were generally assumed to function, without any evidence of this being the case. The problems related to natural selection were never addressed. This is ironic since the article is entitled “HOW To Make a Flying Squirrel”.

Now Darwinists always start with the assumption of simplicity giving rise to higher complexity. Some ID proponents present this alternate scenario: What if ALL of the original squirrels could glide? After all, it’s far easier to suffer a deleterious mutation, and the survival benefit from this particular feature is negligible in most circumstances. The same could be said of the bat, where some species have echolocation and others do not. What if the original bat had echolocation and then over time some divergent lines lost it? Now before anyone accuses me of being a YEC, which I’m not, this scenario is compatible with YEC/OEC and front-loading hypotheses where the change program self-terminates at the final form and then deleterious mutations eventually occur.

Another issue is that oftentimes Darwinists are dealing with mathematical models. It is claimed that fitness should not be measured by actual success (or, more precisely, lifetime reproductive success, LRS). Instead, fitness should be the mathematical expectation of LRS in the environment. So it’s possible that Darwinism may “work” in the mathematical models while the models do not match reality. Models depend on empirical data and definitions. GIGO: Garbage In, Garbage Out.

Needless to say, I’m not sure if all this makes the issue ever more confusing.

Intelligent Design Makes No Scientific Observations

While Darwinists have been happy to rely on story-telling, Behe went and tried to find evidence for what Darwinian mechanisms are known to be capable of. The result of that research was published recently in The Edge of Evolution.

Behe’s latest work of analyzing what billions of trillions of replications of p.falciparum accomplished in the way of generating novel complexity without benefit of intelligent agency supports the prediction that only intelligent agency is capable of producing complex specified information. Random mutation and natural selection is almost universally regarded as a process which can generate complex machinery de novo. In principle this might be true – given enough time and opportunity to overcome statistical improbabilities and presuming that genetic entropy (deleterious mutations) does not outpace positively selected Darwinian processes. In practice all observations say there has not been sufficient time and opportunity. The universe is not believed to be infinite in either its temporal or spatial dimensions, and the earth environment is far more limited. The modern synthesis, RM+NS, is the front runner for an alternative mechanism to intelligent agency. Under close observation in a fast eukaryote reproducer, in orders of magnitude more reproduction than all the mammals that ever lived, RM+NS failed to even remotely approach generating any of the genomic complexity that distinguishes modern mammals from their reputed reptilian ancestors. This is a more compelling example of a successful prediction for ID than the confirmed prediction of cosmic background radiation was for the big bang theory. Possibly further research will reveal a reason why RM+NS failed to produce any significant complexity in p.falciparum, but as it stands now there is no good explanation for the absence of any significant new complexity with such vast opportunity for it to self-organize.

Behe is Jumping to Conclusions. p.falciparum Did Not Evolve Because It Did Not Need to Evolve. In Other Words It is So Perfect Already That It Cannot Improve Upon Itself.

This answer, aside from being in opposition to neo-Darwinian postulates of random evolutionary trajectories (the answerer seemed to be channeling Lamarck), is quite wrong in the face of the facts of what p.falciparum “needed” in the way of differential reproduction.

Examples:

1. p.falciparum is excluded from a vast reproductive opportunity because it cannot survive outside tropical and sub-tropical climates. Extending its range into temperate climates would vastly increase its reproductive potential. Evidently the necessary mutations for this require more than just a few interdependent mutations. It failed to increase its range in billions of trillions of replications.

2. The human-produced and administered drug chloroquine has killed billions of trillions of individual p.falciparum, yet in billions of trillions of mutational opportunities to resist this drug, which requires just a few point mutations, the parasite only found a way to resist it, through random mutation and natural selection, about 10 times. In none of those 10 cases did the RM+NS-improved version of the parasite pass the improvement on to the parasite population at large.

3. A hemoglobin mutation in humans (sickle cell) confers resistance to p.falciparum (causing it to starve as the mutated hemoglobin clogs up its digestive mechanisms). Again in trillions of mutational opportunities p.falciparum failed to evolve any means of surviving in the sickle cell environment. Evidently this too requires more than just a few chained interdependent mutations.

How does modern evolutionary theory, with all its glut of potential Darwinian mechanisms beyond the modern synthesis’s RM+NS, explain these failures to evolve complex structures under intense selection pressure when given far more opportunity to evolve than all the mammals that ever lived?
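The arithmetic behind these examples can be sketched in a back-of-envelope way. The figures below are round illustrative assumptions of mine (a per-site mutation rate on the order of 1e-8 per replication, and “billions of trillions” taken as roughly 1e20 replications), not Behe’s exact published numbers:

```python
# Why "just a few interdependent mutations" is a wall: if k specific point
# mutations must all be present together before any benefit exists, the
# per-replication probability is roughly u**k.

u = 1e-8              # assumed per-site mutation rate per replication
replications = 1e20   # "billions of trillions" of parasites, order of magnitude

for k in range(1, 5):
    p = u ** k                     # chance one replication hits all k sites
    expected = replications * p    # expected number of parasites with all k
    print(f"k={k}: probability {p:.0e}, expected occurrences {expected:.0e}")
```

On these rough numbers the expected count drops by eight orders of magnitude for each additional required mutation, so exactly where the edge falls depends on the precise rates, but the cliff itself is generic: single mutations arise constantly, while three or four simultaneous interdependent mutations are expected essentially never, even given that many replications.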

ID Proponents Talk a Lot About Front-Loading But Never Explain What It Means

In engineered systems various possible contingencies are anticipated and processes are put in place to deal with them if and when any particular contingency actually arises in the future.

These forward looking predetermined responses are “front loaded” – put in place before they are actually needed.

Chance & necessity is a reactive process that cannot plan ahead. Intelligent design is a proactive process that can plan ahead.

Lenski’s Research on Citrate-Eating E. Coli Refutes Behe’s Edge of Evolution Hypothesis

The whole point of Behe’s new book was to try to find experimental evidence for exactly what Darwinian mechanisms are capable of. On the other hand we have speculative indirect stepwise pathway scenarios, but so far the OBSERVED “edge of evolution” doesn’t allow these models to be feasible. But this “edge” is an estimate based upon a limited set of data, which in turn “might” mean the estimated “edge” is far less than the maximum capable by Darwinian mechanisms. If Darwinists would bother to do further experiments they might find that this “edge” can in reality be extended. Then if this newly derived “edge” is compatible with these models, so be it (though I’ll add the caveat that the “edge” might be better for Darwinism only in limited scenarios). In the meantime they’re just assuming the “edge” allows for it. Even worse, unless I failed to notice the news, the very first detailed, testable (and potentially falsifiable) model for the flagellum has yet to be fully completed (I realize there are people working on producing one), so a major challenge of Behe’s first book is yet to be refuted, never mind the new book.

Darwinists should stop pretending they have the current strongest explanation. I’ll fully acknowledge they’re formulating a response in the form of continued research, new models, and such, but the plain fact is that they’re missing all the major parts of their explanation. This might change in the future, but it may not.

Or at least the situation hasn’t changed based upon a recent conversation where I asked for the functional intermediates in the indirect stepwise pathway to be named…and was never answered. Comment #203 summarizes that discussion, and should be read in full, but I thought this was the kicker:

if the final function is reached through hundreds of intermediate functions, all of them selected and fixed, where are all those intermediates now? Why do we observe only the starting function (bacteria with T3SS) and the final function (bacteria with flagella)? Where are the intermediates? If gradualism and fixation are what happens, why can’t we observe any evidence of that? In other words, we need literally billions of billions of molecular intermediates, which do not exist. Remember that the premise is that each successive step has greater fitness than the previous. Where are those steps? Where are those successful intermediates? Were they erased by the final winner, the bacterium with flagellum? But then, why can we still easily observe the ancestor without flagella (and, obviously, without any of the amazing intermediate and never-observed functions)?

gpuccio was also gracious enough to assume the T3SS as a starting point. Dave pointed this out long ago:

The gist of it, as I recall, is that the t3ss appears on a small number of bacteria that prey on eukaryotes. It’s a weapon used to inject toxins into the prey. Meanwhile the flagellum appears on a large number of bacteria that don’t prey on eukaryotes. Thus saying that the t3ss predates the flagellum is like saying that anti-aircraft missiles predate aircraft. Non sequitur. Flagella were useful to bacteria before eukaryotes appeared, but a t3ss would be useless before then.

The reasonable conclusion is that the T3SS devolved from the flagellum rather than the flagellum evolving from the T3SS. This scenario, which actually makes sense in the context of a tree of life beginning with bacteria, is also congruent with what we actually observe in nature today – useful things devolving from something more complex.

You are apparently parroting the chosen line of the Darwinian community:

Lenski’s experiment is also yet another poke in the eye for anti-evolutionists, notes Jerry Coyne, an evolutionary biologist at the University of Chicago. “The thing I like most is it says you can get these complex traits evolving by a combination of unlikely events,” he says. “That’s just what creationists say can’t happen.”

What a nice PR strategy: assert that your opponents are making claims they are not, then blow away the fake claim. AKA a strawman. Yet we’re never given the space to defend ourselves against such outrageous tactics.

For example, Darwinists were previously accusing Behe of ignoring pyrimethamine resistance in malaria as an example of cumulative selection. In fact, Behe neither denies the existence of cumulative selection nor omits mention of pyrimethamine as an example; he actually spends more than a full page discussing pyrimethamine resistance. Here is a small portion of what Behe wrote about it in The Edge of Evolution.

Although the first mutation (at position 108 of the protein, as it happens) grants some resistance to the drug, the malaria is still vulnerable to larger doses. Adding more mutations (at positions 51, 59, and a few others) can increase the level of resistance.

Explaining how he covered cumulative selection, Behe writes in his Amazon blog:

I discuss gradual evolution of antifreeze resistance, resistance to some insecticides by “tiny, incremental steps, amino acid by amino acid, leading from one biological level to another,” hemoglobin C-Harlem, and other examples, in order to make the critically important distinction between beneficial intermediate mutations and detrimental intermediate ones.

So the “ignoring cumulative selection in an indirect pathway” argument is a complete strawman. Behe’s position is that the creative power of cumulative selection is extremely limited and is not capable of traversing necessary pathways that are potentially tens or hundreds of steps long, and he backs up this position with real-world examples of astronomical populations getting very limited results with it. This is something the critics don’t really address.

The fact that the opponents of Behe’s book find the need to repeatedly lie and misrepresent the book (Carroll and Miller) or avoid the subject matter altogether (Dawkins) shows exactly how good Behe’s book is. In spite of having more reproductive events every year than mammals have had in their entire existence, malaria has not evolved the ability to reproduce below 68 degrees Fahrenheit. Nick Matzke’s explanation for this was that “in cold regions all the mosquitoes (and all other flying insects) die when the temperature hits freezing.” Think about it. Malaria cannot reproduce below 68 degrees Fahrenheit. Water freezes at 32 degrees Fahrenheit.

Musgrave and Smith have asserted that their research into HIV has disproved Behe’s hypothesis. To illustrate how out to lunch Musgrave and Smith are on Behe’s Edge of Evolution: on page 143 Behe writes that the estimated number of organisms needed to create one new protein-to-protein binding site is 10^20. Further down the page, Behe notes that the population size of HIV is, surprise, within that range. So according to Behe’s own thesis, HIV should be able to evolve a new protein-to-protein binding site. So along come Smith and Musgrave, who point out a mutation clearly within Behe’s thesis and then declare victory, when in fact they have not contradicted Behe at all. All Behe needs to do is update his book to include that example.

How about an actual example where a more complex organism is less fit than its simpler counterpart? Depends on the complexity being looked at, does it not? Let’s take a look at TalkOrigins’ example of people with “monkey tails”. I have no problem calling that “complexity” in a generalized sense. As in, not CSI, but a continuation of a process beyond its normal termination. I’m not sure what positive effects they have. From what I remember they’re not articulated and cannot serve as an additional limb. But I’m pretty sure they’d act as the opposite of a peacock’s feathers (which, BTW, have their own issues), dramatically reducing those individuals’ chances of reproducing. Ditto for additional/non-functioning mammary nipples and other examples that turn off the opposite sex.

The situation is complicated enough that there can’t be blanket statements. There can be increments in complexity where the tradeoff is more positive than negative. But that’s why ID doesn’t make blanket statements…there is a complexity threshold. And that’s why Behe is trying to find an “edge of evolution”. While an estimate has been arrived at, I don’t think that “edge” has been found yet. Personally I think it “might” be greater than where some ID proponents envision it to be. Perhaps the “true edge” is around 6 steps in an indirect stepwise pathway. But I could be wrong.

The perspective of ARN:

There are several observations that should be made before reaching general conclusions. The first relates to the machinery needed to metabolise citrate. The system to do this is already largely in place, but one enzyme is lacking. This is the comment from Mike Behe: “Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it’s not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a “citrate permease” which can transport citrate from outside the cell through the cell’s membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.””

Consequently, it is at least worth asking the question whether the E. coli bacterium had, in the past, lost the ability to metabolise citrate and what we are now seeing is a restoration of that damaged system. If this were the case, we should not be talking about “a major evolutionary innovation” but rather about the way complex systems can be impaired by mutations.

It demonstrates a major problem for those evolutionists who want to claim Darwinism can achieve major transformations. These mutations are not only rare, they are also useless without the pre-existence of a biochemical system that can turn the products of mutation into something beneficial.

The Edge of Evolution is an estimate and it was derived from the limited positive evidence for Darwinian processes that we do possess. This estimate would of course be adjusted when new evidence comes into play or abandoned altogether if there is positive evidence that Darwinian processes are capable of large scale constructive positive evolution (or at least put in another category if it’s ID-based [foresighted mechanisms]). The bulk of the best examples of Darwinian evolution are destructive modifications like passive leaky pores (a foreign protein degrading the integrity of HIV’s membrane) and a leaky digestive system (P. falciparum self-destructs when its system cannot properly dispose of toxins it is ingesting, so a leak apparently helps) that have a net positive effect under limited/temporary conditions (Behe terms this trench warfare). I personally believe that given a system intelligently constructed in a modular fashion (the system is designed for self-modification via the influence of external triggers) Darwinian processes may be capable of more than this, but we do not have positive evidence for this concept yet. But that’s foresighted non-Darwinian evolution in any case, and even if there are foresighted mechanisms for macroevolution they might be limited in scope.

We’re talking basic engineering here. When the code is pleiotropic you have to have multiple concurrent changes that work together to produce a functional result. Hundreds of simple changes adding up over deep time to produce macroevolution are not realistic. And, yes, I’m aware that the modular design of the code can allow for SOME large scale changes, especially noticeable with plants, but this is not uniform. Nor is it usually coherent (cows with extra legs hanging from their bodies, humans with extra mammary glands or extensions of their vertebrae [tails], flies with eyes all over). Nor non-destructive for that matter. And whence came the modularity? And we’re looking for CONSTRUCTIVE BENEFICIAL mutations that produce macroevolution. Darwinists cannot even posit a complete hypothetical pathway!
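The “multiple concurrent changes” point is, at bottom, a multiplication of small probabilities. Here is an illustrative back-of-envelope sketch with invented round numbers (not Behe’s published figures): if no subset of the changes is beneficial on its own, selection cannot help until all of them are present, and the waiting time explodes with the number of required changes.

```python
# Illustrative only: expected waiting time for k *specific* concurrent
# changes when no subset is beneficial by itself. Rates are made-up
# round numbers, not measured values.
MU = 1e-8   # per-site mutation rate per replication (illustrative)
N = 1e12    # replications per year across the population (illustrative)

def expected_wait_years(k, mu=MU, n=N):
    """Expected years until one replication carries all k changes at once."""
    p_all_at_once = mu ** k     # independent changes multiply
    return 1.0 / (p_all_at_once * n)

# The wait grows geometrically with k: fractions of a year for one
# change, but astronomically long once several must occur together.
for k in (1, 2, 3):
    print(k, expected_wait_years(k))
```

The disputed premise, of course, is whether each intermediate step is individually selectable; if it is, the multiplication above does not apply, which is why the existence of functional intermediates is the crux of the argument.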

Previous discussions about the EoE:

Ken Miller, the honest Darwinist

Do the facts speak for themselves

ERV’s challenge to Michael Behe

Darwinist Predictions

P.falciparum – No Black Swan Observed

PBS Airs False “Facts”

The main point remains: at this time Darwinism does not have a mechanism observed to function as advertised. Should we continue research on proposed engines of variation? Definitely. When Edge of Evolution was released I believe I said that would make a good follow-up (considering each proposed mechanism one by one, and of course their cumulative effect).

The Evidence for Gradualism in the Phylogenetic Tree of Life is Overwhelming

I suggest you read the latest studies:

Bushes in the Tree of Life

recent analyses of some key clades in life’s history have produced bushes and not resolved trees.

The patterns observed in these clades are both important signals of biological history and symptoms of fundamental challenges that must be confronted.

Wolf and colleagues omitted 35% of single genes from their data matrix, because those genes produced phylogenies at odds with conventional wisdom

The evidence presented here suggests that large amounts of conventional characters will not always suffice, even if analyzed by state-of-the-art methodology.

I think it would help the conversation to differentiate between Darwinian Common Descent and Common Descent compatible with ID hypotheses (which is fine with the above picture of bushes and not a consistent TOL). For example, from a Darwinian viewpoint you wouldn’t expect to find the same information being used in divergent lines if they’re geographically isolated or if the divergence supposedly took place a very long time earlier.

ID proponents typically interpret homology as compatible with universal common descent, common descent from multiple LUCAs, or Designer Information Reuse (which is itself compatible with multiple scenarios). Instead of resolving a tree we’re getting bushes since we cannot find the gradual informational links that would be expected of Darwinian Common Descent. Continuing the theme of “islands of functional information”, these bushes could also be called “archipelagos of functional information” which must be bridged by informational leaps. Designed mechanisms could bridge (or traverse) these informational leaps, thus producing Designed Common Descent.

Wolf and colleagues even discuss the “high frequency of independently evolved characters” aka convergent evolution. Another thing that ID proponents predicted is homologous information where none would be expected if Darwinian mechanisms were responsible for macro-evolution. Like the platypus, for example, whose genome is a patchwork of mammal, reptile, and bird.

Chromosomal sex determination in the platypus was also discovered to be a combination of mammal and bird systems. Yet TalkOrigins says:

birds are thought to have evolved from dinosaurs in the Jurassic about 150 million years ago, and that mammals are thought to have evolved from a reptile-like group of animals called the therapsids in the Triassic about 220 million years ago. No competent evolutionist has ever claimed that platypuses are a link between birds and mammals.

Oops.

Often the convergent-evolution storytelling card is played…you’d think Darwinists would have run out of cards in that deck by now. Universal common descent from a single LUCA may itself be true, but the historical narratives we have now may not be.

Even if the Tree of Life is Not Gradual We Still See a Bottom-To-Top Pattern as Darwin Predicted Would be Found

The real question is: Does the fossil record confirm or contradict the bottom to top pattern?

Here is James Valentine’s answer:

Darwin had a lot of trouble with the fossil record because if you look at the record of phyla in the rocks as fossils why when they first appear we already see them all. The phyla are fully formed. It’s as if the phyla were created first and they were modified into classes and we see that the number of classes peak later than the number of phyla and the number of orders peak later than that. So it’s kind of a top down succession, you start with this basic body plans, the phyla, and you diversify them into classes, the major sub-divisions of the phyla, and these into orders and so on. So the fossil record is kind of backwards from what you would expect from in that sense from what you would expect from Darwin’s ideas.

Now Valentine is a believer in a Darwinist approach to evolution, so I would assume that the above quote is not biased.

Lateral (Or Horizontal) Gene Transfer (LGT) is Strong Evidence Against ID

When you think about novelty, think about both morphological and genetic novelty. James Valentine, the most knowledgeable of all paleontologists on the Cambrian Explosion, makes the argument that the novelty of body forms in the Cambrian came about through relatively simple changes in patterns of gene expression. This would presumably be because of the modular nature of their body plans.

Valentine sounds like he is describing a common ID-compatible hypothesis. Whence came the modularity? And foresighted mechanisms? Why should ID proponents reject the capabilities of LGT if the system is designed to take advantage of it? (Although I will add that so far evidence for the extent of the capabilities of LGT is fairly limited, yet it is assumed to be very active regardless.)

I’d also like to add that a distinction should probably be made between Undirected/Non-Foresighted LGT and Active/Foresighted LGT. Now it’s possible that a Designer could build a system in expectation of undirected processes, but at least one ID-compatible hypothesis would expect A-LGT. For example, viruses as a whole could have once contained the functionality of networking all organisms and triggering macro-evolutionary events.

The problem with such functionality is that it would be expected to be highly susceptible to deleterious mutations since it’s not necessary for survival and thus this particular biosphere system for evolving would degrade with time. Eventually we’d be left with simple replicators totally focused on self-survival as we see today. BUT since they’re such fast replicators we might hope that some remnant of this functionality might survive.

Symbiosis Theory as Promoted by Lynn Margulis is Evidence Against ID

We should probably not consider symbiosis to be a subset of Darwinian mechanisms. But if you insist that we define it as such I have no problem with conceding that under your definitions that a subset of Darwinian mechanisms is “apparently” capable of producing macro-evolution. But it’s not like you can define ID away…

The available examples seem to interestingly show that there is very active interaction between host and symbiont, and different forms of adaptation. Frankly, nothing of that seems to have the characteristics of random variation. I would not call those things “macroevolution”, or, if we want to call them that way, they are examples of a kind of “macroevolution” which is very different from the classical form.

Here we observe active adaptation between existing species, and the biologic information for the expressed functions is already there; it is actively mixed, shared, and in some way readjusted. The process, although more complex, seems similar to lateral gene transfer between bacteria. Shall I point out that LGT is a common procedure, and that it is in some way part of the natural functions of bacteria? Symbiosis too is a common event in nature. It is a form of cooperation, not of generation of new functions from random events.

It’s logical to consider exchange of information and symbiosis as important elements in the modeling of living beings. They are, from bacteria to humans. But that changes nothing in the fundamental problem of the genesis of information.

Biological information must exist before it can be exchanged or remixed. All biological information, at its fundamental level, cannot just be the product of “exchange”. Proteins with their functions, the DNA code, regulation networks, complex molecular machines, body plans, transcriptome regulations, and so on, all must exist and work, and then they can sometimes be exchanged and/or shared between different living beings. So, arguments such as symbiosis and similar mechanisms are of no help to explain the origin of information.

Genetic Entropy is False and Thus ID is Falsified as Well

First, Sanford may be an ID proponent but his particular hypothesis should not be conflated with ID theory.

Second, Sanford’s hypothesis would only affect higher creatures. Let’s not forget, again, that high replicators like bacteria and viruses should avoid genetic entropy. Higher creatures with relatively low replication rates should be the focus of genetic entropy in the first place. UD has featured whole articles on the subject.

Genetic Entropy and Malarial Parasite p.falciparum

Third, Genetic Entropy does not necessarily imply immediate extinction….

In any case, see: Mutational Meltdown for micro examples of genetic entropy. It appears the principles ought to be partially scalable to large populations, except larger populations serve to slow the meltdown…

Since we’re mentioning Muller, here is a passing reference: Muller’s Ratchet and Mutational Meltdown

Also Muller’s Ratchet and Compensatory Mutation:

Evolutionary theory predicts that mutational decay is inevitable for small asexual populations, provided deleterious mutation rates are high enough. Such populations are expected to experience the effects of Muller’s Ratchet [1,2] where the most-fit class of individuals is lost at some rate due to chance alone, leaving the second-best class to ultimately suffer the same fate, and so on, leading to a gradual decline in mean fitness. The mutational meltdown theory [3,4] built upon Muller’s Ratchet to predict a synergism between mutation and genetic drift in promoting the extinction of small asexual populations that are at the end of a long genomic decay process. Regardless of reproductive mode, mitochondrial genomes from most animal species are expected to be particularly sensitive to Muller’s Ratchet due to their uniparental inheritance, high mutation rates and lack of effective recombination [3,5,6]. The genomic decay effects of Muller’s Ratchet have been observed in laboratory evolution experiments with abiotic RNA molecules [7], biotic RNA viruses [8], bacteria [9] and yeast [10]. Indirect evidence for the effects of Muller’s Ratchet in nature has resulted from studies on the long-term effects of reduced population sizes on genetic diversity and fitness in amphibians [11], greater prairie chickens [12,13] and New Zealand avifauna [14]. Molecular evidence for Muller’s Ratchet has resulted from analyses of deleterious tRNA gene structures encoded by mitochondrial genomes [15] and analyses of Drosophila sex chromosome evolution [16]. However, direct knowledge on the susceptibilities of natural populations to Muller’s Ratchet and the molecular mechanisms underlying this process remain enigmatic.
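The ratchet described in the quotation is easy to see in a toy simulation (a minimal sketch with made-up parameters, not taken from any of the cited studies): each individual in a small asexual population is reduced to a count of deleterious mutations, and with no recombination and no back mutation the least-loaded class, once lost by drift, can never be rebuilt.

```python
import random

def mullers_ratchet(pop_size=50, mu=0.3, s=0.02, generations=500, seed=1):
    """Toy Muller's ratchet for a small asexual population.

    Each individual is just a count of deleterious mutations. With no
    recombination and no back mutation, the least-loaded class can be
    lost by drift but never reconstituted, so the minimum load can only
    stay level or "click" upward. All parameters are illustrative.
    """
    rng = random.Random(seed)
    pop = [0] * pop_size                      # everyone starts mutation-free
    min_load = []
    for _ in range(generations):
        # Selection: fitness (1 - s)^k; sample the next generation's parents.
        weights = [(1 - s) ** k for k in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Mutation: each offspring gains a new deleterious hit with prob mu.
        pop = [k + (1 if rng.random() < mu else 0) for k in parents]
        min_load.append(min(pop))
    return min_load

history = mullers_ratchet()
```

In this toy model the minimum load never decreases, and the mutation-free class is quickly lost; recombination, which can reassemble least-loaded genotypes from separate lineages, is exactly what the model leaves out.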

Every time this subject is raised Darwinists respond with “but what about sexual recombination?”. Unfortunately this is waved like a magic wand, without any accompanying math combined with empirical observations (as in, not just speculative scenarios that may or may not occur in nature regularly enough to be effective) to justify that an overall downward trend is averted (notice I say “overall” since obviously there are exceptions).

We have a description of Sally Otto’s talk at ESEB which discusses this problem:

One of the big mysteries in evolutionary biology has been how sex evolved. John Maynard Smith pointed out in the 1960s that it really shouldn’t have – there’s a huge cost to any gene (because with sex it only has a 50% chance of being passed on), so a modifier that stops sex and has a 100% chance of being passed on will be fitter. Since then a lot of people have been worrying about this problem. In her plenary talk, Sally Otto talked about recent work that suggests we are close to a resolution of the problem.

There have been a couple of explanations that have been around for some time. The first is that sex helps evolution because it breaks up bad combinations of genes, particularly when the disadvantages are magnified, so that the cost of carrying two bad genes is more than the cost of carrying one bad gene twice (technically this is called epistasis). This does give sex an advantage, but it’s small and only occurs under limited and unlikely conditions.

The second explanation is the Red Queen hypothesis, again. A species is being subjected to all sorts of attacks (pathogens, parasites etc.), which are co-evolving with them, so there is a constant arms race (this is the Red Queen bit). A species evolves defenses, and sex can help combine them together, to increase the speed at which the species runs away from its enemies. This has some empirical support, but Otto showed that the theoretical results suggested it only worked under a narrow set of circumstances.

She then introduced a third idea – to look at finite populations. All of the previous work she had presented had been done assuming infinite populations. But in a finite population gene combinations can be combined randomly by genetic drift, and also not every combination of genes will be present in the population. Sex can then work to combine gene combinations and give an advantage. Adding the Red Queen improves the advantage (and I suspect that any sort of environmental variation will give an advantage to sex, more work needs to be done etc.).

Sounds speculative… Let’s assume these scenarios prevent, or at least greatly slow, a complete meltdown except in isolated cases. So genetic entropy may only kill off certain species; otherwise it’s just a “trimming effect” where certain “unnecessary” features are lost (Sal’s example of infrared vision in humans). But where is the evidence that such scenarios (by themselves, without intelligently designed conservation/repair functionality) could maintain stasis long enough to allow for the constructive beneficial mutations that lead to macroevolution in an indirect stepwise pathway?
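As an aside, the “cost of sex” puzzle Otto’s talk starts from is easy to state numerically. Here is a minimal deterministic sketch (my own toy model, not from her talk): a modifier allele causing asexual reproduction is transmitted to all offspring while a sexual allele reaches only half, so with equal fecundity its frequency follows p' = 2p/(1 + p) and sweeps toward fixation.

```python
def asexual_modifier_sweep(p0=0.001, generations=30):
    """Deterministic toy model of the twofold cost of sex.

    An allele causing asexual reproduction is passed to ALL offspring,
    while a sexual allele reaches only half of them. With equal
    fecundity, the modifier's frequency follows p' = 2p / (1 + p).
    """
    p = p0
    trajectory = [p]
    for _ in range(generations):
        p = 2 * p / (1 + p)
        trajectory.append(p)
    return trajectory

# Starting from 0.1%, the modifier roughly doubles each generation while
# rare and approaches fixation within a few dozen generations.
traj = asexual_modifier_sweep()
```

That near-doubling per generation is why any proposed advantage of sex has to be large and persistent just to break even, which is the bar the explanations above are being measured against.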

To put this question into context, recent estimates put the similarity between Human and Chimp at around 70 percent, i.e., roughly a 30 percent difference. The estimates for a split vary, with some pushing it back further based upon a tooth, and others placing it at around 4.1 million years based upon another analysis.

According to Nachman and Crowell, “[t]he average mutation rate was estimated to be ~2.5 x 10^-8 mutations per nucleotide site or 175 mutations per diploid genome per generation.” A quote from Sanford summarizing further studies:

One of the most astounding recent findings in the world of genetics is that the human mutation rate (just within our reproductive cells) is at least 100 nucleotide substitutions (misspellings) per person per generation (Kondrashov, 2002). Other geneticists would place this number at 175 (Nachman and Crowell, 2000). These high numbers are now widely accepted within the genetics community. Furthermore, Dr. Kondrashov, the author of the most definitive publication, has indicated to me that 100 was only his lower estimate — he believes the actual rate of point mutations (misspellings) per person may be as high as 300 (personal communication). Even the lower estimate, 100, is an amazing number, with profound implications. When an earlier study revealed that the human mutation rate might be as high as 30, the highly distinguished author of that study concluded that such a number would have profound implications for evolutionary theory (Neel et al. 1986). But the actual number is now known to be 100-300!

Let’s run a quick calculation:

960,000,000 bases (a 30% difference across a roughly 3.2-billion-base genome) / 4,100,000 years = 234.15 bases that must be fixed in the population per year, a figure that rivals even the high per-generation estimates of the human mutation rate (100-300 new mutations per person per generation).

But that does not consider functionality or the ratio of beneficial to total mutations, which Gerrish and Lenski very roughly estimate at one in one million. So even if we presume that only 1% of those 960,000,000 differing bases are functional, that still leaves roughly 2.3 constructive (non-deleterious) beneficial mutations per year, on average, required by Darwinism.
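The back-of-envelope numbers above can be reproduced in a few lines. The inputs are the ones the text uses; the 3.2-billion-base genome size is the assumption implied by the 960,000,000-base (30%) difference figure.

```python
# Inputs as used in the text; genome size is the implied assumption.
genome_size = 3_200_000_000        # bases, approximate human genome
difference_fraction = 0.30         # assuming ~70% human-chimp similarity
split_years = 4_100_000            # assumed divergence time, in years

differing_bases = genome_size * difference_fraction    # ~960,000,000
fixations_per_year = differing_bases / split_years     # about 234.15

functional_fraction = 0.01         # assume only 1% of differences matter
functional_per_year = fixations_per_year * functional_fraction  # about 2.34
```

Note that this treats every differing base as an independent fixed substitution, which is itself a simplification (indels and segmental changes move many bases at once).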

At this point you might object, saying that you thought ID was compatible with Common Descent? You would be correct. Unfortunately, when it comes to Intelligent Evolution, research into related hypotheses has barely begun. So I could only posit potential mechanisms like front-loading, and not identify any specific data as a justification for positing Intelligent Common Descent between Chimp and Human.

Final Warning

If you are making the above arguments, your understanding of the subject matter is in error. The simplistic view of Darwinism of the past that you apparently adhere to is wrong. The arguments you are attempting to regurgitate will quickly earn you scorn. Please read the scientific literature produced by both Darwinists and ID proponents before continuing your association here at Uncommon Descent.

On a humorous note, in 2007 the UD database suffered corruption by which this page was hit by a series of point mutations to the text. All of them were deleterious. Of course, we have cleaned them up by erasing those remnants of Darwinian processes.

