Uncommon Descent: Serving The Intelligent Design Community

Who Says Darwinists Don’t Make Predictions


. . . so long as the predicted event is safely 100,000 years in the future:

Human race will split into two different species

The human race will one day split into two separate species, an attractive, intelligent ruling elite and an underclass of dim-witted, ugly goblin-like creatures, according to a top scientist. 100,000 years into the future, sexual selection could mean that two distinct breeds of human will have developed. The alarming prediction comes from evolutionary theorist Oliver Curry . . . Dr Curry's theory may strike a chord with readers who have read H G Wells' classic novel The Time Machine, in particular his descriptions of the Eloi and the Morlock races. In the 1895 book, the human race has evolved into two distinct species, the highly intelligent and wealthy Eloi and the frightening, animalistic Morlocks, who are destined to work underground to keep the Eloi happy.

Now if only ID theorists would make a testable prediction, something like "over many thousands of generations natural selection will account for only extremely modest changes in the malaria parasite's genes and will be unable to cause any increase in genetic information."  Oh, wait a minute, that prediction was made and confirmed.

Comments
Mickey:
Now further suppose that some cultural anthropologist pays a visit and finds the objects lined up in a neat row. He may assume from the neatness of the display that it was deliberate, but he has no way of knowing anything about the order of the objects. With the evidence at hand, he can see a form of order--the alignment of the objects--but that's all. He would have no basis for thinking that the placement of the objects was the result of any type of deliberate ordering--it appears to be random, even if it wasn't, because he knows nothing about the significance of the order.
Seems to me this is entirely possible and consistent with the idea of specified complexity. In your thought experiment, the arrangement is specified but the anthropologist is not able to recognize it as such. Therefore he is unable to infer intelligent agency (at least not as much as a cultural insider would). But as others have pointed out, in biological systems we *do* know some things about what patterns will be meaningful: for example, arrangements that function well, and especially ones that require all their parts to work well. So we are unlike the anthropologist; we *can* recognize the specificity of certain patterns (but there may be patterns whose meaning we're unaware of).
lars
October 30, 2007 at 01:49 AM PDT
Mickey: You say: "If we encounter a deck of cards (or any other group of 52 unique objects), the probability that they will be in *any* particular order is 1 in 52!. Thus it seems possible to say, in encountering even a well-shuffled deck, that some sort of miracle or intelligent agency must have been involved, because there is only a 1 in 52! chance that they could be in that particular order."

Briefly, because this subject has already been discussed many times: the example of the deck of cards completely misses the point. The point of Dembski's concept of CSI is: complexity "plus" specification. Each of the possible combinations of your deck of cards is a legitimate example of complexity, because each one is very unlikely, but only a tiny subset of combinations can be said to be specified in one way or another. Therefore, only that tiny subset of combinations is specified and exhibits CSI. The others are random.

Now, you can ask what specification means. That's really the big question, and the answer is not necessarily simple or final, and specification is often context dependent, but that does not mean that clear answers have not been given. Please read Dembski, especially his paper on specification, on his site. Again briefly, I'll give here my simple personal view of specification, just to discuss.

Specification is everything which allows us to "recognize" a subset of combinations of a system as not random. It has a strict relationship with the more general (and equally elusive) concept of meaning (at least in its cognitive sense). Specification can be of at least 3 different kinds:

1) Pre-specification: we can recognize a pattern because we have seen it before. In this case, the pattern in itself is probably random, but its occurrence "after" a pre-specification is a sign of CSI (obviously provided that complexity is also present, but that stays true for each of the cases).

2) Compressibility: some mathematical patterns of information are highly compressible, which means that they can be expressed in a much shorter sequence than their original form. That is the case, for instance, for numbers like 3333333333, which can be written as "10 times 3". Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand. In this case, specification is in some way intrinsic to the specific pattern of information; we could say that it is inherent in its mathematical properties.

3) Finally, there is perhaps the most important form of specification, at least for our purposes: specification by function. A few patterns of information are specified because they can "do" something very specific, in the right context. That's the case for proteins, obviously, but also for computer programs, or algorithms in general. In this case specification is not so much a characteristic of the mathematical properties of the sequence as of what the sequence can realize in a particular physical context: for example, the sequence of an enzyme is meaningless in itself, but it becomes very powerful if it is used to guide the synthesis of the real protein, and if the real protein can exist in a context where it can fulfill its function. Function is very objective evidence of specification, because it does not depend on pre-conceptions of the observer (at least, not more than any other concept in human knowledge).

So, this is the theoretic frame of CSI: complexity "plus" specification.
And, obviously, the absence of any known mechanical explanation of the specified pattern in terms of necessity (that is, we are observing apparently random phenomena). The summary is: a) if you have a very complex pattern (very unlikely); and b) if no explanation of that pattern is known in terms of necessity on the basis of physical laws (in other words, if that pattern is equally likely as all other possible patterns, in terms of physical laws, and is therefore potentially random); and c) if that pattern is recognizable as specified, in any of the ways I have previously described: then we are witnessing CSI, and the best empirical explanation for it is an intelligent agent. It's just that simple.
gpuccio
October 29, 2007 at 03:30 PM PDT
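To make gpuccio's compressibility point (kind 2 above) concrete, here is a minimal sketch in C with zlib (compile with -lz); this is an editor's illustration, not part of the original comment. It compares the compressed size of a "10 times 3"-style repeated string with a random digit string of the same length:

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    int main(void) {
        unsigned char rep[100], rnd[100], out[200];
        uLongf outlen;

        srand(1);
        for (int i = 0; i < 100; i++) {
            rep[i] = '3';                                /* "333...3": expressible as "100 times 3" */
            rnd[i] = (unsigned char)('0' + rand() % 10); /* no shorter description than itself */
        }

        outlen = sizeof out;
        compress(out, &outlen, rep, sizeof rep);         /* zlib's one-shot deflate */
        printf("repeated digits: 100 -> %lu bytes\n", (unsigned long)outlen);

        outlen = sizeof out;
        compress(out, &outlen, rnd, sizeof rnd);
        printf("random digits:   100 -> %lu bytes\n", (unsigned long)outlen);
        return 0;
    }

The repeated string shrinks to a handful of bytes while the random one barely shrinks at all, which is the intuition behind treating compressibility as one mark of specification.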
reposting the code sample:

    #include <stdio.h>

    int main(void) {
        const char *suit[] = { "hearts", "diamonds", "spades", "clubs" };
        const char *rank[] = { "ace", "2", "3", "4", "5", "6", "7",
                               "8", "9", "10", "jack", "queen", "king" };
        for (int i = 0; i < 4; i++)          /* every suit...             */
            for (int j = 0; j < 13; j++)     /* ...paired with every rank */
                printf("%s, %s\n", suit[i], rank[j]);
        return 0;
    }

Apollos
October 29, 2007 at 02:36 PM PDT
Mickey, but wouldn't this arrangement still fall within a tiny minority of combinations that have significance? I think so. Your argument seems to redefine the discussion. Besides, the properties of your alter-cultural display still betray agency, even if the message is not understood. Whether or not some other arrangement might have significance to another culture still doesn't address that the probability of arriving at one of those arrangements randomly is vanishingly small. Also, I may not understand the meaning of the arrangement, but I could still identify the involvement of agency with astonishing reliability. Finding things in neat rows, when arrangement by row is not a property inherent to the objects, is a clear indicator of design. Another thing to consider, however, is the set of properties intrinsic to a rank/suit arrangement. I touched on this briefly in my previous post.
Just to note, the rank/suit ordering of the deck is not only meaningful, it’s compressible and subject to simple semantic expression. These features are not shared by more than a few other arrangements.
The rank/suit arrangement conforms to logical patterns that are a property of the deck's design. A deck of cards has 4 categories of repeating indices from 1 to 13. This can be expressed this way: suit = "hearts", "diamonds", "spades", "clubs"; rank = "ace", 2...10, "jack", "queen", "king"; for(i=1; i<=4; i++) for(j=1; j<=13; j++) output(suit[i], rank[j]); Without logical arrangement, the expression of a deck of cards could not be reduced to code semantics. Therefore this arrangement is the logical expression of the deck's design, and is fairly unique by nature -- exhibiting properties unshared with other arrangements. A very small percentage of other meaningful arrangements could be expressed semantically, and are likewise compressible; however, the "random" combinations that make up the majority of possibilities will not exhibit these properties. This gives a tiny minority of patterns properties not shared by the majority, making design detection of these arrangements objectively possible without equivocation.
Apollos
October 29, 2007 at 02:34 PM PDT
Mickey,
With the evidence at hand, he can see a form of order--the alignment of the objects--but that's all. He would have no basis for thinking that the placement of the objects was the result of any type of deliberate ordering--it appears to be random, even if it wasn't, because he knows nothing about the significance of the order.
Try reading Dembski's books. That's called a false negative, which is a known minor issue with formalized design detection. But we're really only concerned with whether there is a false positive. While some specifications are context sensitive, other specifications are independent of culture and the like. The flagellum provides motility, for example.
Patrick
October 29, 2007 at 01:57 PM PDT
Hi Apollos-- Suit/rank is only meaningful to an observer who recognizes its significance. Thus to say that "You don't get to impose the pattern after the deck is shuffled and revealed" is correct, but misses the point. There must be prior knowledge of the significance of ordered relationships in order to be able to recognize them as ordered. This seems fundamental to me. To illustrate my point, let's forget about decks of cards for the moment and take some other group of 52 unique objects. Let's say that in some isolated culture, a particular ordering of these things has some cultural or religious significance, and the members of the culture all recognize it as such. Now further suppose that some cultural anthropologist pays a visit and finds the objects lined up in a neat row. He may assume from the neatness of the display that it was deliberate, but he has no way of knowing anything about the order of the objects. With the evidence at hand, he can see a form of order--the alignment of the objects--but that's all. He would have no basis for thinking that the placement of the objects was the result of any type of deliberate ordering--it appears to be random, even if it wasn't, because he knows nothing about the significance of the order.
Mickey Bitsko
October 29, 2007 at 01:39 PM PDT
Mickey said:
"If we find a deck of cards ordered by rank and suit, there is an assumption that they were ordered that way intentionally, but only because that particular order is meaningful to the observer."
Exactly. If we find a deck of cards ordered by rank and suit, we can assume without doubt that they were ordered by agency, especially because that order is meaningful to the observer. The meaning isn't arrived at after the fact. The arrangement conforms to a preexisting pattern. It's not as if meaning is derived after the deck is shuffled. Only specific arrangements have meaning and could reasonably be attributed to agency. The fact that any arrangement is equally improbable is irrelevant.
If we encounter a deck of cards (or any other group of 52 unique objects), the probability that they will be in *any* particular order is 1 in 52!. Thus it seems possible to say, in encountering even a well-shuffled deck, that some sort of miracle or intelligent agency must have been involved, because there is only a 1 in 52! chance that they could be in that particular order.
The tautology is introduced by your imposition of a straw man. You don't get to impose the pattern after the deck is shuffled and revealed. According to your wording of the issue, there is a probability of nearly 1 that the deck will be reordered after it's shuffled. There's no miracle there. "Sufficiently shuffling the deck will sufficiently randomize its order." That is the tautology, and it thereby says nothing meaningful. "After the deck is sufficiently shuffled, the deck will be ordered by rank and suit." That's the miracle, and the reason why this analogy is appropriate to CSI. Just to note, the rank/suit ordering of the deck is not only meaningful, it's compressible and subject to simple semantic expression. These features are not shared by more than a few other arrangements.
Apollos
October 29, 2007 at 12:30 PM PDT
Mickey Bitsko, I think this may help you understand. What makes an event improbable in biology is that a particular order (shape space) in a particular protein is required to be generated to match the configuration of other protein shape spaces in order to accomplish a specific novel task. Maybe the following article will help you understand a bit better than that general description:

The simplest bacteria ever found on earth are constructed with over a million protein molecules. Protein molecules are made from one-dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins. These one-dimensional sequences of amino acids fold into complex three-dimensional structures. The proteins vary in the length of their sequences of amino acids. The average sequence of a typical protein is about 300 to 400 amino acids long, yet many crucial proteins are thousands of amino acids long.

Proteins do their work on the atomic scale. Therefore, proteins must be able to identify and precisely manipulate and interrelate with the many differently, and specifically, shaped atoms, atomic molecules and protein molecules at the same time to accomplish the construction, metabolism, structure and maintenance of the cell. Proteins are required to have the precisely correct shape to accomplish their specific function or functions in the cell. More than a slight variation in the precisely correct shape of a protein molecule type will be fatal to the life of the cell.

It turns out there is some tolerance for error in the sequence of L-amino acids that make up some of the less crucial protein molecule types. These errors can occur without adversely affecting the precisely required shape of the protein molecule type. This would seem to give some wiggle room to the naturalists, but as the following quote indicates, this wiggle room is an illusion.

"A common rebuttal is that not all amino acids in organic molecules must be strictly sequenced. One can destroy or randomly replace about 1 amino acid out of 100 without doing damage to the function or shape of the molecule. This is vital since life necessarily exists in a "sequence-disrupting" radiation environment. However, this is equivalent to writing a computer program that will tolerate the destruction of 1 statement of code out of 1001. In other words, this error-handling ability of organic molecules constitutes a far more unlikely occurrence than strictly sequenced molecules." Dr. Hugh Ross PhD.

It is easily demonstrated mathematically that the entire universe does not even begin to come close to being old enough, nor large enough, to randomly generate just one small but precisely sequenced 100 amino acid protein (out of the over one million interdependent protein molecules of longer sequences that would be required to match the sequences of their particular protein types) in that very first living bacterium. If any combinations of the 20 L-amino acids that are used in constructing proteins are equally possible, then there are 20^100 = 1.3 x 10^130 possible amino acid sequences for proteins composed of 100 amino acids.

This impossibility of finding even one "required" specifically sequenced protein would still be true even if amino acids had a tendency to chemically bond with each other, which they don't, despite over fifty years of experimentation trying to get amino acids to bond naturally. (The odds of a single 100 amino acid protein overcoming the impossibilities of chemical bonding and forming spontaneously have been calculated at less than 1 in 10^125 (Meyer, Evidence for Design, pg. 75).) The staggering impossibility of the universe ever generating a "required" specifically sequenced 100 amino acid protein by chance would still be true even if we allowed that the entire universe, all 10^80 sub-atomic particles of it, were nothing but groups of 100 freely bonding amino acids, and we then tried a trillion unique combinations per second for all those 100 amino acid groups for 100 billion years! Even after 100 billion years of trying a trillion unique combinations per second, we still would have made only one billion-trillionth of the total combinations possible for a 100 amino acid protein during that 100 billion years of trying!

Even a child knows you cannot put just any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place! The simplest forms of life ever found on earth are exceedingly far more complicated jigsaw puzzles than any of the puzzles man has ever made. Yet to believe a naturalistic theory we would have to believe that this tremendously complex puzzle of millions of precisely shaped, and placed, protein molecules "just happened" to overcome the impossible hurdles of chemical bonding and probability and put itself together into the sheer wonder of immense complexity that we find in the cell.

Instead of just looking at the probability of a single protein molecule occurring (a solar system full of blind men solving the Rubik's Cube simultaneously), let's also look at the complexity that goes into crafting the shape of just one protein molecule. Complexity will give us a better indication of whether a protein molecule is, indeed, the handiwork of an infinitely powerful Creator.

In the year 2000 IBM announced the development of a new supercomputer, called Blue Gene, 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it was estimated it would take one entire year for it to analyze the mechanism by which JUST ONE "simple" protein folds onto itself from its one-dimensional starting point to its final three-dimensional shape. "Blue Gene's final product, due in four or five years, will be able to "fold" a protein made of 300 amino acids, but that job will take an entire year of full-time computing." Paul Horn, senior vice president of IBM research, September 21, 2000 http://www.news.com/2100-1001-233954.html In real life, the protein folds into its final shape in a fraction of a second! The computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. That is the complexity found for JUST ONE "simple" protein.

It is estimated, from the total number of known life forms on earth, that there are some 50 billion different types of unique proteins today. It is very possible the domain of the protein world holds many trillions more completely distinct and different types of proteins. The simplest bacterium known to man has millions of protein molecules divided into, at bare minimum, several hundred distinct protein types. These millions of precisely shaped protein molecules are interwoven into the final structure of the bacterium. Many times, specific proteins in a distinct protein type will have very specific modifications to a few of the amino acids in their sequence in order for them to more precisely accomplish their specific function or functions in the overall parent structure of their protein type.

To think naturalists can account for such complexity by saying it "happened by chance" should be the very definition of "absurd" we find in dictionaries. Naturalists have absolutely no answer for how this complexity arose in the first living cell, unless, of course, you can take their imagination as hard evidence. Yet the "real" evidence scientists have found overwhelmingly supports the anthropic hypothesis once again. It should be remembered that naturalism postulated a very simple "first cell." Yet the simplest cell scientists have been able to find, or even realistically theorize about, is vastly more complex than any machine man has ever made through concerted effort!!

What makes matters much worse for naturalists is that they try to assert that proteins of one function can easily mutate into other proteins of completely different functions by pure chance. Yet once again the empirical evidence we now have betrays the naturalists. Individual proteins have been experimentally proven to quickly lose their function in the cell with random point mutations. What are the odds of any functional protein in a cell mutating into any other functional folded protein, of very questionable value, by pure chance?

"From actual experimental results it can easily be calculated that the odds of finding a folded protein (by random point mutations to an existing protein) are about 1 in 10 to the 65 power (Sauer, MIT). To put this fantastic number in perspective imagine that someone hid a grain of sand, marked with a tiny 'X', somewhere in the Sahara Desert. After wandering blindfolded for several years in the desert you reach down, pick up a grain of sand, take off your blindfold, and find it has a tiny 'X'. Suspicious, you give the grain of sand to someone to hide again, again you wander blindfolded into the desert, bend down, and the grain you pick up again has an 'X'. A third time you repeat this action and a third time you find the marked grain. The odds of finding that marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure (from chance transmutation of an existing functional protein structure). Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed." Michael J. Behe, The Weekly Standard, June 7, 1999, Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other

"Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially... These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its 'new duties'. It is at this point it will be destroyed -- along with the organism carrying it." Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)

Even if evolution somehow managed to overcome the impossible hurdles of generating novel proteins by totally natural means, it would still face the monumental hurdle of generating complementary protein/protein binding sites by which the novel proteins could actually interface with each other in order to accomplish specific tasks in the cell (it is estimated that there are at least 10,000 different types of protein-protein binding sites in a "simple" cell). What does the recent hard evidence say about novel protein-protein binding site generation, from what is actually observed to be occurring on the protein level in malaria and HIV since they have infected humans? Once again the naturalists are brutally betrayed by the hard evidence that science has recently uncovered!

"The likelihood of developing two binding sites in a protein complex would be the square of the probability of developing one: a double CCC (chloroquine complexity cluster), 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the entire world in the past 4 billion years, so the odds are against a single event of this variety (just 2 binding sites being generated by chance) in the history of life. It is biologically unreasonable." Dr. Michael J. Behe PhD. (from page 146 of his book "Edge of Evolution")

Mickey, I hope that helps explain why just any random event can't be considered a complex specified event.
bornagain77
October 29, 2007 at 09:47 AM PDT
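The combinatorial figures bornagain77 quotes above are straightforward to reproduce. A minimal sketch (an editor's illustration, not part of the comment), assuming 20 equally likely amino acids and working in log space so nothing overflows:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* 20 amino acid choices at each of 100 positions */
        double log10_sequences = 100.0 * log10(20.0);
        printf("20^100 = 10^%.1f possible sequences\n", log10_sequences);  /* ~10^130.1 */

        /* Behe's double-CCC estimate: squaring a 1-in-10^20 event */
        double log10_double_ccc = 20.0 + 20.0;
        printf("10^20 * 10^20 = 10^%.0f\n", log10_double_ccc);             /* 10^40 */
        return 0;
    }

The first line confirms the comment's 1.3 x 10^130 figure; the second is just the squaring step quoted from Edge of Evolution (compile with -lm).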
BarryA, @#38: We seem to be talking past one another, in that the argument you quote from R. Totten seems to assume its own conclusion; thus I still don't know how the tautology may be logically escaped. If we find a deck of cards ordered by rank and suit, there is an assumption that they were ordered that way intentionally, but only because that particular order is meaningful to the observer. The cards being ordered by rank and suit is, in fact, a state that is no more or less likely than any random order. You have to understand that my personal faith as a Christian is not swayed in any way by my struggles to reconcile the reality of Intelligent Design with what appear to me to be attempts to force round pegs into square holes. If there's a difference between what my faith tells me and what I actually observe, I know that what I'm *able* to observe is severely limited by my human condition.
Mickey Bitsko
October 29, 2007 at 07:36 AM PDT
Leo, you called me to task to prove my assertion that Genetic Entropy is a foundational principle of science. I refer you to kairosfocus's work On Thermodynamics, Information and Design http://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#thermod and I also refer you to Dr. Dembski's work on Conservation of Information: http://cayman.globat.com/~trademarksnet.com/T/ActiveInfo.pdf I have to humbly admit that much of the math is beyond me, but I am sure that if you have any questions, the authors themselves, or someone who has a better grasp of the details than I, will be more than happy to answer them on this site!
bornagain77
October 29, 2007 at 06:26 AM PDT
Very good responses, guys! Thanks for the correction, Patrick; I will be careful to say the tentative limit of 2 protein/protein binding sites set by Dr. Behe, and not say concrete limit. Thanks for the info on the second law, kairosfocus; I will dig through it, along with Granville Sewell's, later today to shore up my logic on Genetic Entropy. Thanks for the linked site on abiogenesis, BarryA; there is a lot of good stuff in there that I will dig through and make use of also.
bornagain77
October 29, 2007 at 05:11 AM PDT
The interesting thing about the Darwinist commentators on Amazon is that they were so focused on "we must prove Behe to be wrong somehow" that they failed to realize they were shooting themselves in the foot. If CQ resistance did indeed come about by a 2-part gradual scenario, then all that does is make this example of the "all-mighty powers of Darwinian mechanisms" even more trivial than before! After all, a direct stepwise scenario is much more likely to occur than one that requires simultaneous changes or an indirect pathway. Yet even then, Darwinian mechanisms have a hard time bringing about such a change, even with the extremely high number of replications (in comparison to higher animals). (BTW, I would rank them in order of difficulty from easiest to hardest: direct gradualist, indirect gradualist, direct multiple/simultaneous, and then a combination of gradual changes combined with indirect multiple/simultaneous.) Now I have seen excerpts where scientists hypothesize gradualistic scenarios...
Current evidence from transfection studies (71, 187) strongly suggests that the mechanism of P. falciparum resistance to CQ is linked to mutations in the pfcrt gene, especially the substitution of threonine for lysine at position 76. However, other mutations in the pfcrt gene at positions 72 to 78, 97, 220, 271, 326, 356, and 371, as well as mutations in other genes such as pfmdr1, might be involved in the modulation of resistance (173, 223). CQ resistance seems to involve a progressive accumulation of mutations in the pfcrt gene, and the mutation at position 76 seems to be the last in the long process leading to CQ clinical failure (53, 92).
...but I have not seen a direct statement of certainty (anyone care to supply a link?). But in the minds of these Darwinists, just because other scientists are discussing other scenarios for generating CQ resistance, that "must" mean Behe is lying.... This back and forth made me shake my head in exasperation:
If you want to refute Behe on this point, what you need to prove is that (1) CQ resistance actually demands more than two mutations. (To show that potentially profitable mutations arise more frequently than Behe claims)
A Darwinist responds:
Which leads me to think you don't understand what everybody is talking about in regards to CQ resistance. Behe's false assertion is that CQ resistance requires two SIMULTANEOUS mutations to occur. The reality is that the published literature clearly shows that the mutations for CQ resistance occur gradually, one mutation at a time. No one has said anything about CQ resistance needing more then 2 mutations. For one thing, that would be HELPING Behe's claim, not refuting it. As such, your claim that in order to "refute Behe" I would have to show that CQ resistance requires more then 2 mutations makes no sense. The whole point of this little exercise is to point out that Behe's false assertion greatly exaggerates the difficulty in CQ resistance by claiming that both mutations have to happen simultaneously.
A big "no duh" here...3 or more simultaneous mutations would put Darwinism in a better light. Another:
However, Professor Behe does not offer a scientifically credible means of demonstrating how Intelligent Design could account for "common descent".
That was outside the scope of the EoE book, but I'm guessing that commentator has not bothered to read other ID writers. Otherwise, many of the commentators do not seem to have bothered to read what Behe had said previously:
Incidentally, this bears on Coyne’s comment on Miller’s review that “one of the two mutations that Behe claims are ‘required’ for CQR is not actually required (Chen et al. 2003, reference accidentally omitted from Miller’s piece).” If you read that paper you see that, yes, A220S is not found in some resistant strains, as it is in most. (By the way, I was always quite careful in my book to state that A220S had been found in most strains, because I was quite aware of the several exceptions.) However, one also reads that the strains missing A220S have several other, novel mutations, which may be playing a comparable role in them that the mutation at position 220 plays in most other strains. My argument does not depend on exactly which changes are needed in the protein. Rather, the important point is that multiple changes appear to be required for resistance in the wild.
bg77,
could you just show me complexity being generated in the real world that would violate the concrete limit of 2 protein/protein binding sites being generated by Dr. Behe in Edge of Evolution.
Concrete limit? Don't be giving Darwinists more strawmen. It's an "estimate" based upon observed evidence. Throughout the history of life there "might" have been instances of 3-6. Or there might be very limited scenarios where more can be accomplished. Saying it's concrete in general goes too far. Oh, I noticed a basic error in the front page post:
over many thousands of generations natural selection...will be unable to cause any increase in genetic information
ID proponents should always be careful not to just say "information" or "complex information". Yes, I know Barry meant complex specified information, but an increase in information in general can and does occur. Newcomers to ID not familiar with the language employed on UD might be put off by such a broad statement.
Patrick
October 29, 2007 at 04:03 AM PDT
Leo [and others]: Re: thermodynamics, information, entropy and bio-functional CSI.

Have a look at Appendix A [and its context and the onward links] in my always-linked, through my name in the left column. I think you will find that since the nanotech of life is based on molecules which can potentially be in very large config spaces, statistical mechanical considerations -- thus entropy etc. -- apply, and that when such systems are opened up to raw energy flows, that naturally tends to INCREASE their entropy.

So, spontaneous origin of the CSI seen in life forms is statistically so unlikely on the gamut of the observed universe that its probability is negligibly different from zero. The same holds for the increments in information and functionality required for the body-plan level biodiversity we observe. The odds that both originated by the sort of processes envisioned in evolutionary materialist mechanisms are therefore so close to zero as makes no practical difference.

So, on inference to best explanation relative to the world we actually observe [I am here underscoring that the speculative quasi-infinite cosmos-as-a-whole models are metaphysics, not physics], agency is the best explanation of both life and biodiversity at the body plan level. For, on routine and general observation, agents are the known cause of CSI. In turn, that traces to the classic trichotomy of causal forces long since documented by Plato in Book X of his The Laws: chance, mechanical necessity, agency. (An excerpt is in Appendix B. I am now adding "mechanical" as I have always been a little uncomfortable with "necessity" alone. Not sure who it was I first saw using it here at UD.)

Highly contingent situations are not dominated by [mechanical] necessity, and chance runs out of probabilistic resources once we see informational complexity greater than about 500 - 1,000 bits, as per a Dembski UPB type calculation. Even the lower end of life forms is about 1 Mbit long, and the human genome is about 6 gigabits long, as each 4-state base pair holds up to about 2 bits of information.

BTW, on Genetic Entropy: it seems to me that malaria parasites and bacteria replicate themselves in vast numbers and have very large populations in general, many of which will be genetically fairly close to "the original." Winnowing out through the sort of functionality collapse that has been discussed would seem to be a mechanism for preserving the genome. That is, a population with the near-original information is likely to be preserved, and functionality-damaged variants which may survive for a time in niches will not in the long run. [Cf. here the rise of hospital superbugs that can't compete with the originals in the wider world.]

Hope that helps. GEM of TKI
kairosfocus
October 29, 2007 at 12:25 AM PDT
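kairosfocus's bit counts above follow directly from a 4-letter alphabet. A minimal sketch (an editor's illustration, not part of the comment), assuming a roughly 3 billion base pair human genome:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double bits_per_bp = log2(4.0);   /* a 4-state base pair carries up to 2 bits */
        double human_bp    = 3.0e9;       /* assumed approximate human genome size    */
        printf("bits per base pair: %.0f\n", bits_per_bp);
        printf("human genome: ~%.1f gigabits\n", human_bp * bits_per_bp / 1e9);
        return 0;
    }

With these inputs the program prints about 6 gigabits, matching the figure in the comment (compile with -lm).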
leo, as soon as I saw a Monty Python cartoon appearing in Sean Carroll's review of Edge of Evolution I stopped reading. Anyone who needs to resort to Monty Python in a scientific argument can be safely ignored as not having a leg to stand on.
DaveScot
October 28, 2007 at 11:47 PM PDT
Mickey, I'll take one more run at answering your question. There are about 10^68 different combinations that you can make with a deck of cards. It is true that any particular shuffle will result in only one of those 10^68 combinations and is exceedingly unlikely. But that misses the point. R. Totten answers this objection this way:

"The card-shuffling illustration assumes that basically ANY ordering of the cards is an acceptable outcome -- and, comparing it to life-chemistry, this would be the equivalent of saying that almost any ordering of the amino acids would work to build a functional protein. So, whatever one might randomly come up with is basically "easy" to achieve -- no matter how "unlikely" the probability calculations might make it seem.

"However, the critic unwittingly brings out the correct perspective when he says we are basically looking for one "particular ordering of the cards" -- because the research just previously cited in this article (esp. from Behe) points out that -- in reality -- only about one specific sequence of amino acids out of 10^60 possible sequences is adequate to produce a properly folding protein which could be used by actual life. The rest are junk, and useless to life.

"Therefore -- to more accurately represent the life-chemistry situation -- the card-illustration should actually be restricted to say that there are only a few specific orderings of the cards which are the acceptable outcomes of the random shuffles of cards. That is, only about 24 out of the 10^68 possible outcomes will do. -- For example, the only good outcomes in cards would be: a well-shuffled deck must randomly end up with all four suits in proper numerical order starting with the Ace, then the 2, then the 3, etc., on up through to the King. All four suits must be so ordered. -- Specificity is required."

The whole article is here: http://www.geocities.com/Athens/Aegean/8830/mathproofcreat.html It is interesting reading.
BarryA
October 28, 2007 at 09:55 PM PDT
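BarryA's 10^68 figure for deck orderings above can be checked in a couple of lines. A minimal sketch (an editor's illustration, not part of the comment), using the log-gamma function so the factorial never overflows:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* lgamma(n + 1) = ln(n!), so ln(52!)/ln(10) = log10(52!) */
        double log10_52_factorial = lgamma(53.0) / log(10.0);
        printf("52! is about 10^%.1f\n", log10_52_factorial);   /* ~10^67.9 */
        return 0;
    }

This prints roughly 10^67.9, i.e. "about 10^68" as the comment says (compile with -lm).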
Leo, Behe's response to Sean Carroll in Science: http://www.amazon.com/gp/blog/post/PLNKWOEF4DT51SV2

Almost the same day that The Edge of Evolution was officially released, Science published a long, lead review by evolutionary developmental biologist Sean Carroll, whose own work I discuss critically in Chapter 9. The review is three parts bluster to one part substance, which at least is more substance than Jerry Coyne's essay. Here I'll ignore the bluster and deal with the substantive points.

Carroll first covers his rhetorical bases by warning readers that "Unfortunately, [Behe's] errors are of a technical nature and will be difficult for lay readers, and even some scientists (those unfamiliar with molecular biology and evolutionary genetics), to detect. Some people will be hoodwinked. My goal here is to point out the critical flaws in Behe's key arguments and to guide readers toward some references." So, you see, if Carroll's reasoning doesn't sound right, well, maybe that's because you, dear reader, are too slow to understand him. If that's the case, you're supposed to just take his word for it. Unfortunately, his word is demonstrably questionable. He claims that Behe's chief error is minimizing the power of natural selection to act cumulatively: "Behe states correctly [my emphasis] that in most species two adaptive mutations occurring instantaneously at two specific sites in one gene are very unlikely and that functional changes in proteins often involve two or more sites. But it is a non sequitur to leap to the conclusion, as Behe does, that such multiple amino acid replacements therefore can't happen."

But I certainly do not say that multiple amino acid replacements "can't happen". A centerpiece of The Edge of Evolution is that it can and did happen. I stress in Chapter 3 that in the case of malarial resistance to chloroquine, multiple necessary mutations did happen in the membrane protein PfCRT. I also of course emphasize that it took a huge population size, one that would not be available to larger organisms. But Carroll seems uninterested in making distinctions.

Carroll cites several instances where multiple changes do accumulate gradually in proteins. (So do I. I discuss gradual evolution of antifreeze resistance, resistance to some insecticides by "tiny, incremental steps -- amino acid by amino acid -- leading from one biological level to another", hemoglobin C-Harlem, and other examples, in order to make the critically important distinction between beneficial intermediate mutations and detrimental intermediate ones.) But, as Carroll might say, it is a non sequitur to leap to the conclusion that all biological features therefore can gradually accumulate. Incredibly, he ignores the book's centerpiece example of chloroquine resistance, where beneficial changes do not accumulate gradually.

As a "second blunder", he asserts I overlook proteins that bind to "short linear peptide motifs" of two or three amino acids. I'll get to that in a second. Notice, however, that here he is writing simply of a sub-class of protein binding sites, and never gets around to dealing with the question of how the majority of binding sites, those with interacting folded domains, developed. I assume that's because he has no answer. Carroll lets his imagination run wild. He thinks it would be child's play for random processes to develop binding sites, at least for the sub-category of short peptide motif binding: "Very simple calculations indicate how easily such motifs evolve at random. If one assumes an average length of 400 amino acids for proteins and equal abundance of all amino acids, any given two-amino acid motif is likely to occur at random in every protein in a cell."

Wow, every protein in the cell will have a binding site! Methinks Carroll has just stumbled over an embarrassment of riches. If every protein (or even a large fraction of proteins) had such a binding site, then binding would essentially be non-specific. (It would be much like, say, the case of the digestive enzyme trypsin, which binds and cuts proteins wherever there is the amino acid lysine or arginine.) As I make clear in The Edge of Evolution, the problem the cell faces is not just to have protein binding sites (which could simply be large hydrophobic patches), but to bind specifically to the right partner. In fact, if one takes the trouble to look up the references Carroll cites, one sees that a short amino acid motif is not enough for function in a cell. For example, Budovskaya et al. (Proc. Nat. Acad. Sci. USA 102, 13933-8, 2005) show that the majority of proteins in the yeast Saccharomyces cerevisiae containing a motif recognized by a particular protein kinase were not phosphorylated by the enzyme. What does that mean? It just means that the simple motifs, while necessary for binding, are not sufficient. Other features of the proteins are necessary, too, features which Sean Carroll ignores.

In his enthusiasm Carroll seems not to have noticed that, as I discuss at great length in my book, no protein binding sites -- neither short linear peptide motifs nor any other -- developed in a hundred billion billion (10^20) malarial cells. Or in HIV. Or E. coli. Or in human defenses against malaria, save that of sickle hemoglobin. Like Coyne, Carroll simply overlooks observational evidence that goes against Darwinian views.

In fact, Carroll seems unable to separate Darwinian theory from data. He writes that "what [Behe] alleges to be beyond the limits of Darwinian evolution falls well within its demonstrated [my emphasis] powers", and "Indeed, it has been demonstrated [my emphasis] that new protein interactions (10) and protein networks (11) can evolve fairly rapidly and are thus well within the limits of evolution." Yet if one looks up the papers he cites, one finds no "demonstration" at all. Those papers show, respectively, that: A) different species have different protein binding sites (but, although the authors assume Darwinian processes, they demonstrate nothing about how the sites arose); or B) different species have different protein networks (but, again, the authors demonstrate nothing about how the networks arose). Like Jerry Coyne, Sean Carroll simply begs the question. Like Coyne, Carroll assumes whatever exists in biology arose by Darwinian processes. Apparently Darwinism has eroded Coyne's and Carroll's ability to separate data from theory.

In fact, the data I cite in The Edge of Evolution is a real demonstration. While we have studied them, in a truly astronomical number of chances, a variety of microbes developed precisely none of the sophisticated cellular mechanisms that Darwinist imaginations ascribe to random mutation and selection. That data demonstrates random mutation doesn't explain the elegance of cellular systems.
bornagain77
October 28, 2007 at 06:14 PM PDT
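Carroll's "very simple calculation" about short motifs, quoted in the comment above, is easy to reproduce. A minimal sketch (an editor's illustration, not part of Behe's reply), assuming equal amino acid frequencies and 400-residue proteins:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double p_site    = 1.0 / (20.0 * 20.0);  /* a given dipeptide at one position */
        double positions = 399.0;                /* dipeptide windows in 400 residues */

        printf("expected occurrences per protein: %.2f\n", p_site * positions);
        printf("chance of at least one: %.2f\n", 1.0 - pow(1.0 - p_site, positions));
        return 0;
    }

The expected count comes out near 1 per protein (the chance of at least one hit is about 0.63), which supports Carroll's arithmetic -- and also Behe's rejoinder that a motif expected in essentially every protein cannot by itself account for specific binding.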
BarryA, that post of Dr. Dembski's doesn't seem to address my question. While he references what he calls "probabilistic resources" in correcting a critic (and rightly so, it seems), my point in responding to DaveScot was that his card deck analogy didn't seem apropos of my question to you, which is in comment #21 above.
Mickey Bitsko
October 28, 2007 at 05:52 PM PDT
Though I haven't read Edge of Evolution, I was looking up this limit that he postulated, and the review in Science would likely do a better job (clearer, more knowledge in that specific area) than me (a lowly cell biologist). http://www.sciencemag.org/cgi/content/full/316/5830/1427#ref10 Please, if this is not what you are looking for, let me know.
leo
October 28, 2007 at 04:56 PM PDT
Bah, 100,000 years in the future. Might as well say a zillion years in the future. We'll all have been raptured by then, and only the servants of Satan will be left behind.
Nochange
October 28, 2007 at 04:16 PM PDT
Leo, Seeing as I am not that literate in math, could you just show me complexity being generated in the real world that would violate the concrete limit of 2 protein/protein binding sites being generated by Dr. Behe in Edge of Evolution?
bornagain77
October 28, 2007 at 01:45 PM PDT
Mickey, you are making a fairly common mistake. Go here to see the answer to your comment: https://uncommondescent.com/intelligent-design/john-derbyshire-i-will-not-do-my-homework/
BarryA
October 28, 2007 at 01:37 PM PDT
Another thought comes to mind regarding the card deck analogy. Keep in mind that I'm an engineer and not a scientist, and my knowledge of science is predictably superficial, although probably better than the average layman's. Nonetheless, my primary exposure is through the Internet and the popular press (Scientific American, e.g.). I do work with statistics and probability, however. If we encounter a deck of cards (or any other group of 52 unique objects), the probability that they will be in *any* particular order is 1 in 52!. Thus it seems possible to say, in encountering even a well-shuffled deck, that some sort of miracle or intelligent agency must have been involved, because there is only a 1 in 52! chance that they could be in that particular order. In other words, it seems to me that the argument from probability is being misused here, because the order that the cards are in doesn't tell us anything about how they got that way. A seemingly random-ordered deck might have been deliberately arranged, and what appears to be a deliberately-ordered deck might have happened randomly.
Mickey Bitsko
October 28, 2007 at 01:08 PM PDT
DaveScot, as I was thinking about your comment in 11 above, it occurred to me that perhaps our difference lies in the fact that you overlooked my use of the term "relatively" in the following sentence: "Another corollary to ID is that a particular organism's genetic code will be relatively stable over many generations." A manmade system may incorporate redundancy and error correction mechanisms to promote stability. Nevertheless, the manmade system will always be only "relatively stable," not absolutely stable. In the same way, the biological systems we observe that have redundancy and error correction mechanisms built in are also only "relatively stable." I do not claim they are absolutely stable. In the larger picture, I see no conflict between my view stated here and genetic entropy. I think I am describing something real here and not simply erecting a semantic dodge to your criticism. Not being a specialist in the area, however, I am open to being shown I am wrong.
BarryA
October 28, 2007 at 12:58 PM PDT
"In a certain sense the development of civilization may appear contradictory to the second law… " But of course, in the REAL sense, it isn't at all. "Each localized, man-made or machine- made entropy decrease is accompanied by a greater increase in entropy of the surroundings, thereby maintaining the required increase in total entropy." Exactly. "According to this logic, then, the second law does not prevent scrap metal from reorganizing itself into a computer in one room, as long as two computers in the next room are rusting into scrap metal–and the door is open." Actually it is nothing like that. Comparing DNA to something like a computer or book or airplane is comparing a single molecule to a mixture of many different types of molecules - something way more complex. Furthermore, no one knows if DNA was made in one step as he is proposing, but books etc are much more complex and took many many more steps to form. He does a lot of equating thermal order with any order, and they are not the same thing. Now, the one interesting thing that he does say is "This is inexplicable–I don’t see any reason why all living organisms do not constantly decay into simpler components–as, in fact, they do as soon as they die." Now, of course we all have mechanisms of self repair - both macroscopically and microscopically. But the interesting question is how this occurred before such systems evolved. We would have to know the degradation rate of DNA (or whatever the system of inheritance was) and compare that to the rate of replication for that system. If the information could be passed on prior to degradation of the physical medium than there would not be a problem. "Now can you demonstrate the generation of CSI by natural means thus establishing evolution as valid?" Can I ask: Whose definition are we talking about? Orgel or Dembski?leo
October 28, 2007 at 12:00 PM PDT
leo, now can you demonstrate the generation of CSI by natural means, thus establishing evolution as valid?
bornagain77
October 28, 2007 at 10:10 AM PDT
Leo, http://www.iscid.org/papers/Sewell_EvolutionThermodynamics_012304.pdf

Of special note:

"But the Earth is an open system, and it is often argued that any increase in order is allowed in an open system, as long as the increase is "compensated" somehow by a comparable or greater decrease outside the system. S. Angrist and L. Hepler [3], for example, write, "In a certain sense the development of civilization may appear contradictory to the second law... Even though society can effect local reductions in entropy, the general and universal trend of entropy increase easily swamps the anomalous but important efforts of civilized man. Each localized, man-made or machine-made entropy decrease is accompanied by a greater increase in entropy of the surroundings, thereby maintaining the required increase in total entropy." According to this logic, then, the second law does not prevent scrap metal from reorganizing itself into a computer in one room, as long as two computers in the next room are rusting into scrap metal -- and the door is open. The spectacular increase in order seen here on Earth does not violate the second law because order is decreasing throughout the rest of this vast universe, so the total order in the universe is surely still decreasing. So I wrote a reply, "Can ANYTHING Happen in an Open System?" [4], to my critics, which was published in the Fall 2001 issue of The Mathematical Intelligencer. In that reply, I first showed (see Appendix) that the second law does not simply require that any increase in thermal order in an open system be compensated for by a decrease outside the system; it requires that the increase in thermal order be no greater than the thermal order entering the open system."

Leo, he has written extensively on this if you want to check out his other writings: http://www.iscid.org/boards/ubb-get_topic-f-10-t-000038.html
bornagain77
October 28, 2007 at 10:06 AM PDT
bornagain, you can't simply state that entropy applies to complex material systems; you have to actually prove it. Entropy does not mean that complex things can't self-assemble. Entropy is simply a MEASURE of disorder. In terms of DNA, entropy can apply to the physical medium (i.e., the degradation of the DNA can be measured in terms of entropy) -- that can be derived from the statistical equation. But to apply it to the information encoded therein is taking a term out of its intended context, and that cannot be derived from the definition (or at least, I have yet to see it -- if you could show me, I would be more than happy to admit that I am wrong).
leo
October 28, 2007 at 09:36 AM PDT
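When leo speaks above of entropy as "simply a MEASURE of disorder" derivable from "the statistical equation", the information-theoretic version is Shannon's H = -sum p*log2(p). A minimal sketch (an editor's illustration, not part of the comment) computing it for a toy DNA string:

    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        const char *dna = "ACGTACGTAAGGTTCCAGTC";   /* hypothetical example sequence */
        int counts[4] = {0};
        int n = (int)strlen(dna);

        for (int i = 0; i < n; i++) {
            switch (dna[i]) {
            case 'A': counts[0]++; break;
            case 'C': counts[1]++; break;
            case 'G': counts[2]++; break;
            case 'T': counts[3]++; break;
            }
        }

        double h = 0.0;                 /* H = -sum p * log2(p), in bits per base */
        for (int k = 0; k < 4; k++) {
            if (counts[k] == 0) continue;
            double p = (double)counts[k] / n;
            h -= p * log2(p);
        }
        printf("Shannon entropy: %.3f bits per base (max 2)\n", h);
        return 0;
    }

This measures only the statistical spread of symbols; as leo notes, it says nothing by itself about whether the sequence carries functional meaning.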
DaveScot, I agree with you totally on your reasoning! I would love to see the foundational work done on Genetic Entropy, thus further clarifying what I truly believe is a foundational principle for biology!!! I believe once the mathematical models are refined for Genetic Entropy, this will clear up a lot of the garbage that evolution has generated and reveal important insights into biology!
bornagain77
October 28, 2007 at 09:14 AM PDT
leo, entropy applies to all complex material systems, i.e., all complex material systems will eventually degrade into equilibrium! In its broader meaning for what we are talking about, entropy means that complex things such as Space Shuttles and genomes will not self-assemble... It is a commonsense inference, as well as a foundational inference!!! I know evolutionists try to elude this direct inference by referring to high rhetoric of closed and open systems, YET since Complex Specified Information is indeed encoded on a complex material system (DNA), scientifically, our first and foremost presumption will be that the complex specified information (CSI) will degrade in accordance with the material degradation (entropy) of the material it is encoded on!!! This is a first-inference postulation of basic principles of science! The second law of thermodynamics and conservation of information (and thus Genetic Entropy) are overriding first principles of science that have primary authority in science; i.e., their inferences are considered valid and can only be overcome by hard evidence. You cannot refute a first principle of science by alluding to high rhetoric! Science runs on evidence, not high rhetoric! You MUST conclusively demonstrate the generation of Complex Specified Information in a material system by totally natural means in order to overcome this inference to Genetic Entropy!!!
bornagain77
October 28, 2007 at 09:08 AM PDT
The evident absence of genetic entropy can't be explained if the mutation rate Sanford uses is correct, but it can be easily explained if the mutation rate in eukaryotes is the commonly given one-in-one-billion chance per nucleotide. P. falciparum's genome size is about 23 million nucleotides. Thus, on average, with an error rate of 1 in 10^9, we can expect 97% of all P. falciparum replications to be perfectly error-free copies. In mammalian cells, with genomes roughly 100 times larger, we can expect only 3% of replications to be perfect copies. This great disparity probably explains why P. falciparum's genome is immune to genetic entropy. ID still explains why P. falciparum failed to evolve any novel complexity: intelligent agency is the only mechanism reasonably capable of generating novel biological complexity. P. falciparum did exactly what we expect in the absence of input from intelligent agency.
DaveScot
October 28, 2007 at 08:38 AM PDT
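DaveScot's percentages above follow from a simple binomial model: with per-nucleotide error rate r and genome size N, the chance of a perfect copy is (1 - r)^N. A minimal sketch (an editor's illustration, not part of the comment; the mammalian genome size is an assumed round number):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double r  = 1e-9;     /* assumed per-nucleotide error rate       */
        double pf = 2.3e7;    /* P. falciparum genome, ~23 million bases */
        double mm = 3.5e9;    /* assumed mammal-sized genome, ~3.5 Gb    */

        printf("P. falciparum perfect copies: %.1f%%\n", 100.0 * pow(1.0 - r, pf));
        printf("mammalian perfect copies:     %.1f%%\n", 100.0 * pow(1.0 - r, mm));
        return 0;
    }

With these inputs the program prints about 97.7% and 3.0%, close to the comment's 97% and 3%; the exact figures depend on the genome sizes assumed (compile with -lm).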
