
Lobbing a grenade into the Tetrapod Evolution picture


A year ago, Nature published an educational booklet entitled 15 Evolutionary Gems (as a resource for the Darwin Bicentennial). Gem number 2 is Tiktaalik, a well-preserved fish that has been widely acclaimed as documenting the transition from fish to tetrapod. Tiktaalik was an elpistostegalian fish: a large, shallow-water-dwelling carnivore with tetrapod affinities yet possessing fins. Unfortunately, until Tiktaalik, most elpistostegid remains were poorly preserved fragments.

“In 2006, Edward Daeschler and his colleagues described spectacularly well preserved fossils of an elpistostegid known as Tiktaalik that allow us to build up a good picture of an aquatic predator with distinct similarities to tetrapods – from its flexible neck, to its very limb-like fin structure. The discovery and painstaking analysis of Tiktaalik illuminates the stage before tetrapods evolved, and shows how the fossil record throws up surprises, albeit ones that are entirely compatible with evolutionary thinking.”

Just when everyone thought that a consensus had emerged, a new fossil find has been reported, throwing everything into the melting pot (again!). Trackways of an unknown tetrapod have been recovered from rocks dated 10 million years earlier than Tiktaalik. The authors say that the trackways occur in rocks that “can be securely assigned to the lower-middle Eifelian, corresponding to an age of approximately 395 million years”. At a stroke, this rules out not only Tiktaalik as a tetrapod ancestor, but also all known representatives of the elpistostegids. The arrival of tetrapods is now considered to be 20 million years earlier than previously thought, and these tetrapods must now be regarded as having coexisted with the elpistostegids. Once again, the fossil record has thrown up a big surprise, but this one is not “entirely compatible with evolutionary thinking”. It is a find that was not predicted, and it does not fit at all into the emerging consensus.

“Now, however, Niedźwiedzki et al. lob a grenade into that picture. They report the stunning discovery of tetrapod trackways with distinct digit imprints from Zachełmie, Poland, that are unambiguously dated to the lowermost Eifelian (397 Myr ago). This site (an old quarry) has yielded a dozen trackways made by several individuals that ranged from about 0.5 to 2.5 metres in total length, and numerous isolated footprints found on fragments of scree. The tracks predate the oldest tetrapod skeletal remains by 18 Myr and, more surprisingly, the earliest elpistostegalian fishes by about 10 Myr.” (Janvier & Clément, 2010)

The Nature Editor’s summary explained: “The finds suggest that the elpistostegids that we know were late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates.” Henry Gee, one of the Nature editors, wrote in a blog:

“What does it all mean?
It means that the neatly gift-wrapped correlation between stratigraphy and phylogeny, in which elpistostegids represent a transitional form in the swift evolution of tetrapods in the mid-Frasnian, is a cruel illusion. If – as the Polish footprints show – tetrapods already existed in the Eifelian, then an enormous evolutionary void has opened beneath our feet.”

For more, go here:
Lobbing a grenade into the Tetrapod Evolution picture
http://www.arn.org/blogs/index.php/literature/2010/01/09/lobbing_a_grenade_into_the_tetrapod_evol

Additional note: The Henry Gee quote is interesting for the words “elpistostegids represent a transitional form”. In some circles, transitional forms are ‘out’ because Darwinism presupposes gradualism and every form is no more and no less transitional than any other form. Gee reminds us that in the editorial office of Nature, it is still legitimate to refer to old-fashioned transitional forms!

Comments
Mustela Nivalis and R0b: Thank you for your posts, which raise related points. Mustela Nivalis wrote (326):
As it turns out, I actually have read all of your references in my search for a definition of CSI that ID proponents agree uniquely identifies design and that takes into account known physics, chemistry, and evolutionary mechanisms. Unfortunately, none of your referenced materials do that. Most suffer from the assumption of a uniform probability distribution (the tornado in a junkyard fallacy described by Aleta).
R0b wrote (336):
You can scour ID works until doomsday and not find any CSI (or FSCI, or FCSI) analysis that allows for a possibility of Darwinian factors. Assuming tractability of such probability calculations, taking into account physical laws in addition to random mutations could significantly shrink CSI totals. As it is, uniform distributions form a singular basis for CSI claims, and applicability of such quantifications to biological organisms is doubtful.
Here’s a quote from William Dembski’s online 2004 paper, “Irreducible Complexity Revisited” at http://www.designinference.com/documents/2004.01.Irred_Compl_Revisited.pdf (pp. 28 ff.). [Dembski uses similar language in his 2008 book, The Design of Life, The Foundation for Thought and Ethics, Dallas, which he co-authored with Jonathan Wells, pp. 182 ff.] The bold type is mine (VJT); the italics are Dembski’s.
The details here are technical, but the general logic by which design theorists argue that irreducibly complex systems exhibit specified complexity is straightforward: for a given irreducibly complex system and any putative evolutionary precursor, show that the probability of the Darwinian mechanism evolving that precursor into the irreducibly complex system is small. In such analyses, specification is never a problem—in each instance, the irreducibly complex system, any evolutionary precursor, and any intermediate between the precursor and the final irreducibly complex system are always specified in virtue of their biological function. Also, the probabilities here need not be calculated exactly. It’s enough to establish reliable upper bounds on the probabilities and show that they are small. What’s more, if the probability of evolving a precursor into a plausible intermediate is small, then the probability of evolving that precursor through the intermediate into the irreducibly complex system will a fortiori be small. Darwinists object to this approach to establishing the specified complexity of irreducibly complex biochemical systems. They contend that design theorists, in taking this approach, have merely devised a “tornado-in-a-junkyard” strawman. The image of a “tornado in a junkyard” is due to astronomer Fred Hoyle. Hoyle imagined a junkyard with all the pieces for a Boeing 747 strewn in disarray and then a tornado blowing through the junkyard and producing a fully assembled 747 ready to fly. Darwinists object that this image has nothing to do with how Darwinian evolution produces biological complexity. Accordingly, in the formation of irreducibly complex systems like the bacterial flagellum, all such arguments are said to show is that these systems could not have formed by purely random assembly. But, Darwinists contend, evolution is not about randomness. Rather, it is about natural selection sifting the effects of randomness. To be sure, if design theorists were merely arguing that pure randomness cannot bring about irreducibly complex systems, there would be merit to the Darwinists’ tornado-in-a-junkyard objection. But that’s not what design theorists are arguing. The problem with Hoyle’s tornado-in-a-junkyard image is that, from the vantage of probability theory, it made the formation of a fully assembled Boeing 747 from its constituent parts as difficult as possible. But what if the parts were not randomly strewn about in the junkyard? What if, instead, they were arranged in the order in which they needed to be assembled to form a fully functional 747? Furthermore, what if, instead of a tornado, a robot capable of assembling airplane parts were handed the parts in the order of assembly? How much knowledge would need to be programmed into the robot for it to have a reasonable probability of assembling a fully functioning 747? Would it require more knowledge than could reasonably be ascribed to a program simulating Darwinian evolution? Design theorists, far from trying to make it difficult to evolve irreducibly complex systems like the bacterial flagellum, strive to give the Darwinian selection mechanism every legitimate advantage in evolving such systems. The one advantage that cannot legitimately be given to the Darwinian selection mechanism, however, is prior knowledge of the system whose evolution is in question. 
That would be endowing the Darwinian mechanism with teleological powers (in this case foresight and planning) that Darwin himself insisted it does not, and indeed cannot, possess if evolutionary theory is effectively to dispense with design. Yet even with the most generous allowance of legitimate advantages, the probabilities computed for the Darwinian mechanism to evolve irreducibly complex biochemical systems like the bacterial flagellum always end up being exceedingly small. The reason these probabilities always end up being so small is the difficulty of coordinating successive evolutionary changes apart from teleology or goal-directedness.In the Darwinian mechanism, neither selection nor variation operates with reference to future goals (like the goal of evolving a bacterial flagellum from a bacterium lacking this structure). Selection is natural selection, which is solely in the business of conferring immediate benefits on an evolving organism. Likewise, variation is random variation, which is solely in the business of perturbing an evolving organism’s heritable structure without regard for how such perturbations might benefit or harm future generations of the organism. In attempting to coordinate the successive evolutionary changes needed to bring about irreducibly complex biochemical machines, the Darwinian mechanism therefore encounters a number of daunting probabilistic hurdles. These include the following: (1) Availability. Are the parts needed to evolve an irreducibly complex biochemical system like the bacterial flagellum even available? (2) Synchronization. Are these parts available at the right time so that they can be incorporated when needed into the evolving structure? (3) Localization. Even with parts that are available at the right time for inclusion in an evolving system, can the parts break free of the systems in which they are currently integrated and be made available at the “construction site” of the evolving system? (4) Interfering Cross-Reactions. Given that the right parts can be brought together at the right time in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the “construction site” of the evolving system? (5) Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system? (6) Order of Assembly. Even with all and only the right parts reaching the right place at the right time, and even with full interface compatibility, will they be assembled in the right order to form a functioning system? (7) Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system? To see what’s at stake in overcoming these hurdles, imagine you are a contractor who has been hired to build a house. If you are going to be successful at building the house, you will need to overcome each of these hurdles. First, you have to determine that all the items you need to build the house (e.g., bricks, wooden beams, electrical wires, glass panes, and pipes) exist and thus are available for your use. Second, you need to make sure that you can obtain all these items within a reasonable period of time. If, for instance, crucial items are back-ordered for years on end, then you won’t be able to fulfill your contract by completing the house within the appointed time. 
Thus, the availability of these items needs to be properly synchronized. Third, you need to transport all the items to the construction site. In other words, all the items needed to build the house need to be brought to the location where the house will be built. Fourth, you need to keep the construction site clear of items that would ruin the house or interfere with its construction. For instance, dumping radioactive waste or laying high-explosive mines on the construction site would effectively prevent a usable house from ever being built there. Less dramatically, if excessive amounts of junk found their way to the site (items that are irrelevant to the construction of the house, such as tin cans, broken toys, and discarded newspapers), it might become so difficult to sort through the clutter and thus to find the items necessary to build the house that the house itself might never get built. Items that find their way to the construction site and hinder the construction of a usable house may thus be described as producing interfering cross-reactions. Fifth, procuring the right sorts of materials required for houses in general is not enough. As a contractor you also need to ensure that they are properly adapted to each other. Yes, you’ll need nuts and bolts, pipes and fittings, electrical cables and conduits. But unless nuts fit properly with bolts, unless fittings are adapted to pipes, and unless electrical cables fit inside conduits, you won’t be able to construct a usable house. To be sure, each part taken by itself can make for a perfectly good building material capable of working successfully in some house or other. But your concern here is not with some house or other but with the house you are actually building. Only if the parts at the construction site are adapted to each other and interface correctly will you be able to build a usable house. In short, as a contractor you need to ensure that the parts you are bringing to the construction site not only are of the type needed to build houses in general but also share interface compatibility so that they can work together effectively. Sixth, even with all and only the right materials at the construction site, you need to make sure that you put the items together in the correct order. Thus in building the house, you need first to lay the foundation. If you try to erect the walls first and then lay the foundation under the walls, your efforts to build the house will fail. The right materials require the right order of assembly to produce a usable house. Seventh and last, even if you are assembling the right building materials in the right order, the materials need also to be arranged appropriately. That’s why, as a contractor, you hire masons, plumbers, and electricians. You hire these subcontractors not merely to assemble the right building materials in the right order but also to position them in the right way. For instance, it’s all fine and well to take bricks and assemble them in the order required to build a wall. But if the bricks are oriented at strange angles or if the wall is built at a slant so that the slightest nudge will cause it to topple over, then no usable house will result even if the order of assembly is correct. In other words, it’s not enough for the right items to be assembled in the right order; rather, as they are being assembled, they also need to be properly configured. Now, as a building contractor, you find none of these seven hurdles insurmountable. 
That’s because, as an intelligent agent, you can coordinate all the tasks needed to clear these hurdles. You have an architectural plan for the house. You know what materials are required to build the house. You know how to procure them. You know how to deliver them to the right location at the right time. You know how to secure the location from vandals, thieves, debris, weather and anything else that would spoil your construction efforts. You know how to ensure that the building materials are properly adapted to each other so that they work together effectively once put together. You know the order of assembly for putting the building materials together. And, through the skilled laborers you hire (i.e., the subcontractors), you know how to arrange these materials in the right configuration. All this know-how results from intelligence and is the reason you can build a usable house. But the Darwinian mechanism of random variation and natural selection has none of this know-how. All it knows is how to randomly modify things and then preserve those random modifications that happen to be useful at the moment. The Darwinian mechanism is an instant gratification mechanism. If the Darwinian mechanism were a building contractor, it might put up a wall because of its immediate benefit in keeping out intruders from the construction site even though by building the wall now, no foundation could be laid later and, in consequence, no usable house could ever be built at all. That’s how the Darwinian mechanism works, and that’s why it is so limited. It is a trial-and-error tinkerer for which each act of tinkering needs to maintain or enhance present advantage or select for a newly acquired advantage.
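The "a fortiori" step in the quoted passage is just the product rule for a chained path. As a minimal sketch, assuming the route is modeled as precursor, then intermediate, then final system (an illustrative framing, not Dembski's notation):

```latex
% Illustrative only: a path probability factors as a product,
% so it can never exceed its smallest factor.
\[
P(\text{precursor} \to \text{system})
  = P(\text{precursor} \to \text{intermediate})
    \cdot P(\text{intermediate} \to \text{system} \mid \text{precursor} \to \text{intermediate})
  \le P(\text{precursor} \to \text{intermediate})
\]
```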
I hope that quote answers most of your questions. Later, I'll discuss how Meyer addresses the same question, and I'll contrast it with the approach taken by Kalinsky.

vjtorley
January 24, 2010 at 09:29 PM PDT
Nakashima at 353: I agree. However, if someone wants to suggest that man has never created by unique thought, then it's fair to ask that person to follow their suggestion to its logical end. By the way, I don't think there is any indication that a person who hallucinates invents anything they are not already aware of. - - - - - - - Hello Zeph (at 380), You say: "It seems to me that evolution must always be seen as steering towards “survival and propagation” within some environment." Evolution cannot be viewed this way at all, ever. The mechanism to drive evolution must be blind with regard to fitness and function. To say anything else is to add foresight and intent. For the materialist, function and fitness must arise without either. This is, of course, something they have not done. (Nor have they offered a material explanation for the elemental drive toward "survival and propagation" in the first place.) You then say: "The whole point is that feedback loop – more adaptive changes being “rewarded” with higher survival rates." There is no evidence of anything in the genome tracking the survival rates of a population, which is a measure that would logically need to exist in order to drive adaptation toward increasing it. The reward you speak of is said to be the output of selection, but the mechanism that powers selection cannot be anything more than random chance - and as such - it is not the recipient of a (non-existent) feedback loop. "What you have to avoid is injecting “designer foreknowledge”." Yep.

Upright BiPed
January 24, 2010 at 09:26 PM PDT
"I’m wondering how may ID advocates would agree with you that ID is not a science as most people understand the term" I am a thorn in the side of a lot of people and what I say in my comments has never been challenged by the pro ID people though I am sure some would not agree with them. On a lot thing we do not agree and I have learned over time a lot from the pro ID people as they have questioned my assessment. I try not to say things that cannot be backed up and some times I exaggerate to make a point. I don't believe anything I say would be questioned by Behe, Dembski or Meyers based on my read of what they wrote. Dembski has twice criticized me here but for other reasons, nothing to do with my take on ID and I was twice sent into moderation by Dave Springer when he ran the site because he disagreed with me. I doubt if Dembski reads 1% of what I write unless it is about something he posts. He has a lot better things to do. I am willing to hear from people on my take about how ID fits into science. ID is just another way of analyzing data within a particular science domain. As it happens the most important one is biology and the sub disciplines of evolutionary biology and origins of life. There does not seem to be any room for it in such things as thermodynamics, chemistry, astronomy, plate tectonics, meteorology etc. In such areas as anthropology, archaeology, forensics, cryptology there seems to be a use because no one questions that intelligence intervention has an affect. It is just this intermediary area of life that generates the heat. It also has some place in cosmology and the origins of the universe. "It sounds like Darwinian evolution is a science because it operates within that paradigm for better or worse. It can turn out to be a false or incomplete theory, which will become discredited eventually, while still remaining a *scientific* theory in nature." One of the books I suggested you read is Denton's book, "Evolution a Theory in Crisis." He makes a distinction between what he calls the Darwin's general theory of evolution and Darwin's special theory of evolution. Darwin never made this distinction but what Denton means is micro evolution and macro evolution. He calls what is micro evolution, Darwin's special theory and there is not much controversy there. The general theory is that all changes over time follow the microevolutionary path but the evidence does not support it. The Achilles heel of Darwinian processes is the building of information. If they had anything of consequence, the critics here would be all over it. Instead we get computer algorithms and nothing in real life. People point to small changes in information like it is pivotal or consequential when it reality it is ho hum. The Edge of Evolution is all about this and how little the changes are when large scale changes are necessary to get to major new capabilities. This all assumes that you understand basic biology and role of DNA and the transcription/translation process. "If I’m understanding you, ID does not even propose to meet that challenge of scientifically supplanting Darwinian evolution with a better theory, as a new theory in physics might." Re-read my comments where I talk about what ID is about. Intelligent intervention is not a process that you can measure with an experiment. That is what natural laws are about. Intelligent intervention could be a one time event and would show up when the natural laws do not play out as expected. When that happens, one searches for alternative explanations. 
ID says that one alternative might be intelligent intervention. In 99.999% of the time there is no possibility of intelligent intervention. In some rare cases it seems it could be a possibility and that percentage above is a little less for evolutionary biology. In other words there are definitely some places within that particular discipline where intelligent alternatives make sense but it is still a small percentage. Here again are my thoughts on this. https://uncommondescent.com/intelligent-design/lenny-susskind-on-the-evolution-of-physicists/#comment-326046 "In honesty, I haven’t yet understood what ID is in your view, if it’s not a science like others; the analogy with statistics didn’t make sense yet" You are confusing a domain of inquiry with the possible causes for events within that domain. It is not like a discipline such as evolutionary biology which has a domain of inquiry. It is a potential explanation or conclusion of a finding within that domain. In other words it expands the possible causes for an event. Right now science a priori rules it out in certain domains of inquiry. One of the domains is evolutionary biology. It does not rule it out in anthropology or archaeology. You are trying to absorb in a few days what it has taken a long time to come to and after the reading of several books, watching videos, reading comments here and elsewhere etc. Essentially 10 years of looking at this topic.jerry
jerry
January 24, 2010 at 08:52 PM PDT
From Abel's paper cited in #:
Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products.
This is an interesting question. It seems to me that evolution must always be seen as steering towards "survival and propagation" within some environment - whether an ecosystem or a genetic algorithm's synthetic universe. The whole point is that feedback loop - more adaptive changes being "rewarded" with higher survival rates. What you have to avoid is injecting "designer foreknowledge". For example, suppose you are training a GA-based system in facial recognition. Feedback of "more success" vs "less success" is integral. But if the programmer were to give survival rewards to "better contrast detection" based on intelligent knowledge from outside this system - that this ability will aid the final product - that would be injecting design knowledge. That is, rewarding contrast enhancement would be based on predicting future payoffs, and evolution doesn't do that. However, rewarding simple success is not only fair play, it's the core concept.
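To make the two reward policies concrete, here is a minimal sketch in Python. The bit-string genome, the stand-in fitness measure, and the "contrast trait" bonus are illustrative assumptions, not taken from any particular GA library or from the facial-recognition example above.

```python
import random

GENOME_LEN = 20

def task_success(genome):
    """Reward only end-to-end success on the task (fair play:
    the feedback loop rewards survival/propagation, nothing else)."""
    return sum(genome) / GENOME_LEN  # stand-in for measured recognition accuracy

def success_plus_foreknowledge(genome):
    """Same reward, plus a bonus for a trait the *programmer* knows will pay off
    later (e.g. 'contrast detection') -- injecting design knowledge."""
    contrast_trait_bonus = 0.5 if genome[0] == 1 else 0.0  # assumed proxy trait
    return task_success(genome) + contrast_trait_bonus

def evolve(fitness, generations=50, pop_size=30, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # selection: fittest reproduce
        parents = pop[: pop_size // 2]
        children = [[1 - g if random.random() < mutation_rate else g for g in p]
                    for p in parents]             # variation: blind mutation
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best_fair = evolve(task_success)
    best_steered = evolve(success_plus_foreknowledge)
    print("task success, fair reward:   ", round(task_success(best_fair), 2))
    print("task success, steered reward:", round(task_success(best_steered), 2))
```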
Zeph
January 24, 2010 at 06:12 PM PDT
vjtorley, You may have missed the little exchange that took place over on the gravity thread, but it has implications for your discussion here. One of the posters there listed links that supposedly show information growing in genomes. I followed one of them and it led me to a page which had a host of references to this topic. Here is my post from that thread: "I went to the Adami article referenced by caminintx and then to the articles citing it, and it is a veritable gold mine of stuff on complexity and information, with lots of full-text articles. Here is the link I accessed http://scholar.google.com/scholar?q=link:http%3A%2F%2Fwww.pnas.org%2Fcgi%2Fcontent%2Fabstract%2F97%2F9%2F4463 It seems all their data for complexity arising is in computer programs and not actual genomes. You would think that genomes are where the action should be." We should try to move this discussion someplace else since it is getting very long.
jerry
January 24, 2010 at 05:48 PM PDT
Jerry, you wrote: "Just a quick observation. ID is not a science such as physics, thermodynamics, evolutionary biology, plate tectonics but rather a supplementary way of analyzing the same data from these various disciplines that have been analyzed by other scientists. Is there a science of Intelligent Design? There might be in the future similar to statistics." Your take on ID is certainly an interesting one - including the list of things you assert ID accepts in one of the comments you referenced for me. Your view of ID is quite different from the mainstream stereotypes, and I thank you for explaining it so clearly. I'm wondering how many ID advocates would agree with you that ID is not a science as most people understand the term, though in the future it might become something sort of like statistics (forgive and correct my short paraphrase). It sounds like Darwinian evolution is a science because it operates within that paradigm for better or worse. It can turn out to be a false or incomplete theory, which will become discredited eventually, while still remaining a *scientific* theory in nature. Certainly some of the challenge to Darwinian evolution appears to be scientific to me, and indeed can exist within the same framework. For example, many of the complexity arguments, correct or not in the end, are quite scientific in nature. But these are not per se defining ID; they are disputing the current orthodoxy. As you say, flaws in one naturalist theory don't prove a non-natural theory, as they may just once again show the need for a revised or new naturalistic theory. However, once the hegemony of Darwinian explanations has been challenged, there is breathing room for another scientific theory to better explain the phenomena and answer similar challenges in turn. If I'm understanding you, ID does not even propose to meet that challenge of scientifically supplanting Darwinian evolution with a better theory, as a new theory in physics might. It proposes a non-naturalistic explanation of sorts which will remain very vague ("intelligent design was involved" appears to be just about the whole corpus of results that I've seen so far, with all efforts aimed at sustaining that conclusion rather than elaborating or refining it). As such ID will never be subject to the same kind of "naturalistic" scientific scrutiny that Darwinian evolution must sustain. While defeating Darwinian evolution needs scientific evidence, because this is done within the conventional scientific framework, ID doesn't, because it's not really a science but more of a perspective or philosophical interpretation or approach to analyzing data. (In honesty, I haven't yet understood what ID is in your view, if it's not a science like others; the analogy with statistics didn't make sense yet.) I'm sure I'm misunderstanding your take on ID in many ways (and perhaps almost entirely), but putting this reflection & paraphrase on the table might help in clarifying.
Zeph
January 24, 2010 at 05:38 PM PDT
Mr Zeph, I'm in violent agreement with you; it is just a question of nuance and an awareness of the counterexamples. Since DNA is read sequentially, there are mutations and indels that cause frame-shift errors, which can disable (or enable) the reading of arbitrarily large amounts of DNA code. In developing Genetic Programming, John Koza was careful to work with instruction sets of 'virtual machines' that never faulted - all programs always executed. For example, the divide operator was protected so that division by zero was caught and forced to return a value.
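A sketch of the kind of protected primitive Koza describes, written here in Python rather than Koza's LISP; the convention of returning 1 on division by zero follows the usual Genetic Programming textbook treatment and is an assumption, not a quotation of his implementation.

```python
def protected_div(a, b):
    """Division primitive for a GP instruction set: it never faults,
    so every randomly generated program still executes."""
    if b == 0:
        return 1.0          # conventional fallback value instead of raising
    return a / b

# A randomly assembled expression tree can now be evaluated safely:
# e.g. protected_div(x, x - x) simply yields 1.0 rather than crashing.
print(protected_div(6.0, 3.0))   # 2.0
print(protected_div(6.0, 0.0))   # 1.0
```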
Nakashima
January 24, 2010 at 05:11 PM PDT
Speaking of machine language, I once ran into an interesting mutation... Well, actually it was a bug, involving one bit set wrong. This was in a galaxy long ago and far away called CP/M. An assembly language program I was developing hung the computer, except when running under the debugger, where it worked fine, so I had to trace it step by step. I had a branch wrong (jump when zero rather than jump not zero), and as a result it tried to do a premature "return", which popped garbage off the stack and jumped to it. In normal usage this resulted in a hung computer - jumping into random locations is often like that. When I traced it down, it turned out that under the debugger the stack was just before the code, and the first two bytes of the program's code, when interpreted as an address popped off the stack, happened to be EXACTLY the address after the buggy branch where it would have gone without the bug, so execution proceeded exactly as it was supposed to when running under the debugger (and for those who understand the implications here, the first two bytes of the code were never executed again, so overwriting them when pushing the next call onto the stack didn't hurt anything). How likely was that? Here we have a case of a (nominally) intelligent designer's mistake being corrected by "random chance", at least in terms of functionality. I wonder if the designer of Life on Earth included microevolution mechanisms (to use the framework of ID) to help correct their own minor oversights or errors? (smile)
Zeph
January 24, 2010 at 04:59 PM PDT
Nakashima, Thanks for responding; I've admired your posts. Re: one bit change in software being likely to break something, maybe something major. It's true that some bits are not critical, and the part broken might not matter (eg: it's in an error routine that is almost never going to be called anyway). I don't have any stats on what portion of machine code is how critical. I stand by the basic concept however - Intel Pentium machine instructions are poorly designed for mutation and selection. For example, there is very little redundancy in the sense of having and using three copies of some code block in parallel, such that if any one of the copies becomes disabled, the other two can carry most of the load. Essentially DNA/RNA involves a lot of parallel templates; breaking one copy often affects no others, or few others. On the other hand, with machine language, which is sequentially executed, a one-bit error can - not rarely - mess up the functioning of an indefinitely large portion of the rest of the code (like never executing it). It's all a matter of degree, but machine code is not well adapted to keep most of its function in the face of internal mutation. Nor should it be - not needed, different environment. So I would not tend to try a genetic algorithm which scrambled and tested Pentium machine language instructions. One of the useful things about DNA/RNA is that you can carry around a partially "damaged" fragment (mutated) for some while, if other redundant fragments carry on any essential functions (and the mutation isn't actively harmful). This means there's relatively more chance of two "defective" mutations encountering each other to find some synergy, or of a second mutation of the same segment occurring. At least compared to sequentially executed code like a computer's. I'm not saying that's "enough" to solve the complexity issues brought up by ID, or that it isn't, but this robustness of DNA/RNA encoding is part of the toolkit which can't be ignored. Whether originally evolved from simple components or designed, DNA/RNA-based life today is pretty well adapted for evolving, ie: for mostly error-free copying after corrections, along with error tolerance, with parallel template operation rather than sequential, with sexual mixing for many species, etc. (Intel Pentium instruction sets are well designed for different purposes.)
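A toy illustration of that degree-of-robustness point, assuming a made-up bit-string "payload" checked either from a single copy or by majority vote over three redundant copies; it is only meant to show how parallel templates blunt single-bit damage, not to model real genomes or real machine code.

```python
import random

PAYLOAD = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # 32-bit stand-in for a functional unit

def flip_random_bit(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def intact(bits):
    return bits == PAYLOAD

def majority_vote(copies):
    return [1 if sum(col) >= 2 else 0 for col in zip(*copies)]

def trial_single_copy():
    """One mutation, one copy: the function is always damaged."""
    return intact(flip_random_bit(PAYLOAD[:]))

def trial_triple_redundant():
    """One mutation hits one of three copies: the vote masks the damage."""
    copies = [PAYLOAD[:], PAYLOAD[:], PAYLOAD[:]]
    victim = random.randrange(3)
    copies[victim] = flip_random_bit(copies[victim])
    return intact(majority_vote(copies))

if __name__ == "__main__":
    n = 10_000
    print("single copy survives a bit flip:     ",
          sum(trial_single_copy() for _ in range(n)) / n)
    print("3-way redundant survives a bit flip: ",
          sum(trial_triple_redundant() for _ in range(n)) / n)
```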
Zeph
January 24, 2010 at 04:44 PM PDT
Mr Vjtorley, Thank you for the link to Abel 2005. Figure 4 reinforces for me my fundamental problem with much of Abel's work. His text might say in one place that "FSC alone provides algorithmic instruction," which would lead you to believe that FSC is a category that does not overlap with OSC and RSC. But this diagram informs us that OSC and RSC shade into each other, and FSC is just some example of complexity that exhibits function. He hasn't said, for any given measurement of complexity, that it can't be functional - just that for some it is improbable that they are functional. I'm sorry my question about digit sequences was unclear. I was giving examples of several numbers that have decimal representations which are infinite sequences. For the purposes of my question, I was asking for a consideration of the 100 digits in each sequence that started at digit 1000 and went onward in the sequence. The point was that in each case the digit sequence contained various qualities of order, randomness and function, yet Abel's qualitative categories really don't distinguish any of them from another. In general, Abel's papers suffer from a lack of awareness of workers such as Yarus and the stereochemical hypothesis.
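For readers who want to look at the actual sequences in question, a short sketch using Python with the mpmath library (an assumed dependency) to pull out the 100 digits beginning at decimal place 1000 for each of the numbers listed.

```python
from mpmath import mp, mpf, sqrt, pi, e, phi

mp.dps = 1200  # enough working precision to trust digits out to place ~1100

numbers = {
    "1/7": mpf(1) / 7,
    "22/7": mpf(22) / 7,
    "sqrt(2)": sqrt(2),
    "phi": +phi,   # unary + forces evaluation of the constant at current precision
    "pi": +pi,
    "e": +e,
}

def digits_window(x, start=1000, length=100):
    """Return `length` decimal digits of x, beginning at decimal place `start`."""
    frac = x - int(x)                                   # fractional part
    block = int(frac * mpf(10) ** (start + length - 1)) # digits 1..start+length-1
    return str(block % 10 ** length).zfill(length)      # keep the last `length` digits

for name, value in numbers.items():
    print(f"{name:8s} {digits_window(value)}")
```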
Nakashima
January 24, 2010 at 04:34 PM PDT
vjtorley #171,172: Excellent comments. Both of those papers are great, and one of them is an excellent reiteration and further explanation of the probability bound and how to utilize it to effectively eliminate chance -- basically an "expansion" of Dembski's CSI.

CJYman
January 24, 2010 at 03:11 PM PDT
Why would ID take Darwinian mechanisms into account, when evolution is the best example of intelligence, chance, and law working together? It is the non-IDer who needs to explain the presence of replicating, information-processing systems, and the fortuitous matching between non-uniform search space and search algorithm necessary for an evolutionary algorithm. It is actually the ID critic who needs to account for the above "Darwinian mechanisms" beginning from a randomly chosen set of laws, absent intelligence (Random.org could be useful in this effort). The materialist also needs to provide an account of where events neither best defined by the physical/material/measurable properties of matter/energy nor best defined by chance come from, without providing the "magical emergence" cop-out. These events would include arrangements of letters (ie: an essay), arrangements of parts (ie: machines), and arrangements of nucleotides (genetic information). If those events aren't defined by physical properties of matter/energy (laws) or chance, and if they are routinely seen to result from the application of foresight, then where should we look to begin to explain such events?
CJYman
January 24, 2010 at 03:06 PM PDT
I will respond later to the oft-repeated claims on this thread that ID doesn't take Darwinian mechanisms into account, and that ID arguments make use of the "tornado-in-a-junkyard" fallacy. These statements, as it turns out, are canards, as readers of Dembski and Wells' "The Design of Life" and Meyer's "Signature in the Cell" should be aware.

vjtorley
January 24, 2010 at 02:43 PM PDT
Nakashima (#329) Thank you for your post. You write:
Abel never proves that his categories of OSC, RSC, and FSC do not overlap or are complete, nor provides an effective procedure for deciding into which category something will fall....
I refer you to Figure 4 in Abel and Trevors Theoretical Biology and Medical Modelling 2005 2:29 doi:10.1186/1742-4682-2-29. See: http://www.tbiomed.com/content/2/1/29/figure/F4 . Note the caption:
Superimposition of Functional Sequence Complexity onto Figure 2. The Y_1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y_2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents "what works best." The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale. Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function.
The reason why Abel never proves that his categories of OSC, RSC, and FSC do not overlap is that by his own admission, they do overlap. Functional Sequence Complexity (FSC) applies to sequences that are generally high in Random Sequence Complexity (RSC). However, they have an extra dimension of complexity on top of this, as the graph clearly shows. That's why FSC is shown on the Z-axis. You also wrote:
If we examined digits 1000-1100 from the numbers 1/7, 22/7, 2^(1/2), phi, pi, and e, how would Abel sort them into those three categories and/or the additional category “none of the above”?

I have to say I am mystified by this comment of yours. First, the terms RSC, OSC and FSC apply only to sequences, not digits. The digits 1000-1100 clearly exhibit OSC: they're ordered. The remaining numbers, 1/7, 22/7, 2^(1/2), phi, pi, and e, are for the most part of mathematical significance. Although it would be safe to bet that they were picked by an intelligent agent, they perform no function as a sequence. Thus they do not exhibit FSC. I suppose Abel would just have to say they exhibit RSC, unless you could nominate a reason why you picked those numbers in that particular sequence. Did they just randomly pop into your head? Hope that helps.
vjtorley
January 24, 2010 at 02:40 PM PDT
Mustela Nivalis (#335) Thank you for your post. You wrote:
You have yet to define FSCI in a mathematically rigorous fashion, nor have you shown it to take into consideration known physics, chemistry, or evolutionary mechanisms, nor have you demonstrated that it is a clear indication of intelligent intervention.
I refer you to: Abel, D. “The Capabilities of Chaos and Complexity,” in International Journal of Molecular Sciences, 2009, 10, pp. 247-291, at http://mdpi.com/1422-0067/10/1/247/pdf . I quote (here X_f means X with a subscript f; X_g means X with a subscript g; and t_i means t with a subscript i):
Durston and Chiu have developed a theoretically sound method of actually quantifying Functional Sequence Complexity (FSC) [77]. This method holds great promise in being able to measure the increase or decrease of FSC through evolutionary transitions of both nucleic acid and proteins. This FSC measure, denoted as Xi, is defined as the change in functional uncertainty from the ground state H(X_g(t_i)) to the functional state H(X_f(t_i)), or

Xi = delta H(X_g(t_i), X_f(t_j))   (3)

The ground state g of a system is the state of presumed highest uncertainty permitted by the constraints of the physical system, when no specified biological function is required or present. Durston and Chiu wisely differentiate the ground state g from the null state H_0. The null state represents the absence of any physicodynamic constraints on sequencing. The null state produces bona fide stochastic ensembles, the sequencing of which was dynamically inert (physicodynamically decoupled or incoherent [196, 197]). The FSC variation in various protein families, measured in Fits (Functional bits), is shown in Table 1, graciously provided here by Durston and Chiu. In addition to the results shown in Table 1, they performed a more detailed analysis of ubiquitin, plotting the FSC values out along its sequence. They showed that 6 of the 7 highest value sites correlate with the primary binding domain [77].

Table 1. FSC of Selected proteins. Supporting data from the lab of Kirk Durston and David Chiu at the University of Guelph showing the analysis of 35 protein families. [Columns - VJT: Name of Protein; 1. Length (aa); 2. Number of Sequences; 3. Null State (Bits); 4. FSC (Fits); 5. Average Fits/Site.]

Protein                Length (aa)   Sequences   Null State (Bits)   FSC (Fits)   Fits/Site
Ankyrin                33            1,171       143                 46           1.4
HTH 8                  41            1,610       177                 76           1.9
HTH 7                  45            503         194                 83           1.8
HTH 5                  47            1,317       203                 80           1.7
HTH 11                 53            663         229                 80           1.5
HTH 3                  55            3,319       238                 80           1.5
Insulin                65            419         281                 156          2.4
Ubiquitin              65            2,442       281                 174          2.7
Kringle domain         75            601         324                 173          2.3
Phage Integr N-dom     80            785         346                 123          1.5
VPR                    82            2,372       359                 308          3.7
RVP                    95            51          411                 172          1.8
Acyl-Coa dh N-dom      103           1,684       445                 174          1.7
MMR HSR1               119           792         514                 179          1.5
Ribosomal S12          121           603         523                 359          3.0
FtsH                   133           456         575                 216          1.6
Ribosomal S7           149           535         644                 359          2.4
P53 DNA domain         157           156         679                 525          3.3
Vif                    190           1,982       821                 675          3.6
SRP54                  196           835         847                 445          2.3
Ribosomal S2           197           605         851                 462          2.4
Viral helicase1        229           904         990                 335          1.5
Beta-lactamase         239           1,785       1,033               336          1.4
RecA                   240           1,553       1,037               832          3.5
tRNA-synt 1b           280           865         1,210               438          1.6
SecY                   342           469         1,478               688          2.0
EPSP Synthase          372           1,001       1,608               688          1.9
FTHFS                  390           658         1,686               1,144        2.9
DctM                   407           682         1,759               724          1.8
Corona S2              445           836         1,923               1,285        2.9
Flu PB2                608           1,692       2,628               2,416        4.0
Usher                  724           316         3,129               1,296        1.8
Paramyx RNA Pol        887           389         3,834               1,886        2.1
ACR Tran               949           1,141       4,102               1,650        1.7
Random sequences       1000          500         4,321               0            0
50-mer polyadenosine   50            1           0                   0            0

Shown are sequence lengths (column 1), the number of sequences analyzed for each family (column 2), the Shannon uncertainty of the Null State H_0 (the absence of any physicodynamic constraints on sequencing: dynamically inert stochastic ensembles) for each protein (column 3), the FSC value Xi in Fits for each protein (column 4), and the average Fit value/site (FSC/length, column 5). For comparison, the results for a set of uniformly random amino acid sequences (RSC) are shown in the second from last row, and a highly ordered, 50-mer polyadenosine sequence (OSC) in the last row.
The Fit values obtained can be discussed as the measure of the change in functional uncertainty required to specify any functional sequence that falls into the given family being analyzed. (Used with permission from Durston, K.K.; Chiu, D.K.; Abel, D.L.; Trevors, J.T. Measuring the functional sequence complexity of proteins. Theor Biol Med Model 2007, 4, Free on-line access at http://www.tbiomed.com/content/4/1/47).
I have to say that this looks pretty "mathematically rigorous" to me. As for your assertion that the paper does not "take into consideration known physics, chemistry, or evolutionary mechanisms," I looked through the paper, and verified that it discusses the following models for the origin of life, and examines their deficiencies: the RNA World and pre-RNA World models [refs. 208, 209], clay life [210-213]; early three-dimensional “genomes” [214, 215]; “Metabolism/Peptide First” [216-219]; “Co-evolution” [220-223]; “Simultaneous nucleic acid and protein” [224-226]; “Two-Step” models of life-origin [227-229]; autopoiesis [230-232]; complex adaptive systems (CAS) [137, 237, 238]; genetic algorithms [140, 194, 298, 314, 315]; hypercycles [42-49]; and “the Edge of Chaos” [7, 8, 21, 22, 50-57, 198, 316-328]. That sounds pretty comprehensive to me. Or would you like to propose another model? What about "a clear indication of intelligent intervention"? Simple enough. I suggest you look at the introduction (pp. 248-250) and the conclusion (pp. 275-276), from which I quote the following excerpts (bold type mine - VJT):
If Pasteur and Virchow’s First Law of Biology (“All life must come from previously existing life”) is to be empirically falsified, direct observation of spontaneous generation is needed. In the absence of such empirical falsification, a plausible model of mechanism at the very least for both Strong and Type IV emergence (formal self-organization) is needed... Attempts to relate complexity to self-organization are too numerous to cite [4, 21, 169-171]. Under careful scrutiny, however, these papers seem to universally incorporate investigator agency into their experimental designs. To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory [172, 173]. Evolution has no goal [174, 175]. Evolution provides no steering toward potential computational and cybernetic function [4, 6-11]. The theme of this paper is the active pursuit of falsification of the following null hypothesis: “Physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.”... Let the reader provide the supposedly easy falsification of the null hypothesis. Inability to do so should cause pangs of conscience in any scientist who equates metaphysical materialism with science. On the other hand, providing the requested falsification of this null hypothesis would once-and-for-all end a lot of unwanted intrusions into science from philosophies competing with metaphysical materialism... The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated. Scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut [9]: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.
Abel is issuing a challenge that methodological materialists have thus far failed to meet. Pasteur and Virchow’s First Law of Biology (“All life must come from previously existing life”) has yet to be empirically falsified. By default, ID is the only hypothesis still standing.
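For readers wondering what such a calculation involves in practice, here is a minimal sketch of a Durston-Chiu-style measure: the per-site Shannon uncertainty of a uniform null state minus the per-site uncertainty observed in an aligned protein family, summed over sites to give Fits. The toy alignment and the uniform 20-amino-acid null state are assumptions made for illustration; the published method includes further steps (sequence weighting, gap handling) not reproduced here.

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def site_uncertainty(column):
    """Shannon uncertainty H (in bits) of one aligned column."""
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def fits(alignment):
    """Functional Sequence Complexity, in Fits: sum over sites of
    H(null) - H(functional), with a uniform 20-letter null state."""
    h_null = math.log2(len(AMINO_ACIDS))   # about 4.32 bits per site, as in Table 1
    columns = zip(*alignment)              # assumes equal-length, gap-free sequences
    return sum(h_null - site_uncertainty(col) for col in columns)

# Tiny made-up "family" of aligned sequences (purely illustrative):
family = [
    "MKTAYIAKQR",
    "MKTAYIAKQR",
    "MKSAYIAKQR",
    "MKTAYLAKQR",
]
print(f"FSC of toy family: {fits(family):.1f} Fits")
```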
vjtorley
January 24, 2010 at 02:19 PM PDT
Walter Kloover (#346) Thank you for your post. You wrote:
The first sentence of paragraph (c) says (among other things) that CSI is information that is specified. The second sentence explains what it means for an event to be specified. I don’t usually think of information as an event. Might it be better to say a thing is specified? Secondly, if as stated in paragraph (d), information is just a measure of complexity, what does it mean to say that information (itself a measurement of complexity) can be specified and complex?
Regarding your first query, I would agree with you that specification is best attributed to things or objects (such as strings of characters), although I suppose you could also attribute it to an event such as the manifestation of the object in question, or the relationship between its constituents. Indeed, Dembski's definition of specified complexity (in Dembski, W. A. and Wells, J. “The Design of Life.” 2008. Foundation for Thought and Ethics, Dallas), says that information can be a property of objects (p. 320):
An event or object exhibits specified complexity provided that (1) the pattern to which it conforms identifies a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). Specified complexity is a type of INFORMATION.
In answer to your second query: if a thing, or object (e.g. a string of characters) can be specified, it can also be complex. Its complexity can be measured by its mathematical improbability. Using Dembski's definition, its specificity can be measured by the brevity of its description. To the extent that it is highly improbable, it can be said to contain information. Of course, if the term "information" simply refers to the mathematical improbability (i.e. probabilistic complexity) of the string in question, then the term "specified information" has no meaning. You can say that a string is specified, but you can't say that a number is. But if the term "information" is used to denote a string of characters possessing the trait of probabilistic complexity, then it makes sense to say that the same string also possesses the property of specificity. For instance, Dembski, 2008, defines complex specified information on p. 311 as "information that is both complex and specified" - in other words, the property possessed by a string which is highly improbable and easy to describe.
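As a purely numerical illustration of "complexity measured by mathematical improbability," the sketch below computes the probabilistic complexity, in bits, of a character string under a uniform chance hypothesis (i.e. -log2 of its probability). This is only the complexity half of the definition and an assumed toy model; Dembski's full procedure also requires an independent specification and an accounting of probabilistic resources, neither of which is attempted here.

```python
import math
import string

ALPHABET = string.ascii_uppercase + " "   # 27 equiprobable symbols (assumption)

def probabilistic_complexity_bits(s):
    """-log2 P(s) under a uniform chance hypothesis over ALPHABET."""
    p_per_char = 1.0 / len(ALPHABET)
    return -len(s) * math.log2(p_per_char)

phrase = "METHINKS IT IS LIKE A WEASEL"
print(f"{len(phrase)} characters, "
      f"{probabilistic_complexity_bits(phrase):.1f} bits of probabilistic complexity")
```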
vjtorley
January 24, 2010 at 12:23 PM PDT
"So, what does it reveal about someone who never says anything of substance and continually try to evade answering direct questions?" I don't know but it describes perfectly the anti id people here.jerry
January 24, 2010
January
01
Jan
24
24
2010
12:00 PM
12
12
00
PM
PDT
Osteonectin (#347) Thank you for your post. Sorry for not getting back to you sooner. You wrote:
According to your comment at 310
Meyer also defines functional complex specified information (FCSI).
Do you have a reference for this?
My apologies; I should have been a little more precise. Dr. Meyer doesn't use that exact term, but he does use the term "complex functional specificity" (his italics) on page 388 of his book. Here's a full quote (pages 387-388). The italics are Meyer's; the bold type is mine (VJT).
Though information theory has a limited application in describing biological systems, it has succeeded in rendering quantitative assessments of the complexity of biomacromolecules. Further, experimental work has established the functional specificity of the base sequences in DNA and amino acids in proteins. Thus the term "information" as used in biology refers to two real and contingent properties: complexity and functional specificity. Since scientists began to think seriously about what would be required to explain the phenomenon of heredity, they have recognized the need for some feature or substance in living organisms possessing precisely these two properties together. Thus Erwin Schrodinger envisioned an aperiodic crystal; (19) Erwin Chargaff perceived DNA's capacity for "complex sequencing";(20) James Watson and Francis Crick equated complex sequences with "information," which Crick in turn equated with "specificity";(21) Jacques Monod equated irregular specificity in proteins with the need for a "code";(22) and Leslie Orgel characterized life as "specified complexity."(23) The physicist Paul Davies has more recently argued that the "specific randomness" of DNA base sequences constitutes the central mystery surrounding the origin of life.(24) Whatever the terminology, scientists have recognized the need for, and now know several locations of, complex specificity in the cell, information crucial for transmitting heredity and maintaining biological function. The incorrigibility of these descriptive concepts suggests that specified complexity constitutes a real property of biomacromolecules - indeed, a property that could be otherwise, but only to the detriment of cellular life. Indeed, [page 388] recall Orgel's observation that "Living organisms are distinguished by their specified complexity. Crystals ... fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity."(25) The origin of specified complexity, to which the term "information" in biology commonly refers, therefore does require explanation, even if the concept of information connotes only complexity in Shannon information theory, and even if it connotes meaning in common parlance, and even if it has no explanatory or predictive value in itself. Instead, as a descriptive (rather than explanatory or predictive) concept, the term "information" (understood as specified complexity) helps to define an essential feature of life that origin-of-life researchers must explain "the origin of." So, only where information connotes subjective meaning does it function as a metaphor in biology. Where it refers to complex functional specificity, it defines a feature of living systems that calls for explanation every bit as much as, say, a mysterious set of inscriptions inside a cave." References (19) Schrodinger, Erwin. What is Life? Mind and Matter, 82. Cambridge: Cambridge University Press, 1967. (20) Alberts, Bruce D., Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts, and James D. Watson. Molecular Biology of the Cell, 21. New York: Garland, 1983. (21) (a) Watson, James D. and Francis H. C. Crick. "A Structure for Deoxyribose Nucleic Acid." Nature 171 (1953): 737-38. (b) Watson, James D. and Francis H. C. Crick. "Genetical Implications of the Structure of Deoxyribonucleic Acid." Nature 171 (1953): 964-67. (c) Crick, Francis. "On Protein Synthesis." Symposium for the Society of Experimental Biology 12 (1958): 138-163. (22) Judson, Horace Freeland.
The Eighth Day of Creation: Makers of the Revolution in Biology, 611. Exp. ed. Plainview, NY: Cold Spring Harbor Laboratory Press, 1996. (23) Orgel, Leslie E. The Origins of Life, 189. New York: Wiley, 1973. (24) Davies, Paul. The Fifth Miracle, 120. New York: Simon & Schuster, 1999. (25) Orgel, Leslie E. The Origins of Life, 189. New York: Wiley, 1973.
Here's another quote, from page 359 of Dr. Meyer's book, on the same theme. The bold type is mine (VJT):
Since specifications come in two closely related forms, we detect design in two closely related ways. First, we can detect design when we recognize that a complex pattern of events matches or conforms to a pattern that we know from something else we have witnessed… Second, we can detect design when we recognize that a complex pattern of events has a functional significance because of some operational knowledge that we possess about, for example, the functional requirements or conventions of a system.
Hope that helps.
vjtorley
January 24, 2010, 11:07 AM PDT
Jerry:
"As I said people reveal themselves by their comments."
So, what does it reveal about someone who never says anything of substance and continually tries to evade answering direct questions?
efren ts
January 24, 2010, 10:59 AM PDT
"Well-defined vocabulary makes constructive discussion possible. Poorly or ambiguously defined key concepts get in the way of productive conversation" If you have paid attention over the years you would know that FSCI is well defined and by the way FSCI is discussed on this thread in some detail. Spoken like a true whiney 6 year old. As I said people reveal themselves by their comments.jerry
January 24, 2010, 9:20 AM PDT
Well-defined vocabulary makes constructive discussion possible. Poorly or ambiguously defined key concepts get in the way of productive conversation. I don't think this is a principle limited to six-year-olds.
Aleta
January 24, 2010, 9:13 AM PDT
"My question was rather if Dr. Meyer defined FCSI/FSCI in his recent “Signature in the Cell” because to my best knowledge even here at UD only a minority of commenters (KF, Jerry) are using it while leading ID-theorists like Drs. Dembski and Behe never mentioned it." Who gives a rat's rear end if they use the same terminology. They are using the same ideas. The term was meant to show how CSI is used in life. If the terminology we use here is not adopted elsewhere but the same ideas are used, who really cares. It is the inane people who object to the term that show they have nothing of substance to bring to the argument that help make our case. They are like a whiney 6 year old that sticks his tongue out at you and then says, they are not using your words, nyah, nyah nyah nyah. A good comparison, anti ID people and whiney 6 year olds. Sometimes it is hard to tell the difference.jerry
January 24, 2010, 8:06 AM PDT
zeph, I haven't got time to answer your questions now. I will try to get some time later today or on Monday. Just a quick observation.

ID is not a science such as physics, thermodynamics, evolutionary biology, or plate tectonics, but rather a supplementary way of analyzing the same data from these various disciplines that has already been analyzed by other scientists. Is there a science of Intelligent Design? There might be in the future, similar to statistics. Statistics is not content-specific but is used in nearly every scientific discipline. ID uses statistics and other probability concepts to analyze data from various disciplines, and as such it is science as much as statistics is when applied to various sciences.

As for micro evolution: it definitely does happen. Just how much is a question. Dawkins's book gives some examples, but Dawkins claims a lot of things, and it is not clear if all he claims did happen, though they might have. So it is not an issue to fight; fighting it just makes ID look like a bunch of malcontents ready to fight anything and makes it less believable on the issues that matter. One way to fight them is to show how trivial they are.

On another site a couple weeks ago, one of the anti-ID people who has commented here in the past brought up teosinte and corn. Corn is a variety of teosinte that was artificially selected by the natives of the Americas from this wild plant, which is useless as a foodstuff. Nature had tens of millions of years to develop corn and didn't, while the local inhabitants of the Americas were able to do so in a short time. This person went to the wall to say that this is an example of evolution, when all it was is an example of artificial selection, like getting a better dairy cow. This person couldn't understand how he was undermining his cause by emphasizing such a trivial example. So Dawkins, by emphasizing artificial selection in his book, is actually admitting he has no argument for macro evolution. Otherwise he would go right to it and forget about artificial selection. That is one of the reasons why I recommend Dawkins's book. The other reason is that he has some very interesting things in it, but none threaten ID.

But artificial selection only allows one to extract what is in the gene pool, such as a better dairy cow, corn, or a Labrador retriever. A great book by Ray Bohlin, about the limits of biological change, is still in print somewhere and discusses this in a very scientific way.
http://www.amazon.com/Natural-Limits-Biological-Change/dp/0945241062
Here is a podcast by Bohlin from last year that I just found. He is now involved in religion but has a Ph.D. in microbiology.
http://www.podfeed.net/episode/The+Limits+to+Biological+Change+An+Interview+with+Ray+Bohlin/1390108
I have no idea what it contains since I just found it.

Dawkins's book is full of examples that might have happened in nature, and some actually did happen. It makes no sense to fight them, as they could have happened that way, and Dawkins shows examples of those that did. He has a section on evolution in our lifetime called "before our eyes." Dembski in one of his books gives an example or two.

I maintain, and this is just me, that micro evolution is great design. It is a way for a population to adapt to changing environments and is what a good designer would incorporate into a design. However, it has limits.
jerry
January 24, 2010, 7:52 AM PDT
349 jerry 01/23/2010 6:37 am
“If Meyer defines the term he should give credit to the guys mentioned above.”
Kairosfocus pointed out that someone used the term “specified information” or “specified functional information” in 1978 in talking about OOL. I believe it was Orgel. It’s on the web someplace.
My question was rather whether Dr. Meyer defined FCSI/FSCI in his recent “Signature in the Cell”, because to the best of my knowledge even here at UD only a minority of commenters (KF, Jerry) are using it, while leading ID theorists like Drs. Dembski and Behe never mentioned it.
osteonectin
January 23, 2010, 8:40 PM PDT
Zeph, as I said, you ask good questions and make interesting points, and I think I will enjoy your posts. :-)
tribune7
January 23, 2010, 5:21 PM PDT
Mr Zeph,
"Can one evolve a sorting algorithm which creates “mostly sorted” lists from random lists, using only randomization and selection for “more sorted”, but no computer science theory of sorting?"
Google "Hillis sorting networks" and you will see that this is an area where simulated evolution (co-evolution, actually) was used very fruitfully.
Nakashima
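[Editor's aside: for readers who want the flavor of such an experiment, here is a minimal Python sketch of my own. It is a toy, not Hillis's actual setup, which also co-evolved the test cases. Candidate "sorting networks" are just lists of compare-and-swap index pairs, mutated at random and selected only for producing "more sorted" output on random lists; no sorting theory is built in beyond that fitness measure. The parameter values are arbitrary.]

import random

LIST_LEN = 8        # length of the lists to be sorted
NET_LEN = 30        # compare-and-swap steps per candidate network
TESTS = 50          # random test lists per fitness evaluation
POP = 40            # population size
GENERATIONS = 300

def apply_network(net, data):
    """Run a network (a list of (i, j) index pairs) over a copy of data."""
    data = list(data)
    for i, j in net:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

def sortedness(data):
    """Fraction of adjacent pairs in order: 1.0 means fully sorted."""
    return sum(data[k] <= data[k + 1] for k in range(len(data) - 1)) / (len(data) - 1)

def fitness(net):
    """Average sortedness of the network's output on fresh random lists."""
    trials = ([random.random() for _ in range(LIST_LEN)] for _ in range(TESTS))
    return sum(sortedness(apply_network(net, t)) for t in trials) / TESTS

def random_comparator():
    i, j = sorted(random.sample(range(LIST_LEN), 2))
    return (i, j)

def mutate(net):
    """Copy a network and replace one randomly chosen comparator."""
    child = list(net)
    child[random.randrange(NET_LEN)] = random_comparator()
    return child

# Start from entirely random networks and select for "more sorted" output.
population = [[random_comparator() for _ in range(NET_LEN)] for _ in range(POP)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: best fitness ~ {fitness(ranked[0]):.3f}")
    survivors = ranked[: POP // 4]          # keep the "more sorted" quarter
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

print("example output:", apply_network(ranked[0], [round(random.random(), 2) for _ in range(LIST_LEN)]))

[In runs of this sketch the best fitness typically climbs well above the random-network starting point, though nothing guarantees a perfect network; Hillis's co-evolving test cases were added precisely to keep the networks honest.]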
January 23, 2010, 4:58 PM PDT
Mr Zeph, Welcome to the conversation.
"Literally one bit changing in a 10 megabyte program is fairly likely to break something, maybe major."
I would take issue with this. If you do some code analysis you'll see that a lot of bits are in data or in parts of the code that are rarely (if ever) executed. Both DNA and computer code have places where one bit change will be deadly and other places where wide variation and continued function are still possible.
Nakashima
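[Editor's aside: a quick way to see the "some bits are critical, some are not" point is to mutate a structured byte blob and count how often it still "works". The Python sketch below is my own toy illustration, with a made-up JSON config standing in for a program and a long inert string playing the role of rarely-executed or non-coding regions; it flips one random bit at a time and reports how often the result still parses.]

import json
import random

# A small, made-up config blob: the braces, quotes, and colons are structurally
# critical, while the long "notes" string is functionally inert padding.
original = json.dumps({
    "name": "face-recognizer",
    "threshold": 0.75,
    "notes": "x" * 200,
}).encode("ascii")

def flip_random_bit(data: bytes) -> bytes:
    """Return a copy of data with one randomly chosen bit inverted."""
    buf = bytearray(data)
    buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

trials = 10_000
survivors = 0
for _ in range(trials):
    try:
        json.loads(flip_random_bit(original))   # does it still "run"?
        survivors += 1
    except ValueError:                          # covers JSON and UTF-8 errors
        pass

print(f"{survivors / trials:.1%} of single-bit flips left the blob parseable")

[Most flips land in the padding and survive; flips that hit a brace, quote, or the number tend to break parsing immediately, which is the asymmetry described above.]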
January 23, 2010, 4:24 PM PDT
tribune7: I can't scientifically project to infinite time, because it's like dividing by zero and beyond science. But to get the gist: would a very, very, very long time suffice? In some cases, no. I believe that a given toolkit has inherent limitations; there is a finite number of ways the parts can be arranged, and only those arrangements are possible outcomes. More time doesn't change that.

The "toolkit" for the face-recognition GA is mostly various coefficients and constants that go into other software. This toolkit is by design limited mainly to image recognition. It doesn't have keyboard input or an internet link; those are outside its universe. (Remember, only the facial-recognition aspect is evolving, not the software which does the GA evolution itself.) I used this example only for the limited point that useful complexity can be created by non-sentient processes, not as tackling the Big Questions directly.

The building blocks or toolkit of life are DNA/RNA/proteins (and other chemicals). Anyone who has studied these is aware that they are astoundingly flexible building blocks, far, far more sophisticated than the face-recognition building blocks. That is, the range of patterns they can build is huge; take, for example, every organism that ever existed on Earth, in their full complexity. Those DNA/RNA/protein/etc. building blocks can even support intelligent life which can design things like genetic algorithms or have these discussions! Life is by far the most complex system of which we are aware, and it's built atop the most flexible toolkit we know of.

No GA experiment is going to rebut ID per se; at most it might weaken some of the arguments for it. Arguments for and against ID or Darwinism or string theory get weakened and strengthened all the time.

Again, you speak as if the chance variation were the key element, but it's not. Chance variation in a web browser's machine instructions will just mess it up. But the toolkit underlying it, e.g. the Intel Pentium instruction set, is not very suitable for evolution. Literally one bit changing in a 10-megabyte program is fairly likely to break something, maybe something major. By contrast, the genotype has a lot of redundancy and self-repair mechanisms that Intel didn't need to include in its design, because its design didn't need to self-replicate or evolve. To get evolvable systems, you have to create a virtual world of sorts within the software (or you could design hardware, but that's much more expensive!). I don't mean a 3D analog of this world; I mean something like a simulator for a much simpler computer than an Intel Pentium, whose instruction set is more adaptable. Yes, this is a design product of humans, but the point here is not to prove that no intelligence is needed to "set up the system"; it is the smaller one of seeing how much "new intelligence" can be created by mechanistic processes of variation and selection. Results would be suggestive, not definitive.

The key is not the randomness, it's the non-randomness: the selection process. Focusing on words like "random chance" is to greatly miss the point of the Darwinian evolution approach, which must be understood well before it can be countered. The question is not whether random variation could create a web browser; it's whether there are incremental selection forces (favoring becoming more browser-like) which can pull a useful "signal" out of the noise of limited random changes. In the case of a web browser, what would the criteria be for a "more usable" browser? Billions of people using billions of variant browsers and copying the best ones comes to mind, but the source code for browsers is not built on a very evolution-friendly toolkit like DNA/RNA/protein. Not likely to work in the real world. Alas, web browsers are too far out along a non-evolutionary flavor of complexity to be very relevant to this discussion. Except that if we found a web browser in nature, not created by humans, I'd be an instant convert to ID! [smile]
Zeph
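[Editor's aside: the "signal out of noise" point above is easy to demonstrate in miniature. The Python sketch below is my own toy comparison, not a model of biology: it contrasts drawing fresh random bit strings against cumulative selection of small mutations under a fixed scoring function. The target pattern is supplied by the programmer, which is exactly the "where does the selection criterion come from" question being debated in this thread; the sketch only shows that when an incremental criterion exists, selection accumulates matches far faster than blind sampling with the same evaluation budget.]

import random

GENOME_LEN = 64
TARGET = [1] * GENOME_LEN            # an arbitrary, pre-chosen "functional" pattern

def fitness(genome):
    """Number of positions agreeing with the target (0..GENOME_LEN)."""
    return sum(g == t for g, t in zip(genome, TARGET))

def random_genome():
    return [random.randrange(2) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(1)
BUDGET = 10_000                       # same number of evaluations for each strategy

# Strategy 1: pure random sampling, keep the best draw ever seen.
best_random = max(fitness(random_genome()) for _ in range(BUDGET))

# Strategy 2: cumulative selection, keep a mutant only if it is at least as fit.
current = random_genome()
for _ in range(BUDGET):
    child = mutate(current)
    if fitness(child) >= fitness(current):
        current = child

print(f"best of {BUDGET} random draws : {best_random} / {GENOME_LEN}")
print(f"after {BUDGET} selection steps: {fitness(current)} / {GENOME_LEN}")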
January 23, 2010, 2:29 PM PDT
Jerry,

Thanks for posting those links in #110; very interesting. You are too modest about your prose ability; you do explain your viewpoint well. I'm more intrigued than ever from your description. Alas, I haven't yet found a better blog post than this (sufficiently recent, sufficiently on topic, and with fewer comments) to which to attach these discussions; suggestions welcome.

Re your referenced notes, I'm a little puzzled why ID advocates would assume microevolution as a given, if the evidence for it is indeed so scant: https://uncommondescent.com/intelligent-design/ud-commenters-win-one-for-the-gipper/#comment-299358

Where I am finding the most difficulty with accepting ID as science is what comes across, to at least naive newcomers, as a double standard of evidence. Perusing here, I find many examples where Darwinists are challenged to come up with a detailed, non-speculative mechanism, deemed plausible by the challenger, for some attribute or change; absent that mechanism, the similarity of the results to what human design produces is considered per se evidence of ID. But it appears that ID proponents are free of any need to explain in any detail whatsoever, or provide even a hint of a plausible mechanism for, the infusion of intelligence into the system.

For example, is ID research compatible with an omnipotent and omniscient intelligence, or just human-type finite intelligence extrapolated a few centuries into the future? Or one could posit advanced alien biologists dropping in on Earth every few million years to infuse new genotypes into the ecosystem (the Cambrian field trip was a doozy), in which case, can we discern which pieces of DNA came in externally and when? Or perhaps there is some diffuse etheric force which subtly and non-materially biases supposedly random events towards directed goals over time, just shifting probabilities a bit over many centuries, without any new molecules being introduced (unlike, say, the alien spacecraft carrying interventionary biologists). Can ID shed any light on which of these radically different mechanisms is best supported by evidence?

It seems clear that until biology has a complete picture (which could be many centuries, if ever), the ID folks are going to win every debate in which they can just write "a miracle of intelligent origin happens here" atop any small or enormous gaps in their hypothesis, while the Darwinists are required to fully explicate their hypothesis in near-indisputable detail without major extrapolation. I don't yet see how one can have a scientific debate in the face of that much disparity in expectations.

It is a given that humans have large gaps in their knowledge about how life has come to be. Yes, as humans we tend to downplay that, but it's true. Take a gap in the fossil record. Darwinists are extrapolating from the pieces for which they do have decent mechanisms and the evidence they do have (e.g., before-and-after fossils) to fill in the gaps with "something similar but even larger in scope happened here". That is certainly not "proof" or incontrovertible evidence, and it's fair to search for more solid explanations, because there is sometimes a big extrapolation without enough evidence. But just saying "some unknown and undefined agent did something, of whose mechanism we have no clue or evidence, in order to create the later life forms during the time of the gap" does not seem to meet the criteria of better explaining it scientifically.

What is this alternative mechanism whose plausibility we can weigh against the Darwinist proposal, whose mathematical odds can be calculated to be better than those of mutation and selection? I cannot so far see how ID can compete as science (not as faith or philosophy) until it competes on level ground with other proposed scenarios. Plate tectonics ("continental drift") would never have won the geological world over if it had just said "some unexplained and mysterious force of which we have not the slightest understanding moved the continents apart"; nor would it have yielded any insights. Once there were plausible mechanisms which could be weighed against evidence, it came to be taken seriously.

ID claims only to be a new science without all the answers yet, but I'm looking for even a broad theoretical framework of HOW earth biology interventions happened, how many interventions there have been (or is it continuous?), the scope of the intervention (what was it capable of, and what was it not?), and such. Is there evidence of a single designer, or of multiple designers with different styles (human intelligent design is strongly marked by discernible "design styles")? Does ID have evidence that design has continued to occur in the last hundred years ("micro ID"), or does ID require deep time to operate ("macro ID"), or has it entirely stopped currently, perhaps to show up again in a million years? There are dozens more of these in my mind; is ID even beginning to form fuzzy shapes from the murkiness of an extremely vague "designer" and "design implementation mechanism"? I haven't found those here yet. Mendel was able to infer a lot about the structure of genetics before the detailed mechanisms were discovered (even if not entirely accurately). What has ID learned about the structure and nature of intelligent intervention in earthly life?

I'll give another example. It was factual that bats were able to somehow navigate in profound darkness. Imagine a group that hypothesized that their navigation was not unlike the guidance of spirit, and gathered evidence that the darkness of some caves was so deep that even "extremely light-sensitive eyes" were insufficient to explain it. The spiritual-guidance theory would not have gained much scientific credence until that group provided a plausible mechanism, or at least was able to measure and classify where bats were able to navigate and where they were not, and provide a "spiritual guidance" framework which explained those observations better than eyes did. They might, say, predict and later measure that bats were unable to navigate in the presence of brimstone or sulfur-bearing rocks, because these rocks had been shown elsewhere to interfere with spiritual guidance. Until then, all their evidence that "sensitive eyesight is not enough" was only evidence that conventional biology was still incomplete, not that bats got spiritual guidance from an unfathomable source. Of course, echolocation was discovered, and this gap in conventional biology was filled, albeit with some overturning of the previous consensus, because it had a testable mechanism to FILL the knowledge gap scientifically, not just point it out.

In addition to my interest in "truth" (whether or not it pleases me), I admit that I would truly love to see something solid from ID as a science. Why? Because it would make the universe more interesting to me. (I'd also like to see SETI find a signal.) It would open up some fascinating vistas.

However, my mild preference that "ID be validated" is smaller than my desire to avoid illusions, even comforting ones. So far, what I as a newcomer am finding is a scientific critique of the completeness of Darwinism (which seems very valuable, by the way, whether it eventually pushes Darwinists to expand and refine, or it overturns Darwinism), married to an apparent philosophical one-upsmanship where ID can win every debate because it doesn't have to come even close to the same standards of explication, mechanism, and evidence that it imposes on its opponents.

Imagine two kids fighting a battle in a virtual computer world, where one has to follow known physics and the other can invoke magic. One says, "I don the titanium powersuit whose 440 kg mass can be rapidly moved using power from twin 3 MJ batteries for 20 minutes." The magic user can just say, "An ineffable entity just teleported the sword of Glyndor into my hand, and this sword can slice through your reinforced titanium armor like butter." The magic user always wins, because they don't need to posit any plausible mechanism. However comfortable a related modality may be in philosophical debate, ID needs to transcend this "advantage" in the quest to become serious science.

AND - I'm new to this. Every school of thought can have brilliant advocates who expand potential human knowledge and, um, true believers who say less than wise or helpful things to support it (including Darwinism!). I'm still sifting the wheat from the chaff in regard to ID. I see some pro-ID advocates whose arguments appear fuzzy-minded, but those unfortunate camp followers should not deter our exploration of the more serious thinkers who are also evident, nor be held against them. (Again, this is true of both sides of many debates.) It's possible that the science of ID is in the process, as we speak, of drumming the "magic users" from its ranks, proving that it's a science with serious internal quality control and not a political coalition of convenience.
Zeph
January 23, 2010, 1:41 PM PDT
Zeph --
"Can one evolve a sorting algorithm which creates “mostly sorted” lists from random lists, using only randomization and selection for “more sorted”, but no computer science theory of sorting?"
You ask great questions. The selection criteria, and the actions based on them, would have had to have occurred by random events for this to be a rebuttal of ID. And we would even have to go deeper, in that it would not be lists but items that randomly formed lists. But assuming this came about, how long would it take for this algorithm to produce something useful?

Something else to consider: given infinity, would it be possible for the face-recognition software to evolve into a browser? There is a commonly held view that given infinity -- which of course we don't have with evolution -- anything is possible, but is it? Or consider this -- what if the code in the face-recognition software were randomly changed, akin to genetic mutations? Would it be more likely to evolve into a web browser or to become useless?

Since you're new, one thing to keep in mind is that ID is not anti-evolution. There is a common misconception that it is. Also, keep on the lookout for posts by Gil Dodgen. You and he seem to have similar interests.
tribune7
January 23, 2010, 1:18 PM PDT