Uncommon Descent Serving The Intelligent Design Community

Two forthcoming peer-reviewed pro-ID articles in the math/eng literature


The publications page at EvoInfo.org has just been updated. Two forthcoming peer-reviewed articles that Robert Marks and I wrote are now up online (both should be published later this year).*

——————————————————-

“Conservation of Information in Search: Measuring the Cost of Success”
William A. Dembski and Robert J. Marks II

Abstract: Conservation of information theorems indicate that any search algorithm performs on average as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Three measures to characterize the information required for successful search are (1) endogenous information, which measures the difficulty of finding a target using random search; (2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and (3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.

[ pdf draft ]
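For readers who want to see the three measures side by side, here is a minimal sketch based only on the definitions in the abstract (it is not code from the paper): endogenous information is the improbability of blind search succeeding, exogenous information is the improbability remaining for the assisted search, and active information is their difference. The numbers below are purely illustrative.

```python
import math

def endogenous_information(p_blind):
    # I_omega = -log2(p): difficulty of finding the target by unassisted (blind) search
    return -math.log2(p_blind)

def exogenous_information(q_assisted):
    # I_s = -log2(q): difficulty remaining once problem-specific information is exploited
    return -math.log2(q_assisted)

def active_information(p_blind, q_assisted):
    # I_plus = I_omega - I_s = log2(q/p): contribution of the problem-specific information
    return endogenous_information(p_blind) - exogenous_information(q_assisted)

# Toy numbers, purely for illustration: a target occupying 1 of 2**20 states,
# and a hypothetical assisted search that succeeds half the time.
p, q = 1 / 2**20, 0.5
print(endogenous_information(p))    # 20.0 bits
print(exogenous_information(q))     # 1.0 bit
print(active_information(p, q))     # 19.0 bits
```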

——————————————————-

“The Search for a Search: Measuring the Information Cost of Higher Level Search”
William A. Dembski and Robert J. Marks II

Abstract: Many searches are needle-in-the-haystack problems, looking for small targets in large spaces. In such cases, blind search stands no hope of success. Success, instead, requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches. (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially compared to the difficulty of the original search.

[ pdf draft ]
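As a rough feel for the first theorem (this simulation is only my sketch of the general No Free Lunch flavor, not the paper's proof or its formal setting): if an "assisted" search is nothing more than a randomly chosen ordering of queries, then averaged over such searches its success rate matches blind search without replacement.

```python
import random

def blind_success(space_size, queries):
    # Blind search without replacement on a single target: success probability = queries / space_size
    return queries / space_size

def average_random_search_success(space_size, queries, trials=50_000):
    # Average success rate of searches whose query order is picked at random
    target = 0
    hits = sum(target in random.sample(range(space_size), queries) for _ in range(trials))
    return hits / trials

N, k = 100, 10
print(blind_success(N, k))                   # 0.1
print(average_random_search_success(N, k))   # ~0.1, matching blind search on average
```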

—————

*For obvious reasons I’m not sharing the names of the publications until the articles are actually in print.

Comments
JayM 270, I thought I'd check back on this. You've missed my points completely and are reduced to repeating yourself. The only way you can continue your assertion of "uniform probability distributions are not valid for biology" is to ignore specific cases where they are and then refer back to cases where they are not. Humorously, you then say that I haven't addressed your points, yet I did not attempt to address them in the first place and took pains to explain why: they are valid within their limited scope, as I've already said multiple times! If you don't get it at this point I don't know what else to say. Others are understanding my meaning.

It's also obvious you haven't read No Free Lunch or other explanations of calculating CSI (or FCSI), since Dembski did exactly what you demanded in #255 and again in #270. You also made a gross error by referring to the entire genome instead of the specified sequence. So at this point I'm forced to conclude that you do not comprehend what you're attempting to criticize.

Now my copy of No Free Lunch is somewhere in storage. Dembski spent pages going over the flagellum, but the result was 10^-1170 or around 3800+ informational bits (no calculator on hand, so that number may be way off). As I said in #267, "the Explanatory Filter can take multiple types of inputs (which also makes it susceptible to GIGO and thus falsification). Two are (a) the encoded digital object and (b) hypothetical indirect pathways that lead to said objects." Now if MET mechanisms can indeed traverse an indirect pathway to the flagellum then Dembski's calculations amount to Garbage In. So, essentially, what you're demanding of Dembski is that he accept what is in question as already verified: to assume that an indirect pathway for every object exists when doing calculations. That premise would make Dembski's position untenable. But since no one can even come up with a potential indirect pathway, how can we calculate otherwise?

BTW, based upon the flagellum research of the time, Dembski ran his calculations based upon the assumed IC core of 30 proteins out of the total 42. It's possible that fewer are required for the IC core, but even if halved that would not substantially change the outcome, since it'd still be well over 500 informational bits. This is also why Dembski and Behe's positions are inter-dependent. As Behe acknowledged long ago, IC as an indicator for intelligent agency is only sustainable if all potential indirect pathways are infeasible, which is where Dembski's work comes in. And Behe is trying to research the limits of MET mechanisms, which would validate Dembski's assumption. But like you I prefer Behe's line of inquiry, since it's based on things more tangible and easier to comprehend.

On a side note, here's one foresighted mechanism. And the specific ID-compatible hypotheses I was referring to were variants of front-loading and other hypotheses about foresighted mechanisms (viral, epigenetics, etc.). I cannot help it if you are ignorant about the full breadth of ID thought, but please do not attempt to argue with me on this basis alone.Patrick
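For what it's worth, the conversion Patrick is recalling from memory can be checked directly: taking the quoted figure of 10^-1170 at face value, -log2(10^-1170) = 1170 x log2(10), or roughly 3887 bits, so "3800+" is in the right neighborhood. A one-liner (the 10^-1170 value is simply the number quoted above, not re-derived here):

```python
import math

# Convert the probability quoted above (10**-1170) into informational bits:
# bits = -log2(10**-1170) = 1170 * log2(10)
bits = 1170 * math.log2(10)
print(round(bits, 1))  # 3886.7
```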
February 5, 2009 at 8:42 AM PDT
Jerry [273] "I just pointed out that it is hard to use an example of one variant of a species morphing into another variant of the same species as proof of Darwinian macroevolution." Questions of species don't change the fact that three single point mutations caused gross changes in the body plan of teosinte AND a new tissue in the covering of the kernels. Do micro evolution three times and presto, you've got macro evolution. "...no one here said teosinte came from corn when it was obvious that it is the reverse." I misinterpreted your statement in [269]: "The example of teosinte to corn is an example of devolution since the gene pool of teosinte probably contains the gene pool of corn except for a mutation or two." But, since this is now the topic of discussion, it would be begging the question to discuss it.djmullen
February 4, 2009 at 3:28 AM PDT
rna, These are interesting examples and should be brought up at the appropriate time in the new Durston thread. If you do not, then I will. It could mean that I have misinterpreted what Durston has said, always a possibility, or there could be some other explanation. Read Durston's comments at https://uncommondescent.com/intelligent-design/mathematically-defining-functional-information-in-biology/#comment-303307 Thank you for the information.jerry
February 3, 2009 at 10:10 AM PDT
Jerry @271
Maybe this will come out in Kirk Durston’s new thread when it appears but just what are the implications if the distribution is not uniform. Is it a big deal or is it closer to a nit that someone throws out to divert the discussion away from the obvious by posing a pseudo objection?
Excellent question. This should get the conversation back on topic. If the probability distribution of solutions generated by MET mechanisms is not uniform then the NFL theorems and Dr. Dembski's two new papers are not applicable, as written, to biological systems. In fact, MET predicts and observations confirm that the solutions generated by MET mechanisms do not form a uniform distribution, very far from it in fact. Further, the search space itself is not uniform -- there are clusters of viability. Given that, Dr. Dembski's papers cannot be said to be "pro ID" without further work to tie his results specifically to real world biological systems. The uniform probability distribution is also important in the calculation of CSI, but the discussion above has already explained that. JJJayM
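A toy numerical illustration of the distinction being drawn here (purely a sketch; the sequence length and error rate below are made-up parameters, not a model of any real organism): a copy-with-rare-errors process keeps offspring clustered tightly around the parent sequence, whereas sampling the sequence space uniformly scatters them across it.

```python
import random

ALPHABET = "ACGT"
L = 1000      # hypothetical sequence length
MU = 1e-3     # hypothetical per-site copying-error rate

def hamming(a, b):
    # Number of positions at which two equal-length sequences differ
    return sum(x != y for x, y in zip(a, b))

parent = [random.choice(ALPHABET) for _ in range(L)]

# Offspring under copy-with-rare-errors: almost identical to the parent
child = [random.choice(ALPHABET) if random.random() < MU else site for site in parent]

# A sequence drawn uniformly from the whole space: unrelated to the parent
uniform_draw = [random.choice(ALPHABET) for _ in range(L)]

print(hamming(parent, child))         # usually 0-2 differences out of 1000 sites
print(hamming(parent, uniform_draw))  # typically ~750 differences (about 3/4 of sites)
```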
February 3, 2009 at 9:56 AM PDT
Jerry #271: "A more interesting question seems to be can any change or any distribution of changes ever lead to a protein reaching one of these other isolated protein islands of foldable proteins?" This has been investigated and there are cases where very few changes are necessary to connect two different islands of foldable proteins. One reported artificial example for a small protein (56 amino acids) showed that only seven mutations were necessary to reach a different stable fold and a new functionality (Proc. Natl. Acad. Sci. USA, Vol. 104, No. 29, pages 11963-11968, Orban, Bryan & coworkers). An even more extreme example is the so-called Arc repressor protein, where two mutations are necessary to change the fold and a single mutation results in a protein existing in an equilibrium between two different folds (Sauer & coworkers, Nature Structural Biology 2000, Vol. 7, pages 1129-1132). For folds of RNA there is one example where an RNA can adopt two totally different folds associated with two totally different functions. Thus, the islands of possible protein folds are not necessarily isolated. Furthermore, one should not forget that not all functional proteins must be folded to function; many fold only upon interaction with a ligand. This means that even partially unfolded proteins might not be non-functional. The cited papers should be open access since they are more than a year old.rna
February 3, 2009 at 8:47 AM PDT
djmullen, I am stopping this conversation. Some others can waste their time with you. You do not know what ID is about and seem only to impose your own ill-conceived perceptions. So you can have the last word till you start dealing with reality and what people say.

For example, I know all the problems with the definition of species you bring up; they have been discussed here before, and I have read about the problem in pro-Darwinian and anti-ID books as well as in pro-ID books and discussions, so it is a non-issue. Wolves and dogs theoretically parted millions of years ago but still remain the same species by biological textbooks. My example of the Chihuahua and the wolf was meant to point out the absurdity of your example. I just pointed out that it is hard to use an example of one variant of a species morphing into another variant of the same species as proof of Darwinian macroevolution. Especially when it looks like the mechanism is probably artificial selection and not natural selection. And if it was by natural selection, ID accepts that example with open arms. So what point are you making by first choosing it and then defending it? And you defend it badly, because no one here said teosinte came from corn when it was obvious that it is the reverse. It seems the indigenous people of the Americas started the process several thousand years ago, probably first by accident and then maybe they got smart.

So what is your point? Your discussion seems pointless. Don't bring up trivial examples that ID is wholly in sync with and expect us to take you seriously. Nothing you said so far has any merit except maybe in your own mind. Adios.jerry
February 3, 2009 at 7:14 AM PDT
jerry [269] I see you've found some of the problems you run into when trying to use the species concept in an evolved world. "Species" is an idea straight from the ancient Babylonian world and its Old Testament variations. An Intelligent Designer God designs and builds animals and plants separately, they grow separately, and you expect to always be able to clearly separate different species from each other. But in an evolved world, a single population starts to split and you have two populations that share some attributes and not others, and you have real trouble separating one from the other. Hence we have the problem of teosinte and corn and wolves and chihuahuas. Are they a single species or two?

Scientists wrestled with this idea for quite a long while before finally, rather grudgingly, settling on the ability to cross-fertilize as the definition of species. Our official, man-made definition of species is any group of animals that can mate and produce fertile offspring. Under evolutionary conditions, this marks the point where some key genes have diverged enough so they don't match up when mated and the offspring either dies or some key part of its reproduction process is inoperative. It works in the sense that from that point on the two populations are forever separated and will always evolve separately, but, as you've shown, that definition has a lot of problems. Another problem is that it's useless with non-sexually reproducing organisms, and that includes the vast majority of all living creatures, which are single-celled. You and I, for instance, have more bacterial cells in us (mostly in the digestive tract) than human cells.

Under evolution, there's just no hard and fast division between two populations that are evolving apart short of reproductive incompatibility, even though one is very much needed. For instance, what do you call it when two plants have 99.9999+% identical genomes, but one is a clump of grass and one is corn? Or one is a chihuahua and one is a wolf? You can say they're both the same species, but if you plant teosinte instead of corn, your family is going to starve, and if a chihuahua goes for your throat it's very annoying, but if a wolf goes for your throat, you're dead. I guess one way to know if ID or evolution is right is to ask how easy it is to separate species, and so far the data points entirely towards evolution.

I'm afraid teosinte is NOT mutated corn. We have archeological specimens of teosinte plants and pollen going back a long, long way, but corn doesn't even start to appear until about 9,000 years ago, and it's a very crude, teosinte-like version of corn. Incidentally, all of the early differences are due to natural selection. Humans changed the teosinte environment in a big way when they started eating it and planting it. Once people started saving their best plants for seed and eating the lesser plants, it put natural selection on steroids and the plant transformed from teosinte, a barely edible grass, into corn, a highly desirable and nutritious plant to humans, in a few thousand years.

"When we refer to no natural examples of FCSI ever being formed by nature we do not include life since life is the topic under discussion." My mind just boggles at this. When we refer to no natural examples of FCSI ever being formed by nature, we do not include life, although life is a part of nature, since life is the topic under discussion. Since the question is, "Does life produce FCSI?" you seem to have constructed an impregnable position.
A veritable Maginot line of logic. "... the changes we have seen are all trivial in terms of genomic change and no complex new functional capabilities have ever been demonstrated." It's certainly true that the changes we see in teosinte to corn are "trivial in terms of genomic change". There are about 2 billion DNA base-pairs in the corn genome and if we have three point mutations, that's only 3/2,000,000,000ths of the corn genome. And yet just those three changes make the plant taller and narrower and concentrate the ears near the stalk, the ears get larger and the coating on the kernels gets softer. "Trivial" changes in the genome can obviously make very significant changes in the phenotype, and it's the phenotype that feeds you or rips your throat out. As for "no complex new functional capabilities have ever been demonstrated", I'd say that just those three mutations produce functional changes that are complex enough to turn a barely edible weed into something that can start to feed the entire New World. How many base-pairs do you think mutated to turn teosinte into corn? I'm betting it's way under a thousand, which, as you say, is "trivial in terms of genomic change".

I'm glad you brought up bat sonar. I read something about that a few years ago. This is from memory and I don't have a citation handy, but since it's the topic under discussion I don't see why I have to provide one. Don't want to beg the question, after all. For starters, you and I can already do a crude form of echolocation. If you shout, "Hello!" and hear a fainter "Hello!" a few seconds later, you know you're some distance from a large reflecting object, such as a cliff. If you say, "Hi!" and hear an almost immediate series of echoes, you're in a gymnasium or concert-hall-sized room. If you say, "Hi!" and instantly get a load of echoes in return, you're in a small room. If you hear a lot of high frequencies in the echo, the surfaces are hard; if not, they're soft. You can tell direction pretty well too, by using the already existing ability of your ears and brain to tell what direction a sound is coming from. Blind people get very good at this, by the way.

Now if you're a small flying mammal hunting insects in dim light, what kind of changes can help you find more insects via echolocation? Well, for starters you can shout, "Hi!" louder. This will detect insects farther away. You can make your shouts shorter, too, which makes your distance measurements more accurate. Or, like some bats, you can "chirp," where you send out a loud cry that sweeps from a low to a high frequency. That helps you sort out the mass of echoes that come back to you. If you hear low and high sounds, you'll know that the low-pitched sounds are coming from a greater distance away because they were uttered first. Better ears will help too. If the shape of your outer ears changes, they can make sounds coming from different directions have phase relationships that increase your ability to tell direction. Your face can change too, to make those phase relationships clearer. (That's why bats tend to have faces like gargoyles - it helps them determine directions.) If you develop a really, really loud chirp or yelp, you can have a problem with your voice damaging your hearing. (I've heard some bats have such loud calls that they can stun insects with them.) Contracting muscles in the middle ear while you shout can clamp the auditory bones, protecting them from damage. I'm sure there are many more ways to improve echolocation.
Now can you tell us which, if any, of these changes are impossible to produce? And remember before answering that all of them and more exist in contemporary bats and other echo-locators.djmullen
February 2, 2009 at 11:38 PM PDT
Maybe this will come out in Kirk Durston's new thread when it appears but just what are the implications if the distribution is not uniform. Is it a big deal or is it closer to a nit that someone throws out to divert the discussion away from the obvious by posing a pseudo objection? If one had 60 seconds to explain the implications of the likely distribution to the President of the US, what would that conversation sound like? Is it a big ho hum or does it have serious implications for anything? A more interesting question seems to be can any change or any distribution of changes ever lead to a protein reaching one of these other isolated protein islands of foldable proteins?jerry
February 2, 2009 at 10:11 PM PDT
Patrick @267
JayM 261 First You only succeed in continuing the "non-uniform argument" by changing the metric to things that seem nonsensical. You might as well have said, "I ignore what you just said, I ignore the pointed questions, and I prefer to focus on aspects of the discussion that are already acknowledged as being valid."
Frankly, I could level the same charge against you, with far more supporting evidence. You have responded with great verbosity, yet have not even begun to address a single one of my arguments. Allow me to demonstrate:
Now back to the main disagreement. Which is that uniform probability distributions do not apply to biology at ALL in regards to MET. In fact, I would expect non-uniformity to be more complete in its breadth if some ID-compatible hypotheses are true.
Why? What specific hypotheses do you mean?
But to be fair we’re assuming MET and Darwinism as a starting point in evaluating its claims.
My understanding as well is that they are equally likely, but MET mechanisms do not create genomes from scratch so the assumption of a uniform probability distribution across the entire genome is invalid.
I’m struggling to even comprehend your objection. Non-foresighted variation is produced by copying errors. How does the mere act of replication cause these copying errors at any particular point(s) in the genome to become unequally probable/non-uniform?
MET mechanisms do not assert that every location in a genome is likely to change, as would be required by a uniform probability distribution. In fact, MET mechanisms are very conservative. Genomes change very, very little from generation to generation. Further, only those changes that are viable are preserved and only those that are viable and result in surviving offspring are propagated. This is far from a uniform distribution in genome space.
Depending on the form of variation, and the error correction (human versus bacterium) they’re certainly unequal in size or scope, but that does not change this observation.
What observation? The fact is that MET mechanisms result in child populations differing very, very little from their parent populations in terms of allele frequency. There is not a uniform distribution of potential changes.
And at this point natural selection (the localized search you’re so focused on) is irrelevant since it’s after the fact and may not even be a component at all in calculations.
I have no idea what point you're trying to make here. Natural selection operates on the non-uniform distribution of changes, probabilistically preserving those with greater fitness. This mechanism is completely at odds with the assumption of a uniform probability distribution of solutions.
Now if we have foresighted variation that seems to be outside of the bounds of MET (and said existence was predicted by ID proponents). Although some insist that the non-foresighted mechanisms of MET somehow produce the foresighted mechanisms we’ve observed.
What foresighted mechanisms have been observed? I have seen nothing in the ID or peer-reviewed mainstream literature to suggest that foresight has been rigorously documented.
This is why Dr. Dembski’s papers, however interesting, do not appear to be directly supportive of ID theory.
And the ID theory of Dembski and Marks conflicts with micro-evolution how?
Not at all. In fact, the papers do not describe any "ID theory" at all and do not tie their assumptions and conclusions to any real or hypothesized biological mechanisms.
Again, I’d rather Dembski and Marks make the connection to biology specific but my assumption is that their focus is narrow and they’re largely constraining their argument to specific cases relevant to (a) macro-evolution via the unguided processes of MET and (b) OOL.
However, they do not do this. Their arguments do not address the claims of MET at all, as far as I can see. That's why these two papers cannot be said to support ID as they stand.
This would still not be a uniform probability distribution, because the probabilities of small changes given MET mechanisms is much greater than that of large changes.
You’re changing the metric again.
No, I am not. A uniform probability distribution requires that all changes be equally likely so that subsequent populations are exploring the whole genome space. That is not the claim of MET and is not what is observed in real biological systems. Any mathematics that presumes a uniform probability distribution, as do the NFL theorems, is not applicable to biological systems.
Yes, small changes are more probable.
Thank you. You have admitted that MET mechanisms do not result in the uniform probability distribution required by the two papers under discussion.
But your point is only relevant to the localized search. What exactly about the OCCURRENCE of unguided variation makes small or large changes to any specific location of the genome unequally probable?
We're not talking about "unguided variation" in general, we're talking about MET mechanisms in particular. They do not result in uniform probability distributions of genomes within genome space.
Here’s the point related to biology. I forget the informational bits for the flagellum as calculated by Dembski (isn’t that in No Free Lunch…it’s been years since I read it?). But if anyone submits a reasonable hypothesis for an indirect pathway composed of steps which are less than 500 informational bits EACH then Dembski’s design inference for that particular biological object is rendered invalid.
This is "ID of the gaps." We need to prove that it couldn't happen, not sit back and challenge scientists to prove that it could. MET mechanisms demonstrably generate complexity by funneling information from the environment to the genomes of subsequent populations. So far, evolutionary biologists haven't demonstrated something as complex as a flagellum evolving, but they have shown the evolution of some structures and pathways. Until we show real limits to those mechanisms, the idea that many small steps result in a large journey is at least credible.
In addition, the calculation of CSI uses the length of the genome as a factor in determining how "specified" a sequence is. That is not logical given the small changes possible with MET mechanisms.
The calculation of CSI in biology does??
Yes.
Can you be more specific in what you mean (or provide a reference)? Or did you mean the "length[complexity] of the specified sequence"?
Exactly.
Dembski’s specification is very general in its application. Which leads to the problem of subjective specifications. Art is subjective and requires an intelligent agent to identify it (although I suppose it can be argued that realist landscapes can be objectively specified on a comparative basis). But biology contains objective specifications independent of the observer based upon the machine functionality. That’s why people refer to FCSI in regards to biology.
None of this addresses the problem I pointed out, namely that the CSI calculation assumes creation from scratch, which is not the claim of MET. Further, people refer to FCSI, but no one in the ID camp has yet demonstrated how to calculate it objectively for real world biological organisms or structures.
Viable regions are clearly clustered or you and I wouldn’t be here. We each differ from both of our parents, and yet we live. That demonstrates that there are multiple connected points in genome space that are viable. The question, of course, is how large these regions are and how connected.
You make my point for me, which is all I meant by saying "'can' be clustered".
If they are clustered, then MET mechanisms can easily keep subsequent populations in the "sweet spot", so all the calculations that assume a uniform distribution of solutions in the genome space are not applicable. If we want to make ID theory truly scientific, we have to address what is known and provide better predictions of what is not known. That means finding where the real "edge of evolution" lies, not attempting to apply invalid assumptions like uniform probability distributions to observed phenomena to which they clearly do not apply. JJJayM
February 2, 2009 at 5:54 PM PDT
"But teosinte into corn?" Poor choice since both are the same species. Just as a Chihuahuas and a wolves are the same species. You are confusing physical difference with macro evolution of complex new functional capabilities which all take place at the genome level. It is also not clear which differences between teosinte and corn are due to artificial selection and which to natural selection. The example of teosinte to corn is an example of devolution since the gene pool of teosinte probably contains the gene pool of corn except for a mutation or two. It is unlikely you could get teosinte from corn but it may happen and would be an interesting experiment. When we refer to no natural examples of FCSI ever being formed by nature we do not include life since life is the topic under discussion. Let me lay out the logic of the argument for you. Excluding life there is no example of FCSI forming by law or chance. That is nature does not create FCSI. You can not point to life because that is begging the question. Intelligent activity produces FCSI all the time. Now if one set of causes has not shown the capability of producing the outcome while another set of causes does it all the time it seem reasonable to say that the reason for the FCSI could be due to the activity that does it all the time and not to the one that has never done it. Notice I did not say absolutely only that it is reasonable that it originated this way. One cannot do it and another can. So which one would you include as a potential cause. Think how unreasonable it would be to say that the process that has never done it is the only that can be considered as the cause while the process that does it all the time cannot be considered. How does one justify that logic? Now once FCSI is there and there are mechanisms for changing it, then one has to look at these mechanism and see what power they might have to create new FCSI through this change mechanism. So ID does not deny that these change mechanisms may generate new FCSI. It is just that the changes we have seen are all trivial in terms of genomic change and no complex new functional capabilities have ever been demonstrated. In other words not only does the gene pool have to be significantly increased but this increase along with other elements that were there previously must coordinate a system with new capabilities that did not exist before. Think bat sonar, insect wings, avian oxygen processing system for some starters not a a morphing of teosinte to corn or a wolf to a Chihuahua.jerry
February 2, 2009 at 11:42 AM PDT
JayM [231] "Your overall message, while somewhat brutally stated..." "Not all ID proponents are young earth creationists." I have to apologize to you and the others who have pointed this out. I didn't mean to say that only Young Earth Creationists are clinging to the old "God in Six Days" theory or that all IDists are also YECs, although I gather that most of them are. Put it down to writing using stolen moments on a netbook with a tiny screen. What I really meant is even more brutal: ID is the only part of the Old Testament theory that has even a faint patina of intellectual respectability today.

The "God in Six Days Theory" incorporated a young earth, simultaneous creation of plants and animals, a separate creation of man and an intelligent creator. The young earth was dead long before "Origin of Species" was published. If I remember right, Reverend William Buckland, Oxford professor of geology and fervent Christian believer, finally gave up and admitted that Genesis could not be reconciled with geology circa 1835. The reverend spent decades trying to reconcile the two; he was a very strong believer, very intelligent and very knowledgeable about both the Bible and geology, and his pronouncement pretty well killed the young earth part of the theory. Simultaneous creation of plants and animals and separate creation of man were lost later, though I'd have to dig out the books to tell you when and how.

The only thing left of the Old Theory is an intelligent being doing the creating. This idea is still very faintly respectable in that you can say that God is using evolution to do the heavy lifting and only giving it occasional tweaks, or that God put enough info into the system around the Origin of Life to do the guiding. Front loading has lost almost all respectability with the discovery that life is billions of years old and that information that's not preserved by natural selection doesn't last nearly that long. The divine tweaking of evolution hasn't exactly been disproven yet, but if God is doing that, he's making it look just like evolution. Anyway, that's my reading of the present state of the "God in Six Days" theory.

jerry [234] "No natural process has ever been seen creating specification..." Look at my examples of teosinte to corn evolution. Natural processes - point mutations - change the branch structure, make the ears bigger and move the ears closer to the stem. Three mutations, three specifications. Natural processes DO create specification.

CJYman - Forget all the math coming from Dembski and Marks. They are making an elementary error. They are trying to prove that it's impossible to land in the target area of the genome (what I've been calling the "sweet spot") from the outside. This target area is the part of all possible genomes which contains DNA strings that will construct and operate an organism that is both alive and successfully reproducing. IF YOU'RE NOT IN THE SWEET SPOT, YOU'RE EITHER DEAD OR STERILE! Dembski and Marks are saying that if you're dead or sterile, you're not going to evolve into life and I have to agree with them there, but it's a complete non sequitur, because nobody outside of ID is saying that anything evolves into a part of the genome that supports life. Instead, the First Living Thing was in some not-too-improbable portion of the sweet spot and all of that organism's descendants have stayed in it. All of this NFLT and active information and what-not are just wrong. Embarrassingly wrong.
Paul Giem [236] "It is true that we haven’t found one, but to assume that the Garden of Eden, or even part of it, could survive the Flood seems to be stretching it." We have found traces of some things that existed prior to The Flood and after it. Egypt, for instance. Those Egyptians were building the pyramids during the time the flood is supposed to happen. The world wide Flood is another Babylonian myth that has been disproven. As for the fossils, it took about a century or more just to get believers to admit that yes, they really were the remains of extinct animals. Even the concept of extinction was hard to believe. Stupid design is stupid if you or I or some other mere human, can think of a better way to do it. There's a nerve in the giraffe that goes from the brain all the way down that long neck, loops around a bone and then travels all the way back up that long neck to the voice box, which is situated just below the brain. I would have run it direct. I think you would have too, along with just about everybody else. We're intelligent enough to do that. Presumably, an intelligent designer would have done the same. But not mindless Darwinian evolution - it's not smart enough to break the nerve and rejoin it on the right side of that bone, so as the neck lengthened, the nerve lengthened twice as much. Dumb for an intelligent designer, but about what you'd expect to see from time to time with mindless evolution. (Referring to the point mutation that invigorates Apolipoprotein AI milano) "...this misses the point. If one is given wild-type protein, one can get to milano protein with one mutation." "Now if you can create the wild-type protein with only a series of beneficial single mutations from some other protein, we will be quite impressed." Okay, look at the teosinte-corn evolution example I gave in [230]. Changing one gene makes the ears grow bigger and grow on short branches. Changing a second gene makes the kernels grow more rows of seeds. A third mutation softens the hard kernel casings, making them easier to eat. Three mutations, which occurred over a period of a few thousand years, made the ears larger, restricted them to growing near the stem and softened them to make them more edible. More mutations eventually turned teosinte, which looks like a clump of grass and whose ears are tiny, hard to harvest, rock hard and barely edible into all the varieties of modern corn. Better yet, this evolutionary event is under widespread study today. We have the original plant (it still grows wild in Mexico), we have archeological samples from various times during its evolution (don't know if we can get DNA from them, though) and we have all the modern corn plants. Even better, a method has been discovered that helps find the mutations that caused these changes. There are several labs hard at work finding and reporting these mutations and we will soon have a road map showing how teosinte became corn and approximately when and in what order the mutations occurred that did it. We're making new tissues here! And new traits! And we're going way beyond Behe's so-called "edge of evolution". And all in under ten thousand years! You wonder where the wild-type proteins came from? Well, with about three and a half billion years of evolution and over 500 million years of multi-cell evolution, I don't think there's any real mystery of how they were produced. We don't know the details, but we know the process and we have an example of its extreme power to create new forms of life. 
Micro-evolution becomes macro-evolution. You just have to let it run long enough and your clump of grass becomes corn. gpuccio [238]: (Regarding Apolipoprotein A-1 milano) "...its protective effect in vivo is usually assumed, but not really proved..." The mutant protein was discovered when people noticed that the descendants of an 1800's couple didn't have heart attacks. This happy condition stuck out like a sore thumb and sent medical people looking for the reason. A-1 milano turned out to be the reason. Yes, the original protein also removes cholesterol. But A-1 milano has a much stronger effect, strong enough to prevent heart attacks. So far as I know, no adverse affects have been seen and the family has been studied since the '80s. CJYman [240] "All you need is a specified pattern (many biological patterns are specified by functional events), an understanding of the probabilistic resources available (max. number of bit operations available to life on earth based on mutation rates), an approximation of specified patterns of same length (research in progress), and the measurement of “shannon information” (provided by Hubert Yockey and others). There is no assumption of anything to measure for a specification." The "measurement" comes when the new organism tries to make a living with its new DNA. If the organism lives and successfully reproduces, it passes, if not it dies and we don't worry about it until the next time this particular mutation occurs. This process is normally called "natural selection". jerry [243] "ID accepts everything that could be classified as micro evolution and your examples were within micro evolution." "Certainly new functions and capabilities have developed but while important for the species, medicine or other areas they are trivial for evolutionary biology which must explain how the very complicated information network in a genome arose and changed over time to specify these novel complex functional capabilities. As an aside, no one has been able to explain it to us here or in any written medium that we are aware of." What kind of explanation would you be looking for? A step by step, mutation by mutation explanation of how every base-pair in modern DNA came to be, starting with the first living organism? I think you're going to be pretty safe from challenges like that. But teosinte into corn? Ten years from now, we should be able to hand you a list of every mutation that converted the former into the latter and the approximate order they occurred in. Will that be enough? That's a pretty big change, after all. You basically can't eat teosinte without crushing the kernels with a rock to get at the edible innards. The organization of the plants is quite different - corn is tall and narrow with the ears near the shaft, teosinte is branched and the tiny ears are scattered all over the place. We've got different tissues being created, too. Teosinte has a hard kernel, corn has a very soft coating - two different tissues. Why is this not macro-evolution? And if they're not, could you please tell us what IS macro-evolution? gpuccio [249] "3) As the current explanations for that CSI in biological information by non design models are completely unsatisfying and empirically inacceptable" Translation: You don't believe CSI can be manufactured by non-intelligent processes. But that's just a belief. I believe differently. What facts can you cite to support your belief? What would it take to convince you that CSI could be manufactured non-intelligently? 
How many base-pairs do we have to manufacture to convince you?djmullen
February 2, 2009 at 4:19 AM PDT
gpuccio
And it is obvious, at least to me, that in the second case there are all the reasons to assume (and maintain) the hypothesis of a quasi-uniform distribution.
Exactly. We're only considering specific cases, not MET as a whole (since it obviously succeeds in certain cases) or all of biology in general.

JayM 261 First You only succeed in continuing the "non-uniform argument" by changing the metric to things that seem nonsensical. You might as well have said, "I ignore what you just said, I ignore the pointed questions, and I prefer to focus on aspects of the discussion that are already acknowledged as being valid." Like:
It is impossible to get out of the “sweet spot” using MET mechanisms, because only offspring in that region survive to form the next generation. This is why I think this is such a rich area for ID research. It’s a clear prediction of the limitations of MET mechanisms. If the islands of viability are not connected, all the way back to the origin of life 3.8 billion years ago, MET doesn’t explain the evidence.
Agreed 100%. You say that "I am starting to feel a bit like a Behe-ist in a camp of Dembski-ists". And personally I prefer Behe's approach since it's grounded in real observations and Darwinists cannot retreat into hypothetical la-la land. But for the sake of readers I do not let this argument rest since Dembski does have a good point as well. Now back to the main disagreement. Which is that uniform probability distributions do not apply to biology at ALL in regards to MET. In fact, I would expect non-uniformity to be more complete in its breadth if some ID-compatible hypotheses are true. But to be fair we're assuming MET and Darwinism as a starting point in evaluating its claims.
My understanding as well is that they are equally likely, but MET mechanisms do not create genomes from scratch so the assumption of a uniform probability distribution across the entire genome is invalid.
I'm struggling to even comprehend your objection. Non-foresighted variation is produced by copying errors. How does the mere act of replication cause these copying errors at any particular point(s) in the genome to become unequally probable/non-uniform? Depending on the form of variation, and the error correction (human versus bacterium) they're certainly unequal in size or scope, but that does not change this observation. And at this point natural selection (the localized search you're so focused on) is irrelevant since it's after the fact and may not even be a component at all in calculations. Now if we have foresighted variation that seems to be outside of the bounds of MET (and said existence was predicted by ID proponents). Although some insist that the non-foresighted mechanisms of MET somehow produce the foresighted mechanisms we've observed. Whatever. In any case, these foresighted mechanisms have only been observed in limited instances, but as I've already noted if they operate in all aspects of biology that would support ID, not MET. You then restate the basis of MET in regards to micro-evolution which I fully comprehend. I'm NOT attacking this as I already stated in the 2nd paragraph of comment #258.
This is why Dr. Dembski’s papers, however interesting, do not appear to be directly supportive of ID theory.
And the ID theory of Dembski and Marks conflicts with micro-evolution how? Again, I'd rather Dembski and Marks make the connection to biology specific but my assumption is that their focus is narrow and they're largely constraining their argument to specific cases relevant to (a) macro-evolution via the unguided processes of MET and (b) OOL.
This would still not be a uniform probability distribution, because the probabilities of small changes given MET mechanisms is much greater than that of large changes.
You're changing the metric again. Yes, small changes are more probable. But your point is only relevant to the localized search. What exactly about the OCCURRENCE of unguided variation makes small or large changes to any specific location of the genome unequally probable?

The indirect pathway is presumed to be composed of small changes since that is what is required by MET in order to reach long-range targets. And each step is considered independently in reference to the EF. (I've directly queried Bill about this so I'm pretty certain I'm correct in saying the following unless I've misunderstood somewhere along the line.) For a specific example, the Explanatory Filter can take multiple types of inputs (which also makes it susceptible to GIGO and thus falsification). Two are (a) the encoded digital object and (b) hypothetical indirect pathways that lead to said objects.

My name "Patrick" is 56 informational bits as an object. My name can be generated via an indirect pathway in a GA. An indirect pathway in a word-generating GA is likely composed of steps ranging from 8 to 24 informational bits. Let's say you take this same GA and have it tackle a word like "Pseudopseudohypoparathyroidism", which is 30 letters and 240 informational bits. It can be broken down into functional components like "pseudo" (48 informational bits) and "hypo" (32 informational bits). Start with "thyroid" (56 informational bits). For this example I'm not going to check if these are actual words, but add "ism", then "para", and then "hypo". "hypoparathyroidism" is a functional intermediate in the pathway. The next step is "pseudohypoparathyroidism", which adds 48 informational bits. Then one more duplication of "pseudo" for the target. That may be doable for this GA, but what about "Pneumonoultramicroscopicsilicovolcanoconiosis" (360 informational bits) or, better yet since it's more relevant to Dembski's work (UPB), the word "Lopadotemakhoselakhogaleokranioleipsanodrimhypotrimmatosilphiokarabomelitokatakekhymenokikhlepikossyphophattoperisteralektryonoptokephalliokigklopeleiolagōiosiraiobaphētraganopterýgōn" (1464 informational bits)? I'm not going to even try and look for functional intermediates.

Here's the point related to biology. I forget the informational bits for the flagellum as calculated by Dembski (isn't that in No Free Lunch...it's been years since I read it?). But if anyone submits a reasonable hypothesis for an indirect pathway composed of steps which are less than 500 informational bits EACH then Dembski's design inference for that particular biological object is rendered invalid. I challenged Khan (previously linked on this page) based upon generous assumptions to name ANY single step in the pathway when you start from the T3SS as a functional intermediate and he could not do it, nor can anyone else I know of. There might be a hypothetical indirect pathway, but it seems likely to consist of steps with more than 500 informational bits, based upon observation.
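The bit counts above appear to assume 8 bits per character (7 letters x 8 = 56 bits for "Patrick", 30 letters x 8 = 240 bits, and so on). A quick sketch of that bookkeeping, with the caveat that a 26-letter alphabet would give roughly 4.7 bits per letter instead:

```python
import math

BITS_PER_CHAR = 8  # the per-character figure the numbers above imply

def info_bits(word, bits_per_char=BITS_PER_CHAR):
    # Informational bits of a word under a flat bits-per-character accounting
    return len(word) * bits_per_char

print(info_bits("Patrick"))                         # 56
print(info_bits("Pseudopseudohypoparathyroidism"))  # 240
# The step from "hypoparathyroidism" to "pseudohypoparathyroidism" adds 48 bits:
print(info_bits("pseudohypoparathyroidism") - info_bits("hypoparathyroidism"))  # 48

# Under a 26-letter alphabet the per-letter cost would be log2(26) instead:
print(round(math.log2(26), 2))  # 4.7
```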
In addition, the calculation of CSI uses the length of the genome as a factor in determining how “specified” a sequence is. That is not logical given the small changes possible with MET mechanisms.
The calculation of CSI in biology does?? Can you be more specific in what you mean (or provide a reference)? Or did you mean the "length[complexity] of the specified sequence"? Dembski's specification is very general in its application. Which leads to the problem of subjective specifications. Art is subjective and requires an intelligent agent to identify it (although I suppose it can be argued that realist landscapes can be objectively specified on a comparative basis). But biology contains objective specifications independent of the observer based upon the machine functionality. That's why people refer to FCSI in regards to biology.
Viable regions are clearly clustered or you and I wouldn’t be here. We each differ from both of our parents, and yet we live. That demonstrates that there are multiple connected points in genome space that are viable. The question, of course, is how large these regions are and how connected.
You make my point for me, which is all I meant by saying "'can' be clustered".Patrick
February 1, 2009 at 10:48 AM PDT
JayM, it's a big tent :-)tribune7
February 1, 2009 at 6:28 AM PDT
Adel DiBagno @264 Thank you for the kind words. I've been trying hard not to let my personal frustration with some aspects of the ID movement affect my presentation. These ideas are important. I am starting to feel a bit like a Behe-ist in a camp of Dembski-ists, so I doubly appreciate your note. We're not big enough to schism yet! ;-) JJJayM
February 1, 2009 at 6:09 AM PDT
JayM: This thread is winding down, but I just want to put on the record that I have enjoyed your dispassionate and well-reasoned comments. You clearly understand our opponents better than many of us. I look forward to learning more from you. (As on the Durston thread.) ADBAdel DiBagno
February 1, 2009 at 5:44 AM PDT
UpRight Biped @262
My understanding as well is that they are equally likely, but MET mechanisms do not create genomes from scratch
Uhm, yes they do.
No, they create them from existing genomes by making typically very small changes (although some mechanisms like gene duplication or frame shifts are larger).
It is impossible to get out of the “sweet spot” using MET mechanisms
Then it should occur to you that there is no way to get into the “sweet spot” with them either.
That doesn't follow. MET is not a theory about the origin of life, but about how it diversifies once it exists. MET mechanisms operate on high-fidelity replicators. Those replicators are already viable, by definition. MET mechanisms just keep subsequent generations in the viable region, whatever that may be. If we want ID theory to be taken seriously, we need to address MET as it exists. Assumptions about uniform distributions and searches from scratch don't meet that criterion and make it easy for ID opponents to dismiss our arguments. JJJayM
January 31, 2009 at 1:39 PM PDT
JayM
My understanding as well is that they are equally likely, but MET mechanisms do not create genomes from scratch
Uhm, yes they do. The core of MET is chance and mechanical necessity. NS is a condition of the two; without them there would be no NS, but without NS, chance and law would remain. And by the way, this forms the foundation of all approved thought on the matter. - - - - - - - - You make this comment just mere inches from where I suggested that MET is "a theory of Life allowed the luxury of not having to address itself to how the conditions of the theory got started to begin with." So, the argument can go on forever, given that nothing of the core claim need be addressed - but taken for granted instead. But even so, maybe headway is being made:
It is impossible to get out of the “sweet spot” using MET mechanisms
Then it should occur to you that there is no way to get into the "sweet spot" with them either.Upright BiPed
January 31, 2009 at 1:04 PM PDT
Patrick @258
Why a uniform probability distribution? That isn’t applicable to biological systems.
Depends on the metric, does it not? If you’re talking about the information encoding an object, then in this case we’re dealing with nucleotides and as far as I’m aware all 4 options (T, A, C or G ) are equally likely within biology.
My understanding as well is that they are equally likely, but MET mechanisms do not create genomes from scratch so the assumption of a uniform probability distribution across the entire genome is invalid. MET mechanisms start from a known viable state and explore nearby regions in genome space around that known viable point. Very few changes are made and those that are made are small. Only viable offspring, by definition, survive to become part of the next generation of the population. This is about as far from a uniform probability distribution across the genome as it is possible to get. This is why Dr. Dembski's papers, however interesting, do not appear to be directly supportive of ID theory.
This can be extended to measuring JUST the steps in a pathway, since each step in the pathway consists of changes to nucleotides and need not be in reference to the entire search space.
This would still not be a uniform probability distribution, because the probabilities of small changes given MET mechanisms is much greater than that of large changes. In addition, the calculation of CSI uses the length of the genome as a factor in determining how "specified" a sequence is. That is not logical given the small changes possible with MET mechanisms.
It’s true that viable regions in the overall search space "can" be clustered, but that objection seems only relevant to natural selection and micro-evolution (local searches within a cluster of viable regions).
Viable regions are clearly clustered or you and I wouldn't be here. We each differ from both of our parents, and yet we live. That demonstrates that there are multiple connected points in genome space that are viable. The question, of course, is how large these regions are and how connected.
As you say:
MET mechanisms demonstrably channel information from the environment (simulated or natural) to the populations that are evolving. The question isn’t whether this happens, but how far it can go.
And when you get out into this purported void of deleterious and neutral spots in-between functional regions are they not all equally likely?
It is impossible to get out of the "sweet spot" using MET mechanisms, because only offspring in that region survive to form the next generation. This is why I think this is such a rich area for ID research. It's a clear prediction of the limitations of MET mechanisms. If the islands of viability are not connected, all the way back to the origin of life 3.8 billion years ago, MET doesn't explain the evidence. JJJayM
January 31, 2009 at 7:18 AM PDT
Patrick: Some sense at last! Thank you. Sometime, somewhere, we will have to discuss more in detail the fundamental difference between calculating a search space for the emergence of a new protein from scratch and calculating the search space for a random transition without any selection of the intermediates: I don't know why anybody in the other field seems convinced that the second thing cannot be done. And it is obvious, at least to me, that in the second case there are all the reasons to assume (and maintain) the hypothesis of a quasi-uniform distribution. But again, that's not a discussion which can be made here, at 259+.gpuccio
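One way to read gpuccio's point, sketched numerically (this is my reading, with illustrative numbers, not his calculation): if intermediates are not selected, hitting a specified set of nucleotide positions reduces to a pure-chance draw, and the quasi-uniform treatment prices each specified position at log2(4) = 2 bits.

```python
import math

def random_transition_bits(n_specified_sites, alphabet_size=4):
    # Bits required to hit n specified nucleotide positions by pure chance,
    # with no selection of intermediates (quasi-uniform treatment)
    return n_specified_sites * math.log2(alphabet_size)

for n in (5, 35, 150):  # illustrative transition sizes, not taken from any real protein
    # positions, bits, and the corresponding pure-chance probability
    # e.g. 35 positions -> 70 bits, probability ~8.5e-22
    print(n, random_transition_bits(n), 4.0 ** -n)
```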
January 31, 2009 at 12:46 AM PDT
Patrick:
And I think a lot of people are talking past each other in this thread since (A) we know micro-evolution via unguided processes works and (B) the real problem for MET is macro-evolution (never mind OOL, etc). So some people seem to be needlessly defending A while what ID proponents are focused on is B.
A willful (and well-fed) remnant, no doubt, of a theory of Life allowed the luxury of not having to address itself to how the conditions of the theory got started to begin with. A theory entitled to intellectual dogma without ever having to square up with the growing (not diminishing) mountain of evidence against its broadest claims. Special Theory = Yes, General Theory = No.Upright BiPed
January 30, 2009 at 8:34 PM PDT
Why a uniform probability distribution? That isn’t applicable to biological systems.
Depends on the metric, does it not? If you're talking about the information encoding an object, then in this case we're dealing with nucleotides, and as far as I'm aware all 4 options (T, A, C, or G) are equally likely within biology. This can be extended to measuring JUST the steps in a pathway, since each step in the pathway consists of changes to nucleotides and need not be in reference to the entire search space. I'm no mathematician, but when you're talking about a long indirect pathway where the target is inactive for natural selection, should not all functional intermediates (and that's assuming they exist for ALL long-range targets) be considered equally likely in reference to the goal? Even within a viable cluster there are still a very large number of neutral and deleterious spots, and in regard to JUST random variation (not natural selection, foresighted variation, or some limited forms of variation [mutational hotspots]) I have to ask what makes things unequally probable/non-uniform?

But you're focused on the overall search space and localized/clustered viable spots. It's true that viable regions in the overall search space "can" be clustered, but that objection seems only relevant to natural selection and micro-evolution (local searches within a cluster of viable regions). And I think a lot of people are talking past each other in this thread since (A) we know micro-evolution via unguided processes works and (B) the real problem for MET is macro-evolution (never mind OOL, etc). So some people seem to be needlessly defending A while what ID proponents are focused on is B. The major point of contention is (a) whether macro-evolution involves searching for isolated clusters and (b) whether in nature there exist fitness functions for all targets (aka how do you get squirrels to die often enough that the search is funneled toward becoming a flying squirrel).

As you say: "MET mechanisms demonstrably channel information from the environment (simulated or natural) to the populations that are evolving. The question isn't whether this happens, but how far it can go." And when you get out into this purported void of deleterious and neutral spots in-between functional regions, are they not all equally likely?

I'm not an expert, so someone please correct me if I'm wrong in my comment. In any case, I'm hesitant to try to connect Dembski and Marks's work to biology overall. I'd like to see them make the connection explicit, since I get the feeling they're focused on particular scenarios which are the difficult cases, and not on the general case (aka obviously they're not trying to argue that unguided processes do not work at all!).Patrick
January 30, 2009 at 6:08 PM PDT
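For what it is worth, the arithmetic behind Patrick's "measure just the steps" suggestion can be sketched as follows (a back-of-the-envelope Python calculation under the uniform-nucleotide assumption he states; the pathway model here is a simplification for illustration, not anyone's published method):

import math

def bits_for_sequence(n):
    # -log2 of (1/4)**n: the improbability of one specific n-base sequence
    # when every base is one of four equally likely options.
    return 2 * n

def bits_for_pathway(n, k):
    # -log2 of the chance of one specific series of k point substitutions in a
    # genome of length n, each step a blind choice of one position and one of
    # the 3 alternative bases.
    return k * math.log2(3 * n)

print(bits_for_sequence(1000))            # 2000 bits for a specific 1000-base sequence
print(round(bits_for_pathway(1000, 10)))  # about 115 bits for one specific 10-step pathway

The numbers only illustrate the difference between scoring a whole sequence "from scratch" and scoring a specific series of steps; which of the two is the relevant calculation is exactly what is being debated in this thread.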
R0b: "I wish you would. How else can I correct my misunderstandings?" I am really tired of this thread. Too many things have been said, and too many misunderstandings have arisen. I have continued till now not to let some points completely truncated, and out of respect for some of the participants, like Mark and yourself. I have written # 249, which is truly long even for my standards, because I thought I owed you some clarifications, in name of our past constructive discussions. And as you may know, with all the huge respect and admiration I have for Dembski, and especially because of that, I usually don't comment on direct observations about his work. I stand for my points, and take full responsibility for them, and that's all.gpuccio
January 30, 2009 at 3:13 PM PDT
gpuccio:
I think that in your post you have also seriously misunderstood Dembski, but I will not comment on that.
I wish you would. How else can I correct my misunderstandings?
Thank you for the kind appreciation. But you probably know that what originates in a way can be greatly transformed through multiple passages. That’s also the basis for very popular games, I believe.
Indeed it is. My comments came out much more accusatory than I intended, and I apologize for that. I just wanted to make sure that JayM didn't get stuck with a strawman charge.R0b
January 30, 2009 at 2:42 PM PDT
CJYman @246
For my more detailed explanation of “specification” and an actual measurement, check out “Specifications (part 1), what exactly are they?” in the top left margin on my blog.
Your blog post seems to be aligned with the standard ID definitions. Unfortunately, I still don't see where you have shown how to calculate CSI for any real system. You also didn't address the questions I asked: Why a uniform probability distribution? That isn't applicable to biological systems. What exactly does "able to be formulated as an independent event" mean? How could this be done for even a simple biological system?

There seem to be two pervasive points of confusion in discussions of CSI. The first is that the word "specification" itself suggests that one and only one solution need be considered. The fact that a particular gene codes for a protein that is part of a flagellum, for example, doesn't make it specified a priori. In a different environment, that protein might be part of something else. In the same environment, a different protein might do an equal or better job. The second point of confusion is that the specification is computed using the length of the genome or protein, with the implicit assumption that all genomes of that length are equally probable. That's simply not the case in biological systems. This is the uniform probability distribution assumption sneaking in again.

Are you aware of anyone ever computing the CSI of a real biological component and publishing a detailed, step-by-step description of how he or she arrived at a specific value in specific units? I've looked for something like that since first reading about CSI, with no luck. I strongly suspect, in fact, that if two different people were to undertake this task for the same biological construct, their answers would be different. Unless and until CSI can be rigorously calculated, it can't be considered as evidence for design. In a very real sense, unless it can be calculated it has no actual meaning. JJJayM
January 30, 2009 at 2:35 PM PDT
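A small illustration of the distribution-dependence JayM describes: the same hypothetical 100-residue protein gets a different -log2(probability) figure depending on the assumed chance hypothesis. The non-uniform frequencies below are made up purely for illustration:

import math

sequence_length = 100   # residues in a hypothetical protein

# Uniform model: each residue is one of 20 equally likely amino acids.
bits_uniform = sequence_length * math.log2(20)

# Non-uniform model: suppose, purely for illustration, that the residues which
# actually occur each have a background frequency of 0.08 rather than 0.05.
bits_nonuniform = sequence_length * -math.log2(0.08)

print(f"uniform model:     {bits_uniform:.1f} bits")    # about 432 bits
print(f"non-uniform model: {bits_nonuniform:.1f} bits") # about 364 bits

Different chance hypotheses give different numbers, so any published CSI figure would have to state, and justify, the probability model used.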
CJYman:
Furthermore, it seems that once Dembski began to formalize the concept of active information, Wolpert fell silent.
Wolpert addressed Dembski's work only once that I know of (correct me if I'm wrong), in 2002. Dembski started to formalize active information in 2006. Why would you connect Dembski's active info work with Wolpert "falling silent"? That's a rhetorical question. You seem to be simply repeating what Dembski said in an interview.R0b
January 30, 2009 at 2:26 PM PDT
CJYman @245
The fitness functions used by Tierra and the group evolving antenna designs at MIT have no knowledge of the solution nor of what the optimum performance will be (in the case of Tierra, the concept of optimal performance is not even applicable).
They don’t know what the exact solution will look like, however, they program the search space constraints and the relevant math used to determine what the constraints of the design will be.
They specify a fitness function that permits genomes to be ranked. This is a very simple simulation of the far more complex time varying fitness function we call "the real world."
IOW, they know the parameters which need to be provided by them (the intelligent designers) and the program which will work with those parameters in order to solve a specific problem.
That's not particularly accurate in the case of antenna design and not at all accurate in the case of Tierra, which is measuring replicability. The fitness function is just a simulation of natural selection. The criteria used within it may have meaning outside the simulation, in the case of antenna design, but the interesting thing about GAs is that they show that even simple simulations of MET mechanisms can transfer information from the environment (the fitness function) to the population over generations.
If they had no idea they were looking for an antenna which was optimized according to their provided criteria, would the EA produce an optimized antenna which is exactly what they are looking for?
The GA will produce populations that have successively greater average fitness, as measured by the fitness function. The fact that the fitness function provides value outside the simulation, in the case of antenna design although not of Tierra, is immaterial. This is completely analogous to a superset of the very same mechanisms producing successively more fit populations as measured by the ability to survive and reproduce in the natural world. JJJayM
January 30, 2009 at 2:15 PM PDT
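A minimal Python sketch of the point JayM is making (this is neither Tierra nor the antenna GA, just a toy with arbitrary parameters): the fitness function below only scores candidates and contains no description of any particular genome, yet selection plus small random changes raises the population's average fitness generation after generation:

import random

GENOME_BITS = 50
POP_SIZE = 100
GENERATIONS = 60

def fitness(genome):
    # The "environment": it rewards set bits but names no particular genome.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Small random changes: each bit flips independently with low probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives; the rest are mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("average fitness:", sum(map(fitness, population)) / POP_SIZE, "out of", GENOME_BITS)

Whether this kind of information transfer from a fitness function to a population scales to the cases ID proponents care about is, of course, the disputed question.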
R0b: "gpuccio is the origin of what you called a straw-man variant" Thank you for the kind appreciation. But you probably know that what originates in a way can be greatly transformed through multiple passages. That's also the basis for very popular games, I believe.gpuccio
January 30, 2009 at 2:14 PM PDT
Mark: I hope that my previous post answers your # 247, at least from my point of view. I know it is a little bit long anyway, so don't feel obliged to read it... And I think you can reach CJYman's blog by clicking on his username.gpuccio
January 30, 2009 at 2:10 PM PDT
CJYman:
I was actually not responding to gpuccio re: specification not being an example of circular reasoning. I was responding to JayM comment #232.
Yes, and JayM was responding to djmullen [229] who was responding to gpuccio [226]. gpuccio is the origin of what you called a straw-man variant.
Yes, because it would take some type of intelligence to “cheat” in that way.
Yes, that's the idea, but gpuccio seems to disagree, as he's looking for a "new specification".
However, that is based on a pre-specification as opposed to a specification.
I've never actually seen any example that meets Dembski's definition of a "prespecification", but that's neither here nor there.R0b
January 30, 2009 at 2:09 PM PDT
R0b: I think there is some misunderstanding here. Is it possible to ask that affirmations be read in the context in which they are made? The paragraph about specification which you cite is taken from my response to Mark, and is intended to explain why, in my view, the ID arguments are not only "negative", but do contain a very positive element in the concept of specification. Now, first of all, I was not giving an operational definition of specification for application in design detection. That was rather a reflection on the philosophical meaning of the theory. And obviously, it was my personal reflection, and did not implicate anyone else, least of all Dembski.

I'll try to explain my personal thought better. For me, design as we see it in human artifacts is "always" the product of a conscious, intelligent process. That process can be observed both objectively and subjectively (by the designer). I think that should be simple. Human design comes from a conscious, intelligent process. Or do we disagree even about that? At this point, I wouldn't be surprised by anything. So, when I say that "specification comes from the conscious representation of an intelligent, conscious being", I am referring to what we observe in human design. Probably, I could have been more analytical, and should have said: "I call specification the conscious representation, meaning and purpose projected by humans into an artifact in the process of design".

Let's go on. The next step is to understand how we can recognize that specification in the artifact. Here, we have to give "operational" definitions of specification; in other words, we have to define which formal properties, or aspects, of an object can be considered a sign of the design process in human artifacts. That is a vast field of discussion, and I don't want to discuss it here. We have discussed this many times, and many times with you. I will just remind you that I often give a very restricted, but effective (IMO) definition of a subset of specification, which is the functionality of a digital string of information. We could discuss the definition of functionality, but again, to avoid that we can focus on specific examples which can be more easily understood and analyzed, like human-made computer software, where the function can often be easily defined in a specific context. The concept that any function has to be defined in a context remains, for me, of fundamental importance.

So, to sum up: if we have a piece of software which performs a well-definable function (for instance, an ordering algorithm) in a digital environment, then we can see that function as a specification of that piece of digital information. Now, if we know for certain who wrote that software, it is easy to say that the specification (the function) is the product of a conscious intelligent process (the designer's intention, programming, implementation, and so on). So, it is easy to say: we objectively see here a specification (a function), and we know that it was produced by a conscious process of design by this designer. But if we see the object, and recognize a function in it, and assume a specification and therefore a conscious process of design, without having direct evidence of it, are we right? Here comes the quantitative aspect of complexity, necessary for design detection. Remember, we are still speaking of design detection in supposed human artifacts here.
And the complexity has essentially one role: to rule out possible configurations of the digital information we are observing which, while suggesting an intentional function and therefore a process of design, could instead be the product of chance. That's what I called "pseudo-specifications". A high complexity (improbability) of the digital, functionally specified sequence we observe is a safeguard against the possibility that the observed function may be there as a product of chance: in other words, that it may be a pseudo-specification, a function which is there, but has never been designed by any conscious agent. You may like it or not, but that improbability of the observed function (or of any other kind of specification) is exactly the main tool of the empirical science of design detection in human artifacts, as Dembski has repeatedly discussed in his works. The concept of CSI is just a way to define a threshold of complexity which can give us an "operational" definition of specified complexity, not very sensitive, but specific enough to empirically avoid any false positive. The practical application of all that is the EF.

All the above is about human design, and nothing else. Now, let's extend the reasoning. Once we have given some operational definition of CSI to be used as a threshold in the EF, we can observe that:

1) CSI, as defined in our context, can easily be described in human artifacts, but not in all of them. As repeatedly stated, there are many designed artifacts which are not complex at all, but are designed just the same. These are the false negatives. Paul's example of the signal by the hanging lanterns is a perfect model of that. Obviously, if the artifact is not complex enough, we can identify it as designed only if we have direct information about the designer and/or the design process. But just by observation of the artifact, we cannot be reasonably sure that it is designed.

2) CSI cannot be found in natural objects, with only one known exception:

3) CSI is easily described in biological information, and especially in biological digital information (genome and proteome).

Up to now, there is no assumption, no inference, no theory. We have just described what we observe, and given some operative definitions which can be applied to the observation and measurement of certain observed facts. Obviously, anybody can disagree with the descriptions and the definitions, especially with the operational definition of CSI. We all know well those controversies, and this post in no way pretends to analyze them here.

But now comes the true, the only important inference in ID theory, which is more or less the following:

1) As we know that CSI is a reliable sign of human design,

2) As we can observe CSI in biological digital information (genomes and proteomes),

3) As the current explanations for that CSI in biological information by non-design models are completely unsatisfying and empirically unacceptable,

4) We make the inference that the observed CSI, and therefore the specification, observed in biological information could be explained as the product of a process similar to the process of human design, in other words as the product of a conscious intelligent process, implying intention, programming, implementation.

That inference is the basis for an explanatory theory, which we call ID, and for all possible further research or discussion about that theory.
Now, it's not my intention here to defend all the steps of that reasoning from the inevitable attacks, more or less civil, which will follow here: not in this thread, which is already 239 posts long, and not with the general atmosphere which has been predominating in the last part of it. I have made the above reasoning only for two reasons: a) to clarify my thought better; b) to show that the final inference in 4) can be liked or disliked, accepted or ferociously refuted, but is not circular.

Finally, I owe you a last clarification about the pi example. You say: "According to his logic, we should not infer design from an extraterrestrial signal that contains the digits of pi, since there is "no new specification" in it." I have never implied that. My example meant only the following:

a) We have a piece of software designed to output the digits of pi.

b) After some time of calculation, the program outputs the first 10^200 digits of pi. That output is CSI: it is specified (the digits of pi are a mathematical object which cannot be found in nature), and it is complex enough. If I received a signal of that type, I would infer CSI and design.

c) Now the program goes on calculating, and after some time it outputs the first 10^300 digits of pi. What has happened? If we calculate quantitatively the CSI in the second output, the complexity has greatly increased: the result is much more improbable than the previous one, and so we would say that the computer program has created new CSI.

Now, my point was simply that the program has created new complexity, but under the same specification (the digits of pi). Indeed, it has only applied the same algorithm repetitively. That is evidence, IMO, that it is the specification, and not the complexity, which is the true distinctive product of conscious design.

A final disclaimer, which seems to be necessary at this point: all the above is only my personal view. I think that in your post you have also seriously misunderstood Dembski, but I will not comment on that.gpuccio
January 30, 2009 at 1:59 PM PDT
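The arithmetic in gpuccio's pi example can be sketched as follows (under the chance hypothesis of uniformly random decimal digits; this only illustrates how the complexity figure scales, it is not a full CSI calculation):

import math

for exp in (200, 300):
    n = 10**exp
    # Probability of one specific string of n digits under the uniform-digit
    # hypothesis is 10**(-n), i.e. n * log2(10) bits of improbability.
    bits = n * math.log2(10)
    print(f"first 10^{exp} digits of pi: about {bits:.2e} bits, same single specification")

The complexity grows without bound as the program keeps printing digits, while the specification ("the digits of pi") never changes, which is the asymmetry gpuccio is pointing to.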