Uncommon Descent Serving The Intelligent Design Community

Mathematically Defining Functional Information In Biology


Lecture by Kirk Durston,  Biophysics PhD candidate, University of Guelph

[youtube XWi9TMwPthE nolink]

Click here to read the Szostak paper referred to in the video.

 HT to UD subscriber bornagain77 for the video and the link to the paper.
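For reference, the Szostak paper linked above defines the functional information of a system as I(Ex) = −log2 F(Ex), where F(Ex) is the fraction of all possible configurations (e.g., protein sequences) that achieve the function at activity level Ex or better. A minimal sketch of the calculation follows; the sequence counts are made-up round numbers purely for illustration, not figures from the paper:

```python
import math

def functional_information(n_functional, n_total):
    """Szostak's functional information: I(Ex) = -log2(F(Ex)),
    where F(Ex) is the fraction of all n_total possible
    configurations that meet the functional threshold Ex."""
    fraction = n_functional / n_total
    return -math.log2(fraction)

# Hypothetical example: a 10-residue peptide (20**10 possible
# sequences) in which 1e6 sequences meet the activity threshold.
total = 20 ** 10
functional = 10 ** 6
print(round(functional_information(functional, total), 1))  # → 23.3 bits
```

In applications to real proteins, F(Ex) cannot be counted directly and must be estimated (for instance, from sequence alignments), which is where much of the methodological debate in the comments below lies.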

Comments
jerry[319], Yes, "behind my back" was a joke, of course. Yes, I find the discussion interesting, for the most part. No, I do not have such secret beliefs but I try to take your arguments at face value. As for my criticism, I agree that it is mostly "negative" but when claims are made by ID supporters that certain mathematical and statistical arguments discredit evolutionary theory, and I find these arguments erroneous or unconvincing, I comment. Just like a lot of ID activity is directed at criticizing evolutionary biology. We're all chipping in here and there, on both sides. I'm up for a couple of cold ones any time. You know where to find me, deep in the heart of Texas! Prof_P.Olofsson
JayM[318], Yes, there are some gently mocking comments but they are intended to be friendly. I do not intend to ridicule anybody or anybody's beliefs. Prof_P.Olofsson
Prof_P.Olofsson, you said "I didn't know this thread was still going on. Are you talking about me behind my back?" No, what I said about you in my comment I had already said to you last week, and I knew full well that you might still see it. The comment list on the main page is not a good indication of what has been said, as it moves so fast. You also must remember the direct comment to you last week: "I do detect a small lack of constructive criticism on your part. Yours is one of pointing out the flaws in others' arguments, which is well and good, but I do not see any attempt at helping others solve the problem other than your criticism. For example, can the problems you raised in the past about the flagellum be solved or ameliorated somewhat if there were probability estimates of the number of potentially functional proteins from the totality of possible proteins? Now I do not know enough about the technicalities of either probability theory or the behavior of random polymers of amino acids to make any intelligent assessment, but I bet that there are some that do. By the way, I have hardly read all your comments, so my assessment could be quite wrong. I was only using the sampling of the ones I have read to make my judgment, and like any statistical analysis there is a potential error. The sampling also indicates a cordial and generally nice person." Further sampling has not changed the assessment. And we chatted quite nicely after that. All that I am pointing out is that you have indicated a lot of procedural dents in the argument but never touch the substance, which is that the naturalistic processes need a lot of incredibly improbable events to take place in order to be a legitimate explanation. If you can show why some of the arguments used to mathematically assess that proposition are not quite kosher, you have still not necessarily undermined the substance, as I said.
Instead of going for the substance and cutting off the life of the argument before it gets anywhere, you instead nibble at the periphery. All I am doing is pointing that out. If I had the time to do the math again, then I might be more effective at suggesting workarounds to your objections. Workarounds which you might be able to suggest. As I said before, I believe you know some workarounds; otherwise you might jump in, show it couldn't be done, and actually prove that our assumptions are fallacious. If you did that you would be a hero to the anti-ID establishment, because not only would you vanquish the "know-nothings" but actually produce a statistical analysis that would stand as basic naturalistic dogma. But I suspect you understand the implications of what is said here, and for whatever reasons you have chosen to essentially stand on the side and shoot arrows once in a while. If we didn't discuss ID, I am certain we could have a good time over a couple of cold ones. So this thread is about dead, but I assume you will come back to check whether I answered you. By the way, I do not believe you are here to mock us. Otherwise you are doing a poor job of it. If anything, I am mocking some of the people here by pointing out the inanity of their arguments. My guess is you find some of this interesting, or why waste your time? I believe you secretly believe we have something. jerry
Prof_P.Olofsson @317
He's here to mock us.
No, that is certainly not true.
I will take your word for that (sincerely). I thoroughly enjoy your posts here, although I will say that I do interpret some of your statements as being said in a (gently) mocking tone. Your posts show good humor, and a good sense of humor, regardless.
The amount of personal attacks on me far exceeds the few times I strike back, and I try to do so in good humor, which may perhaps be described as mockery.
As someone who usually watches from the peanut gallery, I agree that you take far more than you dish out.
As for substance, I think it’s a big mistake for ID supporters to accept every argument that is presented in favor of ID and dismiss any counterargument.
Unfortunately, I agree with your assessment here, as well. Scientific research is an exciting endeavor that leads to learning. Sometimes you learn things that contradict what you thought you knew before. If ID positions can't change in the face of evidence and if ID proponents aren't willing to follow all the evidence where it leads, the ID movement will stagnate and die. JJ JayM
JayM[313],
He’s here to mock us.
No, that is certainly not true. I like most of the people here, and in my entire contribution to ID, I think you will find very little mockery. Just consider this fact: Davescot (!) started a thread titled "Some thanks for Professor Olofsson" a while ago, in regards to an article I wrote. The amount of personal attacks on me far exceeds the few times I strike back, and I try to do so in good humor, which may perhaps be described as mockery. I'm nicer than PZ, aren't I? As for substance, I think it's a big mistake for ID supporters to accept every argument that is presented in favor of ID and dismiss any counterargument. In this thread, and its continuation, we have discussed inference based on probabilities. I have some knowledge in this field, and it is clear that the statement Kirk made in the video is not consistent with his explanations thus far in this thread. It has nothing to do with ID as such, which is why I presented the Sally Clark example in post 221. The only person in the ID camp who even tries to understand my arguments is the always nice and friendly tribune7. Prof_P.Olofsson
jerry[311]. I didn't know this thread was still going on. Are you talking about me behind my back? Anyway, you say:
I haven’t seen anything in any of the people mentioned that has substance.
Come on, that wasn't nice! Nothing of substance? Not even in 185? Come on, please, please, acknowledge that there is a smidgen of substance in it! Otherwise I will call you a creationist nut! ;) Cheers! Olofsson, the Diverter and Confuser Prof_P.Olofsson
Adel DiBagno @314
Science is a nasty, mean, ugly (hat tip to Arlo) process that relies on a very aggressive marketplace of ideas.
Only to the unprepared. Otherwise, it’s remarkably collegial.
Perhaps I was overstating the adversarial nature a tiny bit for dramatic effect.... JJ JayM
JayM:
Science is a nasty, mean, ugly (hat tip to Arlo) process that relies on a very aggressive marketplace of ideas.
Only to the unprepared. Otherwise, it's remarkably collegial. (At least in my humble experience.) Adel DiBagno
jerry @311
I have seen nothing in Mark Frank’s, R0b’s, or certainly Prof_P.Olofsson’s comments that warrant either of the criticisms you’ve leveled. Science is about the details and these individuals are politely yet firmly holding our feet to the fire to provide those details.
I haven’t seen anything in any of the people mentioned that has substance.
I think you need to look harder. They are carefully following the discussion and they are spending significant time analyzing our arguments and criticizing them. Are they doing so because they oppose ID? Of course! And, that's fantastic! Science is a nasty, mean, ugly (hat tip to Arlo) process that relies on a very aggressive marketplace of ideas. This adversarial approach improves good ideas and kills bad ones quickly. The gentlemen I mentioned are showing ID a great deal of respect by participating here. They're showing that it is worthy of consideration.
The nice and very civil professor throws some procedural road blocks but that is it. He has not offered anything that would undermine the substance of the argument.
That's simply not the case. He has pointed out holes in the arguments he has addressed and identified problems with at least the way certain mathematical ideas are being expressed. That's of enormous value. If ID proponents can't answer his polite questions or address his calmly presented counter arguments, there is no way we're ready to deal with the mainstream scientific community.
Nor has he offered how best to get around the procedural hurdles so that ID could be fruitfully tested.
That's not his job, it's ours.
As much as I kid with him and like him personally, he is not here to help us but to divert us or confuse us and he seems to have done that with you, judged by your comment.
Actually, I've noticed from the start that the charming Professor O isn't trying to do either. He's here to mock us. That's okay, he does it amusingly and provides value in the process.
But we are not trained scientists and here we are proclaiming on science so we make mistakes in the details.
And we should thank those who point out those errors because that enables us to correct them.
But we can see the big picture and ask for rebuttals and the people you mention don’t do that. They nit pick at the periphery and never undermine the substance.
The devil is in the details and the questions asked by these gentlemen get to the heart of the matter. Far from nitpicking, many of the comments regarding the mathematics underlying ID theory are essential to address. If CSI, for example, really is uncomputably vague, our "big picture" is wrong.
So I stand by my comments. I do not see any good will in their so-called objections. If they had substantive objections we would hear them all the time. But they are silent, so this is much ado about nothing.
I don't care if their will is ill or good. If we can't directly and clearly address the issues raised by the people you are criticizing, then the ID movement will fail. Deservedly. JJ JayM
As much as I kid with him and like him personally, he is not here to help us but to divert us or confuse us...
jerry, I don't think you are being fair to yourself or to Professor Olofsson. I think you have a strong enough will and intellect to avoid being diverted or confused. And I doubt that the Professor is acting in any way inconsistent with his profession. He is a scholar and a teacher. Adel DiBagno
"I have seen nothing in Mark Frank's, R0b's, or certainly Prof_P.Olofsson's comments that warrant either of the criticisms you've leveled. Science is about the details and these individuals are politely yet firmly holding our feet to the fire to provide those details." I haven't seen anything in any of the people mentioned that has substance. The nice and very civil professor throws some procedural road blocks, but that is it. He has not offered anything that would undermine the substance of the argument. Nor has he offered how best to get around the procedural hurdles so that ID could be fruitfully tested. As much as I kid with him and like him personally, he is not here to help us but to divert us or confuse us, and he seems to have done that with you, judging by your comment. In a lot of ways we are a bunch of amateurs here. We have good minds and can see the obviousness of arguments and the shortcomings of those that are bogus. But we are not trained scientists, and here we are proclaiming on science, so we make mistakes in the details. But we can see the big picture and ask for rebuttals, and the people you mention don't do that. They nit pick at the periphery and never undermine the substance. So I stand by my comments. I do not see any good will in their so-called objections. If they had substantive objections we would hear them all the time. But they are silent, so this is much ado about nothing. jerry
Upright BiPed @302
300 posts…and still the same old tired objections. The very ones dealt with in even a modest review of ID materials.
jerry @305
You have to understand that those who are against ID have no material so they have to use tired irrelevant arguments over and over.
I think both of these comments are unfair to the ID opponents participating here, as well as the ID proponents who are trying to provide better support for ID theory, possibly in ways other than what you choose to do. I have seen nothing in Mark Frank's, R0b's, or certainly Prof_P.Olofsson's comments that warrant either of the criticisms you've leveled. Science is about the details, and these individuals are politely yet firmly holding our feet to the fire to provide those details. Frankly, "a modest review of ID materials" raises more questions than it answers. We need more specifics. We need examples of CSI calculations. We need to show papers unfairly rejected from mainstream peer-reviewed journals. We need to show that ID theory explains the observations of modern biology and makes better predictions than modern evolutionary theory. Is it unfair to ask for this level of detail from ID theorists at this stage in its lifecycle? Perhaps. Getting some of these answers takes resources that the ID movement doesn't yet have. If that's the case, though, we shouldn't be afraid to admit it. "We're working on that" (with references to the people doing the actual work, of course) is a far better answer than "Your arguments are tired and irrelevant." Sometimes the answer in science is "I don't know." And that's okay. JJ JayM
The idea was to try and find the conditions under which the claim of design of life would be falsified. Mark, I'm guessing you don't believe life to be designed. How have you falsified it? And since it can't be overemphasized, ID can be falsified and it is not the only claim for a design of life. tribune7
#306 and #307 Thanks for answering these questions. The idea was to try and find the conditions under which the claim of design of life would be falsified. So far we don't seem to have found any! Mark Frank
Mark -- Let us assume that our RM+NS explanation does account for the organisation of the amino acids. Mark, at this point you leave biology, enter chemistry and become silly. IC is irreducible complexity -- a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. Jerry, fair point. Only IC in the flagellum would be falsified. tribune7
"But if you do show that the flagellum could come about by RM+NS you will have falsified IC." Not true. There are thousands of IC systems. Finding a natural explanation for one or even one as famous as the flagellum is not a major blow. Finding a natural explanation for several of the more hard core ones would be a blow. But none have been touched yet except through fanciful speculation. So we are safe for the moment. jerry
Upright Biped, You have to understand that those who are against ID have no material so they have to use tired irrelevant arguments over and over. It is the only way they know how to undermine the basic logic. Distract, divert, deflect, delude etc. The other technique is to find small fault with anything said but which really has nothing to do with the issue. And on other sites they would resort to mockery or some similar tactic. And they are supposedly intelligent IQ wise which is the mystery. They wouldn't stand for the same approach in some other topic area except maybe politics. jerry
Tribune [297] OK, the answer is yes. Design would be still preferable over chance since you would still have to account for the amino acids organizing into proper sequence to form proteins that organize into proper sequence to form the flagellum. Thanks for answering the question. Let us assume that our RM+NS explanation does account for the organisation of the amino acids. Is design still possible? If you find that inconceivable, take a non-living example: is it possible that the coast of Norway was designed by a designer with undefined powers and motives (I am a Douglas Adams fan)? But if you do show that the flagellum could come about by RM+NS you will have falsified IC. I am sorry, I have forgotten what IC is. Mark Frank
gpuccio [301] What confuses me is that I ask a question for which the answer is "yes" or "no" and you answer with 260 words and I still don't know whether the answer is "yes" or "no". Let me try to make the question as clear as I can. "For outcomes where RM+NS is shown to be a plausible explanation, is it also possible that the outcome was designed by an intelligence of undefined power and motives?" I am not asking you whether God was involved. I am not making an argument. I am just asking a question. Mark Frank
300 posts...and still the same old tired objections. The very ones dealt with in even a modest review of ID materials. A search for reasonable understanding? ...sure, OK. Upright BiPed
Mark: "All I am saying is - how do you know the designer is not responsible for any given phenomenon on earth if you don’t specify the powers and motivation? As Dembski (correctly) explains - the explanatory filter can give false positives. So even if we find an account of the evolution of the flagellum through RM+NS it is still also possible that the flagellum was designed. Do you deny this? On what grounds would you prefer the RM+NS account?" I am confused by your arguments. Let's start again, briefly: 1) ID is about "design detection". There are designed things where design cannot be detected. That kind of things are not the object of the ID theory and methodology. So, even if biological information were really designed, if that design cannot be detected, ID would be falsified just the same. Again, believing in a designer is one thing (whether it be God or not). Detecting design by objective methodology is another. ID is about the second thing, not the first. 2) The EF is built in a way that it can have false negatives, not false positives. If biological information were a false negative, just the same ID would be falsified, as clarified in point 1. If instead it is demonstrated to be a positive, then we believe it is a true positive, because the EF is planned not to give (empirically) false positives (false positives remain always logically possible, as discussed many times). 3) If we find an account of the evolution of biological information (obviously, the flagellum alone could not be enough, but it would certainly be an important start) through RV + NS, by definition biological information becomes a negative to the EF. Therefore ID is falsified. There is no question or doubt about that. There is nothing to "prefer". If design is not objectively detected, there is no more any ID account. I think that should be clear. If you insist with your objection, could you please specify what is not clear in the above points? gpuccio
gpuccio:
Where are crystals and mandelbrots a “string of digital information which is functionally specified”? I try to disambiguate things, but it seems to me that you don’t cooperate…
You're right, the crystals and Mandelbrot set don't fit your personal definition of CSI, so I shouldn't have mentioned them. The point still remains that your claims about CSI haven't been tested. As far as the unequivocality of your definition, I'll have to disagree. Any physical phenomenon can be represented as a "string of digital information". It just so happens that the mapping for DNA is more obvious than the mapping for, say, a rendition of Beethoven's 5th, or a tree. So how do we determine whether a given physical phenomenon fits your "digital" criterion? Is the functionality criterion a yes/no question, or are there degrees of functionality? Most importantly, as far as the improbability criterion, under what hypothesis(es) should it be improbable? I'm not trying to be a troublemaker here. I think these kinds of issues would have to be fleshed out in order to scientifically test sweeping CSI claims. I've strayed way too far. My original point was in regards to the oft-stated ID argument based on the premises that humans create CSI and nature doesn't. I merely pointed out that those premises are not a scientific given, and indeed, CSI is not generally recognized as a scientific concept. With that, I'll stop derailing this thread further. I really appreciate gpuccio, StephenB, jerry, and others for putting up with me. R0b
jerry:
The whole scientific community accepts the concept of functional complex specified information. They just do not call it that. If you talk about how DNA is information and is complex, they will all nod their heads yes. If you talk about how DNA specifies a protein. They will nod their heads yes and know you are talking about the translation process and transcription process.
Yes, but when the ID community says "complex" in this context, they mean "improbable under all law+chance hypotheses", or maybe "improbable under all known law+chance hypotheses", or maybe "improbable under a hypothesis of uniform chance", depending on which ID proponent you talk to and when. Outside of the ID community, "complex" usually means "complicated". What do you mean when you say "complex"? And it's interesting that you connect Dembski's idea of "specification" with the fact that DNA specifies proteins. Does the fact that DNA specifies something imply that DNA is specified? Do you think that DNA is a specifying agent?
Now I believe that Kirk Durston may be doing just what you are asking for but it won’t have much effect on people’s way of thinking.
I was asking about Dembski's specified complexity, not Durston's functional information. They're not the same. R0b
ROb: But I was hoping for evidence that the unfair suppression of ID is widespread. Dembski details his experiences with Baylor University What happenings when you accept a pro-ID article for publication in a referred journal And have you watched Expelled? tribune7
Mark If we find an account of the evolution of the flagellum through RM+NS is it also possible that the flagellum was designed? . . . If the answer is NO - why not? (the explanatory filter explicitly mentions false positives). If the answer is YES - is there any reason for preferring chance or design? OK, the answer is yes. Design would be still preferable over chance since you would still have to account for the amino acids organizing into proper sequence to form proteins that organize into proper sequence to form the flagellum. But if you do show that the flagellum could come about by RM+NS you will have falsified IC. tribune7
gpuccio StephenB:
Rob, those comments come from me not from GPuccio. I appreciate being associated with him, but I don’t think we should hold him accountable for my words.
My apologies. Brain cramp.
We will settle for the freedom to define our own terms, fashion our own paradigms, and establish our own methods.
Done. At least in free countries.
At a bare minimum, we would ask that the academy, the press, and the United States court system to stop lying and to desist from characterizing ID as a faith-based initiative when its methods are clearly empirically based.
Sounds like everyone's out to get ID. Perhaps it isn't as clear to them as it is to you that ID's methods are empirically based. Publishing more data generated by empirical ID methods might help clear up this confusion.
Apparently you haven’t noticed, but the scientific community is not open to any other premises than those embedded in their own paradigm. They have made that clear thousands of times.
You're right, I haven't noticed that.
Are you aware of the fact that the Kansas City school system has established Darwinism as the only possible answer to the question of origins?
I wasn't aware of that. That's certainly strange for a school system to take a scientific stance that a priori rejects any current or future contravening data.
According to whom? Do you not realize that you are, once again, bootlegging the argument from authority into the discussion?
When did I argue from authority before? And how am I doing so now? I'm not saying that ID is wrong because the scientific community dismisses it (that would be an argument from authority). I'm saying that maybe ID is dismissed because its output doesn't meet the standards that the scientific community has set for itself. I'm not saying it's right, and of course I'm speaking in very general terms here -- obviously there is no universal scientific organization that comes up with official definitions or sets official standards for their stamp of approval.
How many ID papers and research proposals have been rejected? That’s not a rhetorical question. Nothing personal here, Rob, but I am beginning to think that you are joking.
Why? Are there so many that I must be clueless to not be aware of them? That also is not a rhetorical question. Behe has mentioned an ID paper that got rejected. I'm sure there are others, but are there more than a handful? To make a case for unfair "expulsion", why not publish these papers on the web, with the stated reasons for rejection, so the world can see how ID is being suppressed?
Where were you when Baylor closed the door on Dr. Dembski and Dr. Marks?
I agree that it was bad form for Baylor to return funds that it had already approved. Let's put that on our list of rejected ID proposals.
Where were you when Dr. Behe’s associates at Lehigh decided that they would never even speak to him again if they saw him in the hallway?
I'm sure that you've heard this from a good source and that you checked out both sides of the story, so I'll join with you in decrying this behavior as inexcusably rude.
Where were you when Dr. Gonzalez was refused tenure at Iowa State simply because he accepts and researches the "anthropic principle"? The problem is systemic in every way imaginable.
Whether denying Gonzalez tenure was fair or not is a matter of opinion, but it's certainly a data point. I've heard the handful of oft-repeated anecdotes from the ID side, and of course the other side has their anecdotes also. As I said, it would be very surprising if no ID proponent had ever been treated unfairly. But I was hoping for evidence that the unfair suppression of ID is widespread. Again, making rejected papers public, along with the reasons for rejection, would provide such evidence.
Just so that you will know, 95.8% of evolutionary biologists are either agnostics or atheists, and, yes, we do have the data to support that. Is that disproportionate enough for you?
That's certainly disproportionate. R0b
Tribune [290] Me: So even if we find an account of the evolution of the flagellum through RM+NS it is still also possible that the flagellum was designed. Do you deny this? tribune7: Mark, if we should ever find this to be the case we would falsify the aspect of ID known as IC. We would not falsify God. You are conflating ID with God. They are not the same. ID is not faith. This is nothing to do with God. It is a simple question. I will rephrase it very slightly. If we find an account of the evolution of the flagellum through RM+NS is it also possible that the flagellum was designed? To save some time the follow-up questions are: If the answer is NO - why not? (the explanatory filter explicitly mentions false positives). If the answer is YES - is there any reason for preferring chance or design? Mark Frank
DaveScot[293], What model? I'm not quite sure what we are discussing here. My original point was about Dembski's shopping cart from his book No Free Lunch. Prof_P.Olofsson
Prof O You are probably not applying the model correctly. Behe did it for 10^20 reproductive events focusing on genes evolving to defeat two different anti-malarial drugs. Problems involving a multitude of genes become intractable. That doesn't mean the model is wrong. Quantum mechanics is well enough established, but if you try to model the precise behavior of more than a couple of particles at once the calculations become intractable. It's the same situation in molecular biology. DaveScot
And that should be "doesn't take them into account." Been in the South for too long I have. Prof_P.Olofsson
DaveScot[289],
I think you’re raising natural selection to some exalted, mysterious status it doesn’t deserve. You called it reproduction and selection and said Dembski doesn’t treat it properly.
No I'm not. They're not mysterious. I only pointed out that Dembski's shopping cart model don't take them into account. I find this simple point hard to argue against. In fact, an ID supporter on this blog agrees with me (and I have also honestly admitted that I don't have a better model to suggest). Dembski does not suggest any evolutionary scenario that warrants his model so he can hardly claim to have rejected an evolutionary hypothesis. Prof_P.Olofsson
So even if we find an account of the evolution of the flagellum through RM+NS it is still also possible that the flagellum was designed. Do you deny this? Mark, if we should ever find this to be the case we would falsify the aspect of ID known as IC. We would not falsify God. You are conflating ID with God. They are not the same. ID is not faith. tribune7
Prof O I think you're raising natural selection to some exalted, mysterious status it doesn't deserve. You called it reproduction and selection and said Dembski doesn't treat it properly. What you describe is nothing more than a simple process used purposely in all kinds of situations. It's called "trial and error". The trials are generated by reproductive events and the errors are evaluated by natural selection. It's a simple search algorithm. Its power lies in big numbers - many trials conducted in parallel (populations) with many serial iterations (generations). For an organism like a bacterium the simultaneous trials number in the trillions and the number of iterations over deep time in the billions. Recombination, horizontal gene transfer, and a changing environment complicate matters, which is why many of us prefer to focus on asexual microbes and basic mechanisms found in all organisms like the ribosome. In that case recombination is not a complicating factor and neither is the external environment. By far the best evaluation of the capabilities of trial and error with big numbers is discussed in Behe's book "The Edge of Evolution" with the malarial parasite. Its successful and unsuccessful responses to various intense manmade and natural selection pressures are quite illuminating. Those responses also fall in line quite well with the limits predicted by the probabilistic resources (mutation rate, population size, and generations). The math and the biology are simple, tractable, and most importantly the predicted result was essentially the same as the actual result. DaveScot
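The "probabilistic resources" reasoning sketched above can be illustrated with a back-of-the-envelope calculation. The mutation rate below is an assumed round number chosen for illustration, not a figure from Behe's book: if two specific point mutations must appear together in a single replication event, the per-trial probability is roughly the product of the per-site rates, so the expected number of replications before the double mutant appears is its reciprocal.

```python
# Back-of-the-envelope: two specific point mutations arising in
# the same replication event. With an assumed per-site mutation
# rate u per replication, the chance per trial is roughly u**2,
# so on the order of 1/u**2 replications are expected before the
# double mutant appears.
u = 1e-10          # assumed per-site mutation rate (illustrative)
p_double = u ** 2  # both specific mutations in one replication
expected_trials = 1 / p_double
print(f"{expected_trials:.0e}")  # ~1e+20 replications
```

The force of this argument, and of the objections to it, turns on whether the two mutations really must arise simultaneously rather than sequentially: if the intermediate single mutants are viable (or beneficial), each mutation can be fixed independently and the expected waiting time drops dramatically.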
Barry stated in response to the request for a new thread under tighter constraints; A new thread will be started. Kirk is working on the opening. bornagain77
Gpuccio [#281] I only used God as an example (actually you introduced him/it in #261). All I am saying is - how do you know the designer is not responsible for any given phenomenon on earth if you don't specify the powers and motivation? As Dembski (correctly) explains - the explanatory filter can give false positives. So even if we find an account of the evolution of the flagellum through RM+NS it is still also possible that the flagellum was designed. Do you deny this? On what grounds would you prefer the RM+NS account? Bfast may describe this as "yapping in the philosophical ether" but it seems to me to go to the essence of what is wrong with ID. By avoiding "how" and "why" it gives no scope for assessing it. All it does is rely on other theories failing to account for things. Mark Frank
http://pandasthumb.org/archives/2009/01/durstons-deviou.html As you guys probably know, PZ posted a response to this talk to the Panda's Thumb blog. Havermayer
gpuccio @283
We recently had a more private, long, focused and rather civil and satisfying discussion about CSI and its calculation on Mark’s personal blog, with the contribution of many good willed people from both sides (including Mark and R0b). You could probably find some interesting input there, whatever your position may be.
I found the discussion using the search terms you specified, thank you. Unfortunately, the outcome of the discussion appears to be that no one knows how to objectively compute CSI in even short bit strings, let alone real world biological constructs. At the risk of sounding like a broken record, this reinforces my view that the most fertile ground for ID research is in the limits of the mechanisms proposed by modern evolutionary theory (MET), and the disconnectedness of viable regions in genome space. JJ JayM
IN CASE ANY ADMINISTRATORS ARE LISTENING, EVERYONE WANTS A NEW THREAD SO THAT KIRK DURSTON CAN EXTEND HIS CASE. It's a real live grass roots movement---honest! StephenB
B L Harville (# 278): We recently had a more private, long, focused and rather civil and satisfying discussion about CSI and its calculation on Mark's personal blog, with the contribution of many good willed people from both sides (including Mark and R0b). You could probably find some interesting input there, whatever your position may be. Just google "Mark Frank" and "Clapham Omnibus" and you will easily find it. gpuccio
bornagain: "DaveScot, Barry or any admin listening, Kirk Durston Has requested to start a new thread under his video lecture, that could somehow be more tightly constrained so as to be more productive and less messy." I would appreciate that too. And with the premise (or assumption, as we like) that the post stays, as far as possible, technical and focused. gpuccio
Mark: "Do you think that God is not capable of simulating an apparently perfect spontaneous and unguided process? Or are you saying that God would not want to do it? If you don’t know then of course it is possible." But ID theory in itself has nothing to do with the concept of God. We require a designer whose working can be "objectively" inferred. If that's not the case, we may still have a God, but we no more have an ID designer, and we no more have ID. Kenneth Miller will be happy, and I will be very, very unhappy. So, as you see, ID is taking all the responsibilities for its theory. If we lose, we lose, and if we win, we win. The game is open, and absolutely fair. Some of your objections could perhaps be valid for some form of creationism. But ID is not creationism. If you, who are intelligent and sincere, still do not acknowledge that, after all our discussions, I must really be a very bad interlocutor. gpuccio
R0b: Indeed, you are not treating me very fairly :-) In # 240 you reduce all to how we disambiguate things, after I have tried my best not to be ambiguous, and then you propose to calculate CSI on crystals and mandelbrots, when a very unambiguous point in my post was: "Many times, even with you, I have suggested that, for operational reasons, we stick in our discussions of a very generic nature about ID to some very simple and unequivocal definition of CSI. As you know, my favourite one is “any string of digital information which is functionally specified (that is, for which a function can be explicitly described in a specific context) and which has a complexity (improbability of the target space) which certainly is lower than a conventional threshold (which for a generic discussion we can well assume at 1:10^150)." Where are crystals and mandelbrots a "string of digital information which is functionally specified"? I try to disambiguate things, but it seems to me that you don't cooperate... and then in # 264, you attribute to me Stephen's post (Steve, I appreciate being associated with you too, but maybe we have worked too much together recently, and have lost our separate identities?) :-) Never mind, we can always disambiguate that! :-) gpuccio
Mark Frank:
How would you know that intelligence had not been involved? Whatever series of events you observed they may have been designed."
It's time for you to think simply and factually, rather than yapping in the philosophical ether. If all evidence were consistent with a non-designed universe and biology, then there would be no good reason to hypothesize a designer. Further, any evidence that is nicely explained by non-guided means should be, and is by the majority of IDers, considered to be the product of unguided forces. So only one question remains: is there any data that is a poor to terrible fit with the available understanding of what is plausible via unguided means? Let me suggest two obvious places to look: 1) the finely tuned big bang, 2) the origin of life. Do I need to list the multitude of other places to look? Now, if you can honestly suggest that the current unguided explanations for these phenomena are in any way plausible, reasonable and adequate, then you have information that should rightly shut down the entire ID movement. If not, however, I ask this: how do you know that intelligence was not involved in these phenomena -- in light of a dearth of plausible alternative explanations? bFast
I see numerous references to "CSI" on this website but I can't find any demonstration of a calculation of "CSI." Could someone please provide one? Thanks in advance. B L Harville
You might accept that ID was wrong – Mark, ID is a methodology. It is not my faith. ID could turn out to be completely unsustainable. I have no problem with that. It's not going to affect my faith. I think that's true for most of us here. tribune7
kairosfocus: Thanks for your response. You state in post 228, "I suspect the issue over length vs fits has to do with degree of isolation: a more hard to find island of function has more info in it". The correlation being that the more functionally complex a protein is, the more that complexity will contribute to its rarity and thus to its required information content. Thus, I believe this answers my question, in that this equation is not sensitive enough to provide the measure needed for Genetic Entropy on the single protein-mutation scale. bornagain77
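[Editor's note] For readers wondering what "fits" (functional bits) refers to in the exchange above, here is a hedged sketch of the kind of per-site calculation behind Durston-style functional information: at each aligned site, the ground-state entropy log2(20) (twenty amino acids assumed equiprobable) minus the observed Shannon entropy across known functional sequences, summed over sites. The four-sequence "alignment" below is invented purely for illustration; real estimates use large alignments of a protein family.

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon entropy (bits) of the residue distribution at one site."""
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def fits(alignment, alphabet_size=20):
    """Sum over sites of (ground-state entropy - observed entropy).

    A fully conserved site contributes log2(20) ~ 4.32 bits; a site
    tolerating many residues contributes little, which is why a short
    but highly constrained protein can carry more fits than a longer,
    loosely constrained one.
    """
    ground = math.log2(alphabet_size)
    length = len(alignment[0])
    return sum(ground - site_entropy([seq[i] for seq in alignment])
               for i in range(length))

toy = ["MKT", "MKS", "MRT", "MKT"]   # four "functional" sequences, 3 sites
print(round(fits(toy), 2))           # about 11.34 fits for this toy case
```

This also illustrates bornagain77's point about sensitivity: with only a handful of sequences per site, the per-site entropy estimate is far too coarse to register the effect of a single point mutation.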
Great idea, JayM. I put it on the FAQ thread. tribune7
OK, Professor, thanks. Dittos to BA77's suggestion for a new thread for Kirk. tribune7
tribune7 @268
How many ID papers and research proposals have been rejected? That’s not a rhetorical question.
I’m pulling this one out so it doesn’t get lost. It’s a subject that’s often discussed here ROb.
This came up in a different thread, but got lost in the noise. Is there a website somewhere with ID friendly papers that have been rejected by mainstream, peer-reviewed journals (preferably with some indication as to why they were rejected)? That would be a great resource. JJ JayM
gpuccio [261] Me: "How would you know that intelligence had not been involved? Whatever series of events you observed they may have been designed." gpuccio: Well, again I cannot agree with you on that kind of argument. You have no faith in the objectivity and serious scientific approach of us IDists! This is nothing to do with your integrity and seriousness. I am convinced that you (singular) have both. It is a logical consequence of the ID position that the designer has undefined powers and motives. You might accept that ID was wrong, but why? It would be illogical to do so unless you have some knowledge of the powers and motives of the designer. Do you think that God is not capable of simulating an apparently perfect spontaneous and unguided process? Or are you saying that God would not want to do it? If you don't know, then of course it is possible. Mark Frank
DaveScot, Barry or any admin listening, Kirk Durston has requested to start a new thread under his video lecture, one that could somehow be more tightly constrained so as to be more productive and less messy. bornagain77
tribune[267], By Bayes' formula, the probability of guilt given the evidence, P(D|E), equals P(E|D)*P(D) / [P(E|D)*P(D) + P(E|C)*P(C)]. In the court example we have P(E|D)=1, so P(D|E) equals P(D) / [P(D) + P(E|C)*P(C)]. So if P(E|C), the probability of the evidence under the innocence assumption, is extremely small, 1/n for some huge n, and P(D) is just as small, the resulting probability P(D|E) is about 1/2. Prof_P.Olofsson
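[Editor's note] Prof. Olofsson's arithmetic can be checked numerically. In the sketch below, D = guilt (double infanticide), C = innocence (double SIDS), E = the evidence; P(E|D) is taken as 1, and the one-in-a-million figures are just the illustrative values used in the comment above, not estimates from the actual Sally Clark case.

```python
def posterior_guilt(p_d, p_e_given_c, p_e_given_d=1.0):
    """Bayes' formula: P(D|E) = P(E|D)P(D) / [P(E|D)P(D) + P(E|C)P(C)]."""
    p_c = 1.0 - p_d
    return (p_e_given_d * p_d) / (p_e_given_d * p_d + p_e_given_c * p_c)

n = 1_000_000
p = posterior_guilt(p_d=1 / n, p_e_given_c=1 / n)
print(round(p, 4))   # approximately 0.5, as stated in the comment
```

The point of the exercise: a tiny P(E|C) alone proves nothing, because the prior P(D) is tiny too; only their comparison yields the posterior.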
-----Rob: “I’m not suggesting that you accept arguments from authority. My point is relevant only if you care whether the so-called authorities eventually accept ID. I’m suggesting that if you do care, then you might want to consider whether the larger scientific community accepts your premises before you base your arguments on them.” Rob, those comments come from me not from GPuccio. I appreciate being associated with him, but I don't think we should hold him accountable for my words. In any case, here we go: We don’t want affection, approval, or appreciation from the "scientific community" at the moment. We will settle for the freedom to define our own terms, fashion our own paradigms, and establish our own methods. At a bare minimum, we would ask that the academy, the press, and the United States court system stop lying and desist from characterizing ID as a faith-based initiative when its methods are clearly empirically based. Apparently you haven’t noticed, but the scientific community is not open to any other premises than those embedded in their own paradigm. They have made that clear thousands of times. Are you aware of the fact that the Kansas City school system has established Darwinism as the only possible answer to the question of origins? On the other hand, neo-Darwinist proponents will not subject themselves to scrutiny from the other side. They can’t defend their position and they know it. They visit here only to criticize ID; they never provide a rational justification for their own position. They only show up on a thread like this. When the subject matter requires a defense of chance-based evolution, they head for the tall grass. That goes for all of them, astronomers, STATISTICIANS, biologists, you name it. ----“In your eyes, the rejection of ID is the fault of the mainstream scientific community, the so-called authorities who have long since lost their credibility. 
You say: ‘Evolutionary biology is a monolithic monster that survives solely by misinforming the public about the current status of evolutionary theory and by attempting to discredit ID scientists even to the point of slander.’ ----“You might want to consider an alternate point of view: Maybe ID has simply failed to do the requisite science.” According to whom? Do you not realize that you are, once again, bootlegging the argument from authority into the discussion? Are you not aware that all progress comes from the minority and that the majority must always be dragged in kicking and screaming? In any case, which sector of the scientific community do you grant the right to define science once and for all? Is it that sector that is currently arguing that obesity is contagious and that if you don’t watch out, you will “catch it?” Or, is it the group of life scientists that seek to clone humans as sex slaves? Perhaps you will give the nod to NASA researchers who promote global warming and continually go back to touch up their past reports to make the numbers look good. Perhaps you had better identify which scientific community you refer to and explain to me where they get their authority to define science and its methods. For my part, the “scientific community” needs watching a lot more than we do. -----‘I know that some scientists are jerks, although I don’t know that the number is disproportionate. I’d be surprised if ID proponents had never been treated unfairly, but how systemic is that treatment? How many ID papers and research proposals have been rejected? That’s not a rhetorical question.’ Nothing personal here, Rob, but I am beginning to think that you are joking. Where were you when Baylor closed the door on Dr. Dembski and Dr. Marks? Where were you when Dr. Behe’s associates at Lehigh decided that they would never even speak to him again if they saw him in the hallway? Where were you when Dr. 
Gonzalez was refused tenure at Iowa State simply because he accepts and researches the “anthropic principle”? The problem is systemic in every way imaginable. Just so that you will know, 95.8% of evolutionary biologists are either agnostics or atheists, and, yes, we do have the data to support that. Is that disproportionate enough for you? StephenB
How many ID papers and research proposals have been rejected? That’s not a rhetorical question. I'm pulling this one out so it doesn't get lost. It's a subject that's often discussed here, R0b. tribune7
Not necessarily. You can have an extremely low P(D|C) but if P(D) is also extremely low, these numbers work against each other. OK, I thought I was keeping up but you lost me on this one. How so? tribune7
The intuitive argument here is that we can't just compute the probability of double SIDS and base our conclusion on that. I agree. We need to compare the chance of a double SIDS to the chance of double infanticide. One of the problems with SIDS is that it is used as a cover for infanticide and child abuse. tribune7
trib[260],
Professor — Distilled argument: If P(E|D) is n times as large as P(E|C), you cannot conclude that P(D) is n times as large as P(C ). I’ll agree but if n is large enough it should still make you hmmmm.
Not necessarily. You can have an extremely low P(D|C) but if P(D) is also extremely low, these numbers work against each other. Prof_P.Olofsson
gpuccio:
You must be very new here, so I will try not to be unduly harsh.
Thanks. I'm a very sensitive soul.
Suffice it to say that we don’t accept arguments from authority on this site, because the so-called “authorities” have long since lost their credibility.
I'm not suggesting that you accept arguments from authority. My point is relevant only if you care whether the so-called authorities eventually accept ID. I'm suggesting that if you do care, then you might want to consider whether the larger scientific community accepts your premises before you base your arguments on them. In your eyes, the rejection of ID is the fault of the mainstream scientific community, the so-called authorities who have long since lost their credibility. You say:
Evolutionary biology is a monolithic monster that survives solely by misinforming the public about the current status of evolutionary theory and by attempting to discredit ID scientists even to the point of slander.
You might want to consider an alternate point of view: Maybe ID has simply failed to do the requisite science. I know that some scientists are jerks, although I don't know that the number is disproportionate. I'd be surprised if ID proponents had never been treated unfairly, but how systemic is that treatment? How many ID papers and research proposals have been rejected? That's not a rhetorical question. R0b
gpuccio[261],
Personally, I find that even Dembski’s model of the shopping cart is not a very satisfactory approach (on that, I have to agree with Prof. Olofsson). But the fact that something is difficult to model does not in any way mean that it does not exist.
In return, I agree with you. And as I mentioned in some post above, I don't know what type of calculations we would get from a model based on evolutionary theory. I can spot the flaws in disregarding selection and reproduction in Dembski's model, but I cannot offer a specific better alternative. Prof_P.Olofsson
tribune[259], You can read about the Sally Clark case on your own; there's a lot of stuff online. It's a sad story. The intuitive argument here is that we can't just compute the probability of double SIDS and base our conclusion on that. We need to compare the chance of a double SIDS to the chance of double infanticide. Both happen, both are rare, so we need to compare how rare they are. Bayes' formula allows us to do the formal computations. Prof_P.Olofsson
Mark (#241): Just two points. 1) "How would you know that intelligence had not been involved? Whatever series of events you observed they may have been designed." Well, again I cannot agree with you on that kind of argument. You have no faith in the objectivity and serious scientific approach of us IDists! In other words, if some structure (like the flagellum) which, according to ID theory, has CSI and IC, could be formed in the lab under controlled conditions, and if we could understand the way it happens, that would falsify the point of ID. The idea that God could simulate an apparently perfect spontaneous and unguided process has nothing to do with ID. That could be an extreme (and rather pitiful) line of defense for a religious position, but ID is not based on any religious position. So, we would very simply accept the evidence. The point of ID is that we really are convinced that it cannot happen that way. If we see it happen that way, and understand how, then we were wrong, period. 2) Just a clarification about the CSI (and IC) in the flagellum. I think there is some confusion about that. First of all, we have a lot of CSI in all the individual proteins which make up the flagellum (I think there are almost 50). The TTSS can be used to make a very partial (and extremely unsuccessful) generic model for some of them, but not for all the others. And anyway, even if someone still believes that the TTSS is in some way a precursor of the flagellum (which I don't), the CSI in the TTSS still remains to be explained. But that's only the CSI at the level of single proteins. Obviously, there is much more in a complex, and IC, machine like the flagellum. There is the huge engineering necessary to design and put together all those individual components so that the global machine can arise and work. That is the true IC, and certainly a lot of CSI. 
But you know my point: that kind of "higher level" CSI is very difficult to model, and that's why I never try to do that, and stick to the CSI of single proteins. Personally, I find that even Dembski's model of the shopping cart is not a very satisfactory approach (on that, I have to agree with Prof. Olofsson). But the fact that something is difficult to model does not in any way mean that it does not exist. Higher level CSI is everywhere in the biological world, in organization, regulation, error checking, body plan control, and so on. And sometime in the future, we will have to find a quantitative way of approaching that. In the meantime, for the flagellum, the "qualitative" approach to IC made by Behe in DBB remains absolutely valid, while for a real quantitative approach to higher level CSI in the whole machine we will probably have to wait. But we can always analyze quantitatively the cumulative CSI present in the single proteins. gpuccio
Professor -- Distilled argument: If P(E|D) is n times as large as P(E|C), you cannot conclude that P(D) is n times as large as P(C ). I'll agree but if n is large enough it should still make you hmmmm. To revisit the Sally Clark case: 2 infants die in her care within a relatively short time. SIDS is diagnosed as the cause. Do you: ignore it and say "no big deal, that sort of thing happens?" or investigate for the sake of future children? tribune7
Peter @252
According to MET, the flagellum arose from similar precursors via one or more of a set of natural mechanisms.
You left out the two most important words! “According to MET, the flagellum may have arisen from similar precursors via one or more of a set of natural mechanisms.”
I see your point, but that's why I used the phrase "According to MET." In the context of MET, there is no "may have."
Evolution is based on supposition and has not been proven.
That overstates the case. Many examples of microevolution have been observed and documented. Any viable ID theory has to take that into account. All of this is beside the point, however. DaveScot's use of "ex nihilo" suggests that MET does say that the flagellum arose from nothing, which is not at all the claim. JJ JayM
Hi Prof. Olofsson, your participation and expertise on the issue is much appreciated. Your honesty is quite the turnaround. Because of this I believe you have generated much respect from many ID proponents. I regard such criticism as being healthy for ID in general. I only have a few questions regarding your back-and-forth with DaveScot. You say: "Nevertheless, it is what the explanatory filter requires." Approximately how many chance hypotheses could there be? Is the other side proposing an infinite set of chance hypotheses to explain away biological systems, as they have already explained away the anthropic principle with multiverse theory? If so, how would you attempt to plug that into an equation? Isn't this equivalent to saying that 'God-did-it', since if you cannot even narrow down a chance hypothesis to specifics then you infer multiple or infinite sets of non-specifics to explain away the details? When does a chance hypothesis begin and when does one end? (Please note that I am not directly targeting these questions at you) ab
"No, my dear friend, Dembski does not and I don't even have to look at your link to know!" Professor, I didn't read the book but I'll go out on a limb and say I think his reasoning is that 20 specific amino acids have to organize into significantly long chains in a specific manner to form specific proteins, which then have to organize in a specific manner to form the flagellum. It's something that can't happen by chance. tribune7
"You can't use significance levels of 10^-150." Sure you can: to illustrate the impossible. "Besides, it's not about the numbers per se, it's about the logic." Professor, did you understand what I wrote? I was agreeing with you. tribune7
trib, "I thought we might not want to go more off-track with a debate about CSI" Agreed. Prof_P.Olofsson
tribune[250], No, my dear friend, Dembski does not and I don't even have to look at your link to know! He never considers "design hypotheses" as his approach is strictly eliminative. Nice try but no cigarillo! Prof_P.Olofsson
JayM [245] "According to MET, the flagellum arose from similar precursors via one or more of a set of natural mechanisms." You left out the two most important words! "According to MET, the flagellum may have arisen from similar precursors via one or more of a set of natural mechanisms." Evolution is based on supposition and has not been proven. Peter
tribune[240], Distilled argument: If P(E|D) is n times as large as P(E|C), you cannot conclude that P(D) is n times as large as P(C ). Prof_P.Olofsson
I still see no probability calculation in your reply. I'll appeal to authority. Dembski refers to it here and apparently goes into more detail in No Free Lunch. only designed objects have CSI and the flagellum structure exhibits CSI . . . .you are, with all due respect and in congeniality, somewhat vague. Well, I did say I was leaving that aside :-) I thought we might not want to go more off-track with a debate about CSI. The point, of course, is the design hypothesis of the flagellum. tribune7
tribune[240], First, the UPB is not relevant in everyday statistical analysis. You can't use significance levels of 10^-150, surely you understand that? Besides, it's not about the numbers per se, it's about the logic. If you don't even want to try to understand what I write, I'll stop. I understand the kneejerk reactions of many here to debate every single word uttered by an ID critic such as I, but actually, none of my posts in this thread are anti-ID. Prof_P.Olofsson
StephenB[243], You don't accept arguments from authority unless, of course, you make those arguments yourself, such as in [220], by referring to Durston. Prof_P.Olofsson
tribune[244], I'm not trying to disprove ID or prove evolution, I'm just asking what you'd do. I still see no probability calculation in your reply. When you say
only designed objects have CSI and the flagellum structure exhibits CSI
you are, with all due respect and in congeniality, somewhat vague. Prof_P.Olofsson
DaveScot[231],
You say we can never rule out all chance hypotheses. That is true but, happily, we don’t need to.
Nevertheless, it is what the explanatory filter requires. Prof_P.Olofsson
DaveScot @231
Thus we can state the ID hypothesis for the flagellum as: The ex nihilo creation of a flagellum cannot occur absent intelligent agency.
That's a curious formulation given that modern evolutionary theory (MET) does not suggest that a flagellum can be created ex nihilo either. According to MET, the flagellum arose from similar precursors via one or more of a set of natural mechanisms. JJ JayM
But what do we do in practice? Consider the flagellum. Now give me a design hypothesis and tell me what probability it gives the formation of the flagellum. Leaving aside the positive argument that only designed objects have CSI and the flagellum structure exhibits CSI, you have the serendipitous ordering of the proteins, for which the anti-IDist invokes "the-unknown-law-of-the-gaps," and IC, which says that Darwinian evolution is incapable of forming it, which the anti-IDists rebut with "can so". Now, I'm not saying that last rebuttal is completely without merit, but it has not been definitively made despite the screaming. tribune7
-----Rob: “I’m saying that “specified complexity” has not been accepted as a legitimate scientific concept by the scientific community.” You must be very new here, so I will try not to be unduly harsh. Suffice it to say that we don’t accept arguments from authority on this site, because the so-called “authorities” have long since lost their credibility. Evolutionary biology is a monolithic monster that survives solely by misinforming the public about the current status of evolutionary theory and by attempting to discredit ID scientists even to the point of slander. Do you know anything at all about ID's current history? StephenB
Kirk, I understand perfectly well if you choose to stay away altogether; there's a lot going on here and responding to everything would take a lot of your time. I asked above if it's possible to create a new thread that could stay focused, at least for a while. I think my question for you, whether you wish to do a Bayesian analysis or a comparison of likelihoods, is a good starting point. Regardless of what the biology is or how you define information etc., in the end you need to decide how to use your probabilities and what type of inference to draw. My main point is that your statement in the video is inherently Bayesian, yet your explanation above is not. Prof_P.Olofsson
"...the ID hypothesis for the flagellum as: The ex nihilo creation of a flagellum cannot occur absent intelligent agency. The hypothesis can be falsified by a single observation of a flagellum forming ex nihilo by law and chance alone." How would you know that intelligence had not been involved? Whatever series of events you observed, they may have been designed. Mark Frank
Professor, A couple of points with regard to guilt, statistics and the case you cite: One in a million does not even come close to approaching the UPB. Many of us might know someone who has had a one-in-a-million experience. If you had a one-in-a-million experience you would feel yourself quite fortunate (or unfortunate) but you would recognize that such things do happen. Now, if you experienced something with a probability of 1 in 10^150 . . . There are, however, more significant reasons why we can't be certain of Ms. Clark's guilt -- which was apparently based solely on the probability of SIDS -- to the point of conviction. Both children not dying of SIDS -- and they probably didn't -- does not mean Ms. Clark murdered them. One or both could have died of something other than SIDS or intent by Ms. Clark, and the probability calculations would still hold true. Misdiagnosis by law enforcement or medical personnel is certainly within the realm of possibility. And of course -- to be hard-nosed ugly for a second -- just because a direct connection to murder can't be made doesn't mean it didn't happen. tribune7
gpuccio:
I think you are confounding facts with assumptions. In science, one has to start the discussion with “facts” that are already established, and then anyone can make any reasonable assumption about those facts.
I should say "premises" rather than "assumptions". And yes, scientific arguments should be premised on established facts.
As for me, I do prefer not to “be a part of mainstream science”
Ah, then my comments don't apply to you. I had assumed that ID proponents in general were bothered that ID is "expelled" from mainstream science. Maybe I'm completely wrong about that.
Your suggestion, that we should “start” with the “established assumption” that “human design activity reduces to law+chance” (it is, indeed, the preferred assumption of most scientists today), just because it is the prevailing assumption in a social circle, is called scientific tyranny.
Where did I suggest that?
That specified complexity is a coherent concept: that is not an assumption:
A premise, then.
Now, that is a very simple and unequivocal definition of CSI. We can discuss what we mean by function, or how the complexity can be calculated in specific cases, but that does not make the definition incoherent.
But it does make it equivocal, and we can't establish its coherence until we disambiguate it.
just take this (rather long) post. It is CSI in that sense, without any doubt. Are you doubting that?
It depends on how we disambiguate your definition of CSI. If the improbability is conditioned on all law+chance hypotheses, then we both have reason to question whether these posts contain CSI. You said yourself that we don't know for certain if human design activity reduces to law+chance.
I am not making a logical statement here. I am just making an objective statement about what we have so far observed (an empirical statement). If my statement is wrong, then you can easily show why: just show us a known example of spontaneous CSI in nature.
Again, it depends on how we disambiguate CSI. It may be that nature can't produce CSI by definition, in which case the statement isn't empirical but tautological. And I'm not saying that your statement is wrong, only that the ID community has provided no data to support it. You could, for instance, run a double-blind test where a large number of different subjects independently calculate the CSI of objects whose origins they don't know. Maybe throw in some large natural crystals (see the Cueva de los Cristales), or some accidentally discovered complex abstraction, like the Mandelbrot set. The point is that if you want your premises to be accepted as science, then you have to do the science. R0b
It seems that UD would be wise to create a parallel forum to allow all of the various threads and sideshows to flourish. William Wallace
I am very tempted to respond to some of the points that have been raised here, but I can see that there is a major problem with this discussion in that there are far too many ideas being discussed in too loose a fashion. I also see that some of the problems in this discussion are resulting from the convergence (or joint application) of logical argumentation, empirical probabilities and Bayesian probabilities. I would like to start a new thread that works through my thinking on this subject, but proceeds in a meticulous (pedantic) fashion. I must say in advance, however, that I can only devote a small amount of time per week to this discussion, so it would proceed at a slow, but hopefully quality, pace. Still, I feel that a meticulous, slow-paced discussion can be much more productive than a fast-paced, multi-issue, loose one. I don't know how to start a new thread on this forum, but if this is agreeable to the powers that be, then those powers should let me know how to start a new thread and I'll proceed. I would make the first post, lay out a couple of ground rules to keep the discussion tightly focused, present a couple of definitions, and then pause to make sure we are in sufficient agreement to proceed. Once there is sufficient agreement on the basics, we can then proceed to one or two subsequent points, discuss them, then move to the next point or two, and so on. In that way, I think we can accomplish something. We may not arrive at checkmate, but I am certain that we can at the very least make much better progress toward that end. KD
tribune[229], Good. Of course we can't. What we need is the probability of guilt conditioned on the evidence, hence we need a prior probability of guilt to apply Bayes rule. We can estimate that one from data by looking at the incidence of double infanticides. Whatever it is, it is small. If it is also 1 in a million, the probability of guilt given the evidence becomes 1/2 which makes perfect sense. Prof_P.Olofsson
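The arithmetic behind this comment can be sketched in a few lines. This is a minimal illustration using the stipulated figures from the discussion (P(E|guilt)=1, P(E|innocence)=10^-6, and a prior of 10^-6) rather than real SIDS or infanticide statistics:

```python
# Bayes' rule for the Sally Clark example, using the comment's
# illustrative numbers (not actual SIDS or infanticide statistics).
def posterior_guilt(prior_guilt, p_e_given_guilt, p_e_given_innocence):
    """P(guilt | E) via Bayes' rule with two exhaustive hypotheses."""
    prior_innocence = 1 - prior_guilt
    numerator = p_e_given_guilt * prior_guilt
    denominator = numerator + p_e_given_innocence * prior_innocence
    return numerator / denominator

# P(E|guilt)=1, P(E|innocence)=10^-6, prior P(guilt)=10^-6
p = posterior_guilt(1e-6, 1.0, 1e-6)
print(round(p, 3))  # about 0.5: guilt and innocence end up roughly even
```

The point the sketch makes concrete is that a tiny likelihood under "chance" does not by itself yield a large probability of "design"; the prior matters.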
kairosfocus[228], It's more illustrious to be number one than number two, even if counted from the bottom. I bet the Bhutanis regret scoring all those goals! Prof_P.Olofsson
This thread is getting very long and diverse. As I would like to hear what Kirk has to say and focus the discussion on his use of probabilistic analysis, is there any chance we can create a new thread? Prof_P.Olofsson
tribune[216],
What would convince you that it’s about design and not the designer?
If you say so, that's enough for me. But what do we do in practice? Consider the flagellum. Now give me a design hypothesis and tell me what probability it gives the formation of the flagellum. Prof_P.Olofsson
DaveScot[231], Your reasoning is all in the context of elimination and falsification, which is fine, but what we have mainly discussed in this thread is likelihood comparison and Bayesian inference. In that context, your flagellum hypothesis has no meaning because it does not confer a likelihood on the data under a separate design hypothesis about the existence of design. Your hypothesis immediately confers probability 0 under the chance hypothesis and you cannot assign a prior to it. As for fine-tuning, my question was what probability distribution you propose. In #170 you said it can be made. Nevertheless, thanks for your posts. They are very informative and interesting. Prof_P.Olofsson
To DaveScot: "The ex nihilo creation of a flagellum cannot occur absent intelligent agency." With what other theory are you comparing ID that can hypothesise a flagellum appearing out of nowhere? Evolution says the flagellum is built slowly and stepwise, not out of nothing; our (ID) problem with that is that we think some of the steps were impossible. GSV
Prof O @192 You say we can never rule out all chance hypotheses. That is true but, happily, we don't need to. Proofs are for math. Science doesn't require proof. It requires falsifiable, at least in principle, hypotheses. Thus we can state the ID hypothesis for the flagellum as: The ex nihilo creation of a flagellum cannot occur absent intelligent agency. The hypothesis can be falsified by a single observation of a flagellum forming ex nihilo by law and chance alone. The fine-tuning problem isn't the same, because making it falsifiable would require observing a universe being created by law and chance alone. That's not an observation that can be made, even in principle, in any known fashion. Thus the problem remains in theoretical physics, and ID is one of three possible explanations, along with the discovery of a law that demands just the right amount of mass in a universe, to within a single grain of sand, so that stars and galaxies can form, or the existence of an absurdly large number of universes (such as the 10^500 solutions to string theory), all with different mass/energy totals, with ours being one of that set because we couldn't exist in any of the others. This is a very real problem for physicists. Fine tuning isn't something that cdesign proponentists made up for apologetic use. It's something that emerges from the laws of physics. Einstein, IIRC, was the first to find it; he called it the cosmological constant, which was required for a flat universe. He later thought it was a mistake, that it should have been zero and dropped from the general relativity field equations. However, today we believe it isn't quite zero but rather a number on the order of 10^-60 which, back in Einstein's time, was not distinguishable from zero. Its value today comes from observation, not theory. Another huge problem in cosmology is that quantum field theory predicts the cosmological constant should be 10^120 times larger than the observed value. 
There is no quantum theory of gravity and that's a huge gap in our understanding of nature. The holy grail of theoretical physics is a theory of gravity that encompasses both the quantum and macro scales. Classical and quantum mechanics are reconciled for all forces of nature except for gravity. DaveScot
StephenB[224]:
It doesn’t help that your paragraph about what you do believe is followed by a clarification described in terms of what you don’t believe. Why not give it another try and put it in the form of an affirmation.
The first paragraph is what I'm affirming. You dispute it as follows:
We do not assume that humans create specified complexity, we know it to be a fact.
Who is "we"? The ID community or scientists in general? Facts in science are based on data. Where are the specified complexity data published?
That is why I raised the example of written paragraphs and sand castles. I trust that there is no need to provide a trillion other examples of humans creating design and no known examples of natural processes creating design. If you are disputing this point, please let me know in the most explicit terms possible so that we can discuss it.
You seem to be conflating design with specified complexity. Are the terms synonymous in your mind? In the most explicit terms possible, I'm saying that "specified complexity" has not been accepted as a legitimate scientific concept by the scientific community. R0b
Can we now say that it is a million times more probable that she is guilty than innocent? I wouldn't. I would say, however, that it is a million to one that I'd hire her as a babysitter. Actually, it's closer to 10^150 to 1 tribune7
BA 77: I have little time or space to engage all the rabbit trails on this thread [cf. my always linked . . . including on Bayes, Fisher and Caputo], though I note SB and GP have raised some very useful points. That Weak Arguments FAQ (and glossary . . .) revision from the existing one will prove useful I believe . . . I will however respond to yours, @ 197:
Are they actually getting a pure measure of functionality in information here? i.e., is it an across-the-board approximation of 3-D functionality to information?
First, a "no-brainer" footnote, that bio-functional, algorithm-driving information is precisely that: information. The 4-state G/C/A/T digital patterns in DNA strands make a meaningful -- functional -- difference to the implementing cellular machinery that uses it to step by step assemble proteins, whose function is in key sections [e.g. for folding and/or key-lock fitting and/or bringing to bear the right chemical functional groups in the right slots in an enzyme . . . ] very sensitive to composition. KD cites Axe et al (in a peer-reviewed paper . . . FYI, Judge Jones!) on how that sometimes at least works out: ~ 1 in 10^65 - 70 or so in the peptide sequence space. [And that is an empirical -- observationally based -- probability estimate for those who don't know what such is.] Since Hazen et al specify -- one assumes observed, or at least calculable in light of observations [e.g. on folding] -- function in the Fits eqn (p. 2), they are measuring just that: functional info in bits. I suspect the issue over length vs fits has to do with degree of isolation: a more hard-to-find island of function has more info in it -- it gives us a bigger "surprise" to see it, i.e. more info. And yes, surprise is an info metric too, leading up to the - log[prob] type metric. (Brillouin negentropy info is a related metric and may tie into the genetic entropy of your concern. Cf my note.) Hope that helps GEM of TKI PS: Our friend to the S celebrated his NY in fine style with a 12 km high blast. No hope for better futebol but Lionel Baker made the WI cricket team, which is -- [linked?] on the mend it seems. (At least, that's my hope. How are the formerly mighty fallen! Sigh.) kairosfocus
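As a small aside on the -log[prob] metric mentioned above: converting a target probability into bits of functional information is a one-liner. A sketch, using the ~1 in 10^65 Axe-style estimate cited in the comment purely as an illustrative input:

```python
import math

def info_bits(p_target):
    """Surprisal in bits: I = -log2(p). A rarer functional target
    gives a bigger "surprise", i.e. carries more information."""
    return -math.log2(p_target)

# Illustrative input: the ~1 in 10^65 estimate cited in the comment
print(round(info_bits(1e-65), 1))  # about 215.9 bits
```

This is the same -log form that underlies the Fits measure in the Hazen/Szostak and Durston papers, though their formulas are over observed sequence ensembles rather than a single probability.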
Dave Scot [217] If you want to continue participating in this thread I suggest you drop the pedantics. I apologise. I will refrain from pointing out any minor or careless errors that you make in the future. If the odds of something happening are given as 9:1 by definition the reciprocal, the odds of not happening, are 1:9. That is no error. I don't mind when real errors are pointed out. -ds Mark Frank
StephenB (#224): Very well said. I have not had the time to follow this thread in detail, but I think you have pointed to very fundamental issues. So, just to add the strength of repetition: R0b says: "The point is this: If ID proponents want ID to be a part of mainstream science, with all of the benefits that entails, then they must start the discussion with assumptions that are already established." That really astonishes me, R0b. I think you are confounding facts with assumptions. In science, one has to start the discussion with "facts" that are already established, and then anyone can make any reasonable assumption about those facts. That's how science works. I can't understand why basic epistemology is so often violated in the discussions here. If we "had to start the discussion with assumptions that are already established", in order to please darwinists and "be a part of mainstream science", then any false assumption in mainstream science could never be challenged! Is that what you support? As for me, I do prefer not to "be a part of mainstream science" which is so seriously biased in many fundamental issues. So, I will stick to facts, and not to "established assumptions". R0b says: "The assumptions that human design activity does not reduce to law+chance, that specified complexity is a coherent concept, and that humans create it and nature does not are not established facts in science." An assumption is not a fact, nor does it become a fact. At best, an assumption is "supported" by facts. So, let's put things a little bit in order: a) The assumption that human design activity does not reduce to law+chance: that is as much an assumption as its opposite, that human design activity reduces to law+chance. Being two logically exclusive affirmations, one must be true, and the other false. I don't think we know for certain, at present. 
Therefore, anybody is free to choose either assumption, and to argue in its favour, trying to show which assumption is at present the best explanation for the facts we can observe. That is called scientific debate. Your suggestion, that we should "start" with the "established assumption" that "human design activity reduces to law+chance" (it is, indeed, the preferred assumption of most scientists today), just because it is the prevailing assumption in a social circle, is called scientific tyranny. b) That specified complexity is a coherent concept: that is not an assumption: specified complexity is something we "define". As with all definitions about something observable, it can be done in different ways, and darwinists speculate on those differences. But that is simply not correct. If one gives a definition, it can be coherent or not. If it is not coherent, just show why. If two definitions are slightly different, that is not incoherence: two different people are just defining two slightly different concepts, which can both be coherent, and bear no contradiction. Many times, even with you, I have suggested that, for operational reasons, we stick in our discussions of a very generic nature about ID to some very simple and unequivocal definition of CSI. As you know, my favourite one is "any string of digital information which is functionally specified (that is, for which a function can be explicitly described in a specific context) and which has a complexity (probability of the target space) which certainly is lower than a conventional threshold (which for a generic discussion we can well assume at 1:10^150)". Now, that is a very simple and unequivocal definition of CSI. We can discuss what we mean by function, or how the complexity can be calculated in specific cases, but that does not make the definition incoherent. A definition must only define something which we can observe. And CSI is an observable property, not a theory. 
c) and that humans create it: well, given a coherent definition of CSI, such as the one I gave in the previous point, I think it is very easy to show that this point is an observable fact (and a very easily observable one): just take this (rather long) post. It is CSI in that sense, without any doubt. Are you doubting that? Or are you doubting that I am human? Or are you just doubting that what I write has some meaning? So, are you doubting that humans can routinely output strings of digital information which have some definite function and exceed the complexity threshold I have indicated? Please, be very clear on that. d) and nature does not: well, there is no assumption here. We are just affirming that nature, "as far as we know", and with the only exception of the subset of biological information, which is the object of the ID discussion, shows no example of spontaneous CSI. Again, we take here for simplicity the CSI definition I gave. And I am not saying that tomorrow we cannot find an example of spontaneous CSI in nature. I am not making a logical statement here. I am just making an objective statement about what we have so far observed (an empirical statement). If my statement is wrong, then you can easily show why: just show us a known example of spontaneous CSI in nature. So, to sum up, we have: in a), two competing, and mutually exclusive, assumptions about human design, none of which can be established as better by simple authority or conformism. in b), simple definitions of CSI which can individually be discussed for their coherence. in c) and d), two very objective statements about observable properties, provided that we use a definition whose coherence we have verified. Nowhere here do I see anything like "established assumptions". Nowhere do I see any indication to adopt a conformism which requires absolute betrayals of epistemology and logic just to be defined. gpuccio
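The 1:10^150 threshold in the definition above can be operationalized in bits: a target-space probability below 10^-150 corresponds to more than roughly 498 bits. A minimal sketch of that bookkeeping, where the probabilities fed in are hypothetical placeholders, not measurements of any real string:

```python
import math

# The conventional 1:10^150 cutoff from the definition above,
# expressed in bits (~498.3).
THRESHOLD_BITS = -math.log2(1e-150)

def below_threshold(p_target):
    """True if the target-space probability falls below the 1:10^150
    cutoff, i.e. the specification carries more than ~498 bits."""
    return -math.log2(p_target) > THRESHOLD_BITS

print(below_threshold(1e-160))  # True: past the cutoff
print(below_threshold(1e-65))   # False: improbable, but not past the cutoff
```

The hard part, of course, is the one the sketch assumes away: estimating p_target for a real biological string, which is exactly what is under dispute in this thread.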
Wow, thanks for posting this video. Outstanding. And it has PZ running scared. Great stuff. William Wallace
----Rob: "What I have said is that some of the fundamental assumptions on which specified complexity arguments are based have not gained acceptance in mainstream science, so it seems that the ID community might want to establish those assumptions before arguing from them." Perhaps I read something into your comments that wasn't there. Which assumptions are you alluding to? For the record, here is your comment that I was responding to: ----"The assumptions that human design activity does not reduce to law+chance, that specified complexity is a coherent concept, and that humans create it and nature does not are not established facts in science." And now your more recent comment: ----"Just to be clear, here are some things that I haven't said in this thread: - Human design activity reduces to law+chance. - "Specified complexity" is an incoherent concept. - Humans can't generate specified complexity. - Nature can generate specified complexity." It doesn't help that your paragraph about what you do believe is followed by a clarification described in terms of what you don't believe. Why not give it another try and put it in the form of an affirmation. Meanwhile, the critical point is this: We do not assume that humans create specified complexity, we know it to be a fact. That is why I raised the example of written paragraphs and sand castles. I trust that there is no need to provide a trillion other examples of humans creating design and no known examples of natural processes creating design. If you are disputing this point, please let me know in the most explicit terms possible so that we can discuss it. StephenB
StephenB[220], Just to be perfectly clear: I agree with everything you have said about spears and sandcastles. Now let us go back to the discussion we had when you joined. Tell me how to do the analysis for the flagellum. Don't just avoid answering by saying "like Kirk does." As it is unclear whether he intends to do a likelihood comparison or a Bayesian inference, I'll let you choose. Let's go. Prof_P.Olofsson
Stephen[220],
Well, we do it exactly the way that Durston has indicated.
Please, you tell me how we should do it for the flagellum. You ask a lot of questions, how about giving an answer for a change? Prof_P.Olofsson
All, Lest you think that the discussion about conditional probabilities is purely academic, let us consider the real-life case of Sally Clark. She had two babies who died at a young age and was charged with double murder. There was no other evidence against her. An expert witness stated that two cases of SIDS (sudden infant death syndrome) were extremely unlikely and she was convicted. The conviction was later appealed and she was acquitted, but only after spending a couple of years in jail. Let us look closer. We have the evidence E of two dead children. We have two competing hypotheses to explain the evidence: guilt (design) and innocence (chance). [For the sake of simplicity, let us neglect the possibility of one murder and one case of SIDS.] Under the assumption of guilt, the evidence is certain so we have P(E|guilt)=1. Assuming innocence, the probability of E is the chance of 2 cases of SIDS. To have a number, let us say it is one in a million: P(E|innocence)=10^-6 (fairly close to the real number). Can we now say that it is a million times more probable that she is guilty than innocent? I'll leave it there for now as a homework assignment. We can do the full analysis tomorrow. The analogy with Kirk's analysis is obvious: he claims that ID is 10^80000 times more likely than chance, and he does it based on conditional probabilities where ID and chance are to the right of the conditioning bar (see his Example in his post 95). Prof_P.Olofsson
Professor Olofsson [214]: Well, we do it exactly the way that Durston has indicated. Beyond that, I will simply make the general point. If I saw a 500 word paragraph written on the surface of the planet Mars, I would know that it probably didn't occur as a natural event. If, on the other hand, I simply observe the word "Olofsson," I would still assume the same thing but with much less mathematical certainty. If I only see the letters Olo, I will shrug it off as a coincidence (or natural occurrence). The mathematical proportions in the aforementioned example are clear enough, so there is no reason in the world why statistics cannot express those proportions, all your claims to the contrary notwithstanding. I have already successfully refuted the argument that we somehow need to have prior knowledge about these events to measure them or that we need to know anything about the behavior of the designer that caused them. So, if I notice four nucleotides, each similar to a letter in the alphabet, continually rearranging themselves in multiple patterns with millions of permutations and combinations and working in concert as if in a small factory, design is indicated with a high degree of mathematical certainty. There really shouldn't be much debate about that. Even so, some on this thread deny or ignore even these elementary facts, which is why I find it necessary to call everyone's reluctant attention to the fact that sand castles are obviously designed. Note that I had to bring everyone in kicking and screaming on that one. Under the circumstances, I have to believe, perhaps unjustly, that all their objections about methods are, at least in part, contrived. So, when we debate these same folks on the math, as we must, we must also deal with the fact that many of them, against reason, rule out design in principle. For them, design is nothing more than a mental construct, and this unwarranted presumption muddies the debate waters. 
In a way, it's like trying to discuss Shakespeare with someone who thinks that language is an “illusion.” So, the first order of business, for me at least, is to liberate ID critics from their neglect and horror of the obvious. StephenB
R0b, The whole scientific community accepts the concept of functional complex specified information. They just do not call it that. If you talk about how DNA is information and is complex, they will all nod their heads yes. If you talk about how DNA specifies a protein, they will nod their heads yes and know you are talking about the translation and transcription processes. If you say the proteins are functional, they will nod their heads. If you ask the question the right way, they will admit that they cannot think of any other place in nature where this happens. They will also admit that DNA acts like a code and that a computer code is similar to DNA in that the code is complex, specifies another process in the computer and this process has function. They will also say the same thing about human language. Now what they will not say is that the FSCI of DNA did not have a natural origin. You can bring all sorts of arguments such as probabilities, no obvious predecessors, the lack of similar other DNA strings etc. but they will not grant you anything. Just look at the response of some on this thread or on the thread of Dembski's two papers. Now I believe that Kirk Durston may be doing just what you are asking for but it won't have much effect on people's way of thinking. They will deny the hand in front of their face before they give ID an inch. It is an ideology debate, not one of science. I have said elsewhere that the most interesting thing about this debate is the refusal of many to accept the obvious or even admit that the obvious might be possible. FSCI exists and is easy to explain but they will deny each part of it here and, to use an expression from an above comment, say that what we say is daft. jerry
DaveScot[217], Minor point: Probabilities as numbers between 0 and 1 is more than "legitimate," it is how they are defined mathematically and how all results and theorems are formulated. You are correct that everyday uses of percentages and odds are equivalent. Odds of 10:1 against an event corresponds to the probability of that event being 1/11. I'm OK with that and I suppose Mark is as well. The more substantive point he makes is your use of conditional probabilities though. When you say "probability of design" and then write P(e|ID), you are inconsistent. As I outlined in 185, we need to be careful with probability statements involving conditional probabilities. Assume guilt and a DNA match has probability 1. Observe a DNA match and you cannot compute the probability of guilt without estimating other probabilities. While you're here, I'd still be interested in how you think we should resolve the fine-tuning problem in posts [170] and [175]. The numbers 10^80 and 10^20 don't give us any information on what prior distribution we can assume (whatever it means that the particles of the universe are randomly generated in the first place). Prof_P.Olofsson
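The odds-to-probability conversion Prof Olofsson describes here is mechanical, and easy to verify; a minimal sketch:

```python
def prob_from_odds_against(a, b):
    """Odds of a:b against an event correspond to probability b / (a + b).
    So 10:1 against means 1/11, as noted in the comment above."""
    return b / (a + b)

print(prob_from_odds_against(10, 1))  # 1/11, about 0.0909
```

The reciprocal relation Mark Frank and DaveScot argue over falls out of the same formula: a:b against an event is b:a in favor of it, and the two probabilities sum to 1.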
Mark Frank re; reciprocals If you want to continue participating in this thread I suggest you drop the pedantics. The most common forms of expressing probabilities to the vast majority of people are in percentages (look at a weather forecast) or as a ratio (look at horse racing and other gambling). Expressing them as a number between 0 and 1 is certainly a legitimate third way but it's not common usage. The Merriam-Webster Thesaurus entry for 'probability' lists as synonyms 'chance', 'odds', and 'percentage' so I don't know what you mean by saying odds are different than probability. Perhaps you should be arguing on Merriam-Webster's blog instead of this one. Good luck with that. DaveScot
Professor, 185 & 192 do address my points and I think they are sensible. I had overlooked them and hence, I apologize. If we assume “chance” we can compute probabilities I have no problem assuming chance to compute a probability. My problem is rejecting design to follow a dogma i.e. P(evidence given ID)=0 which I think is that state of things regarding the powers that be with regard to this debate. I think that one looks at the evidence, tacitly assumes a particular designer and concludes that, yes, that evidence is precisely what we would get from this particular designer. What would convince you that it's about design and not the designer? So the problem becomes, what exactly is the “ID hypothesis”? If it is merely “intelligent design has been observed,” about which we all agree, Professor, I think the hypothesis is more along the lines that intelligent design is quantifiable. And as much as I respect Dr. Dembski & I'm cheerleading for KD, I think ID is a work-in-progress and is subject to falsification, scrutiny, criticism & improvement. And it might even turn out to be unsustainable. But it is not something that should be dismissed (which I'm not saying you do). tribune7
StephenB [157] and jerry, I'm confused (which is admittedly a common state of mind for me). I can't tell which, if any, of my statements in this thread you disagree with. Can you help me out here? Just to be clear, here are some things that I haven't said in this thread: - Human design activity reduces to law+chance. - "Specified complexity" is an incoherent concept. - Humans can't generate specified complexity. - Nature can generate specified complexity. What I have said is that some of the fundamental assumptions on which specified complexity arguments are based have not gained acceptance in mainstream science, so it seems that the ID community might want to establish those assumptions before arguing from them. A good first step might be to submit a paper about specified complexity to a scientific or mathematical journal. (I know that Meyer's "Origin" paper talked about specified complexity, but as a survey paper, it reported on it rather than tried to make a case for it as a legitimate scientific concept.) If you want to know my own views on specified complexity or design as the complement of law+chance, I'm happy to discuss them. But I don't know why ID proponents would care about convincing someone like me. There are important fish to fry out there, and I'm not one of them. R0b
StephenB[213], Sure. Now back to the context: How do you use this insight to do likelihood inference or Bayesian inference of, for example, the flagellum or the origin of life? Prof_P.Olofsson
----PO: "The first paleontologist knew what a spear was though." Perhaps, perhaps not, but the first person to recognize the unique pattern in a spear need not have seen one previously. Let's go back to the sand castle. The first person ever to observe one (that wasn't built in his presence) knew (beyond a reasonable doubt) that it was designed. StephenB
tribune[210], If we assume "chance" we can compute probabilities of the evidence at hand, whatever it is, for example the flagellum. Now, "chance" can mean many different things, but at least we can conceptualize how to find probabilities: combinatorial arguments, previous data, etc. Thus, we can assess P(evidence given chance). Now assume "ID". Should we take it for granted that P(evidence given ID)=1? In doing so, I think that one looks at the evidence, tacitly assumes a particular designer and concludes that, yes, that evidence is precisely what we would get from this particular designer. I don't find this logic convincing. So the problem becomes, what exactly is the "ID hypothesis"? If it is merely "intelligent design has been observed," about which we all agree, I don't see how that helps us compute the probability of the flagellum. So what specific ID hypothesis do you want to state, and how do you compute the probability of the flagellum under this hypothesis? I will continue to point out that I am making effectively the same argument as Dembski here; he does not wish to consider design hypotheses, only rule out chance. One might argue that he is not very constructive if ToE is supposed to be replaced by another theory, but as criticism of evolution it is perfectly acceptable. The difference in Mark's and my replies above to what we would do as ID proponents is that I took the negative road to shoot down darwinism and he the positive road to establish an alternative. Prof_P.Olofsson
tribune[210], Some probability calculations are proper, some are not. Read my posts 185 and 192. You should be happy that I'm siding so much with Dembski on this issue! Prof_P.Olofsson
Professor, you seem to be saying that ID theory is improperly using probability calculations because "we (don't) have empirical data to assess our hypotheses in the first place" Now the formation of flagellum as per ToE specifically prohibits the consideration of design. In fact, it dogmatically says random mutations fixed by natural selection is adequate. Now with the mutations being random, chance is a big part of the ToE. So if we can't use probability calculations without some data as to formation of a flagellum -- which we really shouldn't count on getting -- how can we assess the reasonableness of the claims of the capabilities of random mutations? tribune7
tribune[200], Where do I "seem to" say anything like that? Prof_P.Olofsson
StephenB[207], The first paleontologist knew what a spear was though. There are plenty of data that assist us in making that type of inference about human activities. I don't see how we can use any such data to estimate the probabilities needed for a Bayesian or likelihood analysis of biological phenomena. Prof_P.Olofsson
-----Professor O: "Yep, me too. As long as we have data, we can do Bayesian inference, whether explicit or implicit. We don't have much data on universes being created by chance or by design." Well, no, not really. We don't need data on previous or parallel universes to detect design in this one. This is the case at all levels. The first paleontologist to do the research was able to detect design in an ancient hunter's spear even when there was no precedent. It's the same thing with the first sand castle ever built, or the first love letter ever written. No parallel or precedent is needed. StephenB
-----PO: "Data about politicians can be used to assess the behavior of politicians. I wouldn’t use them to assess the behavior of the designer of the universe." On the other hand, you would use them to detect the "existence" of the designer of the universe. StephenB
tribune[203], Yep, me too. As long as we have data, we can do Bayesian inference, whether explicit or implicit. We don't have much data on universes being created by chance or by design. Prof_P.Olofsson
StephenB[202], Alliteral asphyxiation! By the way, now that you're here, in the last debate a while ago I wrote a last post for you. Just so I didn't do it in vain, here it is, comment 93. Apologies to the rest for using this thread for personal communication! Prof_P.Olofsson
Data about politicians can be used to assess the behavior of politicians. I wouldn't use them to assess the behavior of the designer of the universe. Neither would I. I would use data that can be used to assess whether it was a noble Native American who put markings on a piece of rock rather than wind and rain :-) tribune7
----Prof Olofsson: "That's double daftness dude." It's doubtful that your double-daftness-dude deduction deftly describes the Daffy Duck diversion. StephenB
tribune[200], Data about politicians can be used to assess the behavior of politicians. I wouldn't use them to assess the behavior of the designer of the universe. Prof_P.Olofsson
One major difference between such cases and "life & universe" is that we have empirical data to assess our hypotheses in the first place (cheating politicians have been known to exist). But the point is that we do have empirical data that design exists and has quantifiable characteristics. Hence, it seems rather arbitrary not to allow these characteristics (or the methodology used to find them) to be applied to determine an aspect of the nature of life. Another ironic point -- especially for Mark Frank to consider -- is that to require a benchmark in universe creation to determine the likelihood of design is a science stopper. You seem to be saying it is unfair to calculate the chance of something happening so we must assume it was chance that caused it to happen. As Mr. Spock would say: ruling out probability calculations to determine whether something happened by chance is highly illogical. :-) tribune7
Jerry[194]. That's double daftness, dude. Prof_P.Olofsson
#190 (Professor Oloffson, Mark Frank, or whomever), assume you’re an ID advocate, what would you do differently than Dembski, Behe or Durston when it comes to calculating the probabilities or supporting a design inference? I would concentrate on creating a genuine design hypothesis to compare to evolution. Who, when, how, why. Then you can start to try to find positive evidence. Mark Frank
kairosfocus, thanks for the link and the log function explanation; could you try to answer one more question for me? Are they actually getting a pure measure of functionality in information here? i.e., is it an across-the-board approximation of 3-D functionality as information? I ask this because this anomaly caught my eye in this paper, “Measuring the functional sequence complexity of proteins”: “although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY.” Thus, as you can see (me being unfamiliar with the math as I am), it appears they may actually be getting a (somewhat?) true measure of the functionality of a 3-dimensional structure translated into bits, which may be used to firmly establish the principle of Genetic Entropy. Thanks again for your help bornagain77
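For what it's worth, the Fit-density arithmetic quoted from the paper can be checked directly. A minimal sketch; the residue counts and Fit values are taken from the passage above, everything else is illustrative:

```python
# Fit density (Fits per amino acid) for the two proteins quoted above;
# the numbers come from the Durston et al. passage in the comment.
proteins = {
    "SecY": {"residues": 342, "fits": 688},
    "RecA": {"residues": 240, "fits": 832},
}

for name, p in proteins.items():
    density = p["fits"] / p["residues"]
    print(f"{name}: {density:.2f} Fits/amino acid")

# RecA's higher Fit density (~3.47 vs ~2.01) is what the paper reads as
# RecA being "more functionally complex" despite being the shorter protein.
```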
Seversky, I will, but I doubt I can say anything interesting or meaningful. I usually assume that it is reasonable to think of genes as carrying information. Prof_P.Olofsson
Patrick[190], You write
So, in regards to Kirk’s argument essentially your position comes down to asserting that “intelligence” as a foundational starting point is not qualified to your satisfaction?
Not at all. Where do you get that from? My fundamental criticism is outlined in my post 150(f), and there is some basic reading material in post 185. Kirk writes in boldface that he compares an "empirical probability" to a "Bayesian probability." This statement is unclear until we learn what type of inference he intends to use. There is a discrepancy between what he says in the video and what he explains in this thread. As for answering questions, I usually try to answer all that are serious and expressed in a civil manner. I may occasionally overlook some, so remind me please. Participants who have previously bombarded me with insincere questions or personal insults will no longer get replies. Not that I'm overly sensitive but it gives me a criterion to decide how to spend my time! Prof_P.Olofsson
Prof_P.Olofsson, We can be daft together and go watch Daffy Duck to get inspiration. jerry
I realize this is tangential to the subject of this discussion but I would be interested in Professor Olofsson's comments on Australian philosopher John Wilkins' argument here and here that it is misleading to think of biological systems like genes as containing information. Seversky
Patrick[190], I didn't mean to ignore you, sorry about that. There's a lot going on. I did actually address your questions in my post 185, in response to Robbie[181]. OK then, here is Olofsson the ID proponent: I would use strict elimination, but not the explanatory filter as it is too strict (we can NEVER rule out all chance hypotheses). I would formulate a collection of chance hypotheses that would be acceptable to the biological community, taking into account reproduction, mutation, and natural selection. Next, I would form a reasonable rejection region consisting of observed outcomes together with other possible outcomes (think flagellum here, which is not the only conceivable motility device). Then I would compute the probabilities of the rejection region under the chance hypotheses and show that they are all ridiculously small. If anybody would object that I cannot assess the likelihood of the hypotheses themselves, I would have to agree but argue that there is nothing else we can do. Prof_P.Olofsson
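The eliminative recipe sketched above can be illustrated with the Caputo case that comes up repeatedly in this thread. A sketch using the standard numbers from that case (41 ballot draws, 40 favoring one party, a fair 50/50 chance hypothesis); these numbers are not from this thread's text:

```python
from math import comb

# Rejection region: "40 or more Democrat-first draws out of 41" under the
# fair-draw chance hypothesis -- outcomes at least as extreme as observed.
n = 41
tail = sum(comb(n, k) for k in range(40, n + 1))  # 41 + 1 = 42 outcomes
p_rejection = tail / 2**n
print(p_rejection)  # on the order of 2e-11 -- "ridiculously small"
```

This is exactly the eliminative comparison: the rejection-region probability is computed under the chance hypothesis and held against a preset small threshold, with no probabilities assigned to the hypotheses themselves.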
PO [188] I agree. I should have phrased this as "it means something to estimate the probability of an outcome given current evolutionary theory". The results will be incredibly unreliable, and for many (most?) outcomes the task is impossible, but where it can be done the estimate will at least pose the question "why is this estimate wrong?" which is useful for directing research. To put it another way - the difficulty of doing this type of estimate is solved through biology - not metaphysics. Mark Frank
Prof O. (and others) So, in regards to Kirk's argument essentially your position comes down to asserting that "intelligence" as a foundational starting point is not qualified to your satisfaction? That seems more like a philosophical objection than a mathematical one. Personally, when it comes to "intelligence" and the sciences I've always treated it like gravity. We can observe it functioning but we do not know exactly how it works. I'd also like to highlight this:
(Professor Oloffson, Mark Frank, or whomever), assume you’re an ID advocate, what would you do differently than Dembski, Behe or Durston when it comes to calculating the probabilities or supporting a design inference?
I've actually asked this myself many times of various people (and at least once for Prof O.) and I've been repeatedly ignored. Patrick
Footnote to [185]: You can use Bayes' formula without doing a Bayesian analysis. The former is a formula about probabilities in general; the latter is a particular way of doing inference. Prof_P.Olofsson
Mark[186], Now I must perhaps slightly disagree with you: I think it is very difficult to estimate probabilities given current evolutionary theory as this theory is often not quantitative in nature. For example, I think Dembski's "shopping cart model" for the flagellum is unreasonable and not motivated by biology, but I do not know with what to replace it. I re-read my post and what I mean is that P(E|C) is the only probability we can even hope to compute. Prof_P.Olofsson
As for "daft," yes, I jested in honor of Jerry the Celt. After consulting various online dictionaries, I conclude that "daft" does not describe my opinion of probabilistic analysis. Prof_P.Olofsson
Re #182 and #183 I agree (as usual) with PO. The logic of probability should be applicable everywhere. However, when you start to ask questions about the prior probability of ID or CHANCE then this is so meaningless it is useless. In the Caputo case, for example, it is quite meaningful to ask ourselves how likely was he to have cheated even before we saw the evidence (knowing politicians - quite likely). I still find Bayesian and comparative likelihood logic useful for thinking about ID in a structured way. But you can't do the actual calculations - except - as PO says - you can estimate the probability of an outcome given current evolutionary theory. Mark Frank
Robbie[181], Sure, ID is possible, but if we want to base our inference on probabilities, we must quantify it. How do we quantify "possible"? As for the gentlemen you mention, Behe argues mostly qualitatively from the field of biochemistry. I cannot judge those arguments. His forays into probability and statistics have been fairly rudimentary. If I were an ID proponent and were to choose a general inference strategy, I would probably go along the lines of Dembski's eliminative approach, but without making arbitrary uniformity assumptions that no biologist would support. For a simple probabilistic analysis (returning to the big questions about life, the universe, and evolution) we have two competing hypotheses: C for chance and D for design, and some evidence E. There are then the following probabilities involved, in the usual notation for conditional probability [P(A|B) means the probability of A if we have access to the information given by B]: P(C) and P(D) -- how likely are the hypotheses without any prior knowledge? P(E|D) and P(E|C) -- how likely is the evidence to occur assuming each hypothesis? P(D|E) and P(C|E) -- how likely is each hypothesis given the evidence? These probabilities relate to each other through Bayes' formula. In a likelihood comparison, you compare P(E|D) to P(E|C), and in a Bayesian analysis you compare P(D|E) to P(C|E). In a strictly eliminative analysis, you compare P(E|C) to some preset number that you think is small enough to warrant rejection. There are problems with all 3 approaches but my opinion is that only P(E|C) can be computed or estimated. Dembski's "explanatory filter" is also based on this premise. It is currently unclear whether Durston wishes to do a likelihood or a Bayesian analysis (see my post 150). I think Mark's point with the planets was to illustrate that you can always bias your analysis in favor of intelligent design by assuming that P(E|D)=1.
Such an assumption is tantamount to assuming "guilt" in a court case because that is the one explanation that confers certainty on any circumstantial evidence. Prof_P.Olofsson
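To make the three comparisons concrete, here is a toy calculation with entirely made-up numbers (the comment's point is precisely that real values for these probabilities are not available). It also shows how setting P(E|D)=1 biases the Bayesian comparison:

```python
# All numbers below are illustrative assumptions, not estimates.
p_C, p_D = 0.5, 0.5        # priors P(C), P(D): assumed equal
p_E_given_C = 1e-6         # likelihood of the evidence under chance
p_E_given_D = 1.0          # the biased "design explains anything" choice

# Bayes' formula: P(D|E) = P(E|D)P(D) / [P(E|D)P(D) + P(E|C)P(C)]
p_E = p_E_given_D * p_D + p_E_given_C * p_C
p_D_given_E = p_E_given_D * p_D / p_E
p_C_given_E = p_E_given_C * p_C / p_E

print(p_D_given_E)  # essentially 1: with P(E|D)=1, the conclusion is baked in
```

The likelihood comparison (P(E|D) vs. P(E|C)) and the eliminative test (P(E|C) vs. a threshold) use the same inputs; only the Bayesian comparison also needs the priors.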
----Professor Olofsson: "Daft: (2) Scottish : frivolously merry." Oh come now my good professor. Surely, you jest. Reading Mark's comments in context we can safely conclude that the remark was meant to convey "daft" as "daffy," not frivolously merry. Still, I admire your deft, though daft, foray into damage control. StephenB
tribune[182], Yep. I assume you have been with us when we beat up the Caputo case which is a good example. One major difference between such cases and "life & universe" is that we have empirical data to assess our hypotheses in the first place (cheating politicians have been known to exist). Prof_P.Olofsson
Professor & Mark, Leave aside life & the universe etc. Are probability considerations useful in determining the existence of design in other phenomena? tribune7
Mark Frank (#114) " Assuming random inclinations of planets the probability of such an alignment happening by chance is one in several million (depending how you calculate it). This is nothing compared to the zillions you are working with, but quite big enough to make the point." Isn't the size of the number(s) precisely the point? As Prof Olofsson observes (#132): {...} all we are left with, and all that is hidden in Kirk’s computations, is calculating the probability of chance occurrence of various features in nature. Probably that’s all we can reasonably do, but then we need to be very careful with assumptions." What's reasonable here? Why is, for example, jerry (#130) being unreasonable when he says, {...} ID is possible as an explanation of life. If that is granted the real battle will be easier. Those who defend the extremely unlikelihood of naturalistic methods must by definition deny an intelligence prior to life or else their whole world view falls apart. {...} [Life: designed or not] At what point would the inference be reasonable? Are you (anyone) saying an ID inference is unreasonable (incalculable), or are you saying that the evidence contradicts it? (Professor Oloffson, Mark Frank, or whomever), assume you're an ID advocate, what would you do differently than Dembski, Behe or Durston when it comes to calculating the probabilities or supporting a design inference? Surely these guys deserve some credit; it's not exactly an easy thing to do. I can sometimes appreciate the idea that ID can never be big-T Theory - as Dr. Dembski has noted elsewhere, it's not mechanistic - but certainly the inference is sound or reasonable. No? Why not? Is the idea that life and its origins are the result of blind processes reasonable? Why? Robbie
jerry[179], Daft: (2) Scottish : frivolously merry. Prof_P.Olofsson
"I actually think the whole probability approach is daft " One would have to if one denies ID. That is because the logic and reasoning is so obvious and there seems no defense against it except to call it "daft." How is one to explain these incredibly complex and incredibly rare functional proteins? Probability is one obvious route to assess the difficulty by which all paths have to be evaluated. Whether it is being applies in the best possible way can be an argument but to call it daft is daft. Maybe you should suggest some other ways since modern evolutionary biology hasn't a clue. So step up and earn your Nobel prize because that is what is awaiting anyone who can do it. jerry
JayM, It seems like one has to dot every i and cross every t in these discussions when the comments are made in a hurry and you think the meaning is obvious. I was talking about non-life and the inability of nature to create FCSI. There has never been a case. That is why I delineated it that way. Both life, and humans as part of life, are also part of nature. As far as life is concerned, this thread and the other thread on Dembski's two papers are essentially about the ability of nature to create new FCSI in life or from scratch. All you are pointing to are slight modifications of current genomes and no really new FCSI. If you ask some of the regulars here they will point out that I am probably one of the most vigorous proponents of micro evolution on this site and its power to provide new variants of life and eventually new species. But these new organisms are not very different from their predecessor or original gene pool no matter how long the time period is. jerry
Mark[169],
I actually think the whole probability approach is daft
I agree, as far as the Bayesian or likelihood analyses are concerned. The only probability we can ever reasonably try to estimate is that of the evidence assuming a chance hypothesis, P(E|C). For this reason, I have repeatedly pointed out in this thread that I agree with Dembski: only elimination is at all possible. Not that P(E|C) is easy to estimate either, but at least it is conceptually clear. Thanks for your very clear and accurate explanations of basic probability theory. If we are to discuss these issues at all, we should all learn the basics. Prof_P.Olofsson
kairosfocus[174], Hello again my insular friend! I gave a very brief answer to bornagain[25] in my post [27], but I think it may have been overlooked. At any rate, your explanation is more complete and the key issue is the desirable additivity. Welcome to this thread. We are awaiting Kirk's reply. I hope all is well in the worst soccer nation in the world! Prof_P.Olofsson
DaveScot[170],
According to the laws of physics there could be any number of elementary particles in a universe ranging from zero to infinity.
OK, but what probability distribution do the laws of physics give us? There is no information about that distribution in the numbers 10^80 and 10^20. Prof_P.Olofsson
Jerry @161
The first example is in all life and the other two examples are of human activity. As stated above this phenomena exists no where else in nature and this comment itself is an example of functional complex specified information.
So far we're in agreement. The observations are that human intelligence generates CSI and that CSI exists in biological systems. There is one immediate problem, however. So far no ID researcher has demonstrated how to calculate CSI for a real world biological organism, or even a component of a cell. That's got to be the next step.
Now since no one can never say never there might be a time sometime in the future where someone demonstrates that nature can produce functional complex specified information.
This does not follow from your observations. Biologists have observed information being channeled into the genomes of populations of organisms via MET mechanisms. Unless you're denying the evidence for microevolution, this can't be disputed. That means that some level of CSI can be created by natural processes. The question becomes, how much? In order to answer that, we need the ability to compute CSI rigorously, an understanding of the limits of MET mechanisms, and an understanding of the topology of viable genome space. As Robert Heinlein said: "If it can't be expressed in figures, it is not science; it is opinion." JJ JayM
BA 77: Been busy on other things, so I didn't notice this thread. I see your @ 25: I am disappointed that no one here at UD has elucidated why the -log function was necessary. I am fascinated that this particular function is required. --> Why not look at my hopefully more or less 101 level discussion here? --> In a nutshell, -log2[X] converts X from a probability metric to an information one in bits [base e = 2.7182818 . . . would be in "nats," etc], with the properties we want such a metric to have, e.g. additivity so Info A plus Info B is Info (A plus B) --> Probabilities are inherent in information measures [and that is why there is so much selectively hyperskeptical noise against them above . . . but since for instance coding in DNA strands is independent of the chaining chemistry, we can in fact do a very simple calc as the presenter envisions (MF, I have a calculation here on Dembski's related metric . . . )] --> What was done in the presented paper is to simply assess the fraction of a contingent space that exhibits function [net area of the archipelago of function], and take the ratio to the whole space [area of the space as a whole] as a probability metric, per Laplacian indifference. --> By definition, information can only be stored in a contingent system [think of the states of alphanumerical characters: no state variability from a program or a law, no info storage; and random arrays are, for sufficiently complex function, maximally unlikely to hit on function. Monkeys hitting keys at random do not Windows 7 make -- and MS is proof positive that less than perfect design is still design.] --> So, if you want to say some version of pre-life NS and some sort of pre-life chemical ladder leads up to life in some sort of pre-biotic soup [realistic empirical evidence, please . . . ?], you are in effect saying that the contingency was displaced from the pre-life chemistry to the underlying physics that makes for the chemistry.
--> Since that info is not going to come out of lucky noise, all you have done is to say the keys are pressed to get the characters on the screen, and once the keys are pressed [physics is set up], the characters will form [pre-biotic chem ladder]. --> Thus, you are now looking at having implied that the cosmos is fine-tuned for life to emerge. And, as John Leslie has shown, that leads into the sort of local fine-tuning that makes a multiverse hypothesis irrelevant as an escape. --> And that is what the presenter raised and implied. Information, per massive experience, comes from somewhere; and that ain't from lucky noise. Trust that helps. GEM of TKI kairosfocus
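kairosfocus's point about the -log2 conversion and its additivity is easy to verify numerically. A minimal sketch (the probabilities 1/4 and 1/8 are arbitrary examples):

```python
import math

# -log2 maps a probability to an information measure in bits; for
# independent events, probabilities multiply while information adds --
# the additivity property mentioned above.
def info_bits(p):
    return -math.log2(p)

p_A, p_B = 1 / 4, 1 / 8
assert math.isclose(info_bits(p_A) + info_bits(p_B), info_bits(p_A * p_B))
print(info_bits(p_A), info_bits(p_B), info_bits(p_A * p_B))  # 2.0 3.0 5.0
```

Using natural log instead of log base 2 would give the same additivity, just measured in "nats" rather than bits.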
# 171 Dave - probabilities are always numbers between 0 and 1 - consult any text book. You seem to be talking about odds, which are different. In any case if you were talking about odds then there is no relationship between the odds of e given ID and the odds of e given non-ID. The odds of a black swan dying before its 10th birthday are very high. The odds of a non-black swan doing the same are almost identical. Mark Frank
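The odds/probability distinction Mark Frank draws, and his point that likelihoods under complementary hypotheses need not sum to 1, can both be sketched in a few lines. The swan mortality numbers are invented purely for illustration:

```python
# Odds o = p / (1 - p); probability p = o / (1 + o).
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

print(prob_to_odds(10 / 11))  # ~10, i.e. odds of 10:1
print(odds_to_prob(1 / 10))   # reciprocal odds 1:10 -> probability 1/11

# P(e|X) and P(e|not-X) need not sum to 1 (the swan example):
p_die_given_black = 0.95      # invented numbers
p_die_given_nonblack = 0.94
print(p_die_given_black + p_die_given_nonblack)  # well above 1, and that's fine
```

Note that reciprocal odds (10:1 vs. 1:10) do correspond to complementary probabilities (10/11 vs. 1/11), which is a different relationship from the one between P(e|X) and P(e|not-X).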
Mark Frank No, I was correct in using 'reciprocal'. If the probability of NID is 10:1 the reciprocal is 1:10 Before correcting someone it's advisable to make doubly sure a mistake was made in the first place. DaveScot
Prof O Yes, as I stated, we can make a probability distribution. According to the laws of physics there could be any number of elementary particles in a universe ranging from zero to infinity. There is no law that prefers any particular number or range of numbers. The current estimated number of solutions to 'string theory' is 10^500. Each solution is a universe with unique parameters. Among that set no one has yet discovered a solution that defines a feasible universe to say nothing of one in which stars and galaxies can form. Unless and until a law is discovered that limits a universe to some range of particle numbers we have a very, very low probability of having just the right number for us to be here talking about it. DaveScot
Re #163 Dave You ask for a probability of design for a certain thing P(e/ID) Do you mean the probability of design given the result - which is P(ID|e) - or do you mean P(e|ID)? I am guessing you mean P(e|ID) but this confusion runs through your comment. For the same certain thing can you provide a probability for non-design P(e/NID)? Well Kirk certainly made an estimate! But see my general comment below. Neither of them are 1, by the way. Agreed If you can give me the probability for your non-design theory then just take the reciprocal and you have the probability for mine. :-p I doubt it. I guess you mean the complement (the reciprocal of any probability less than 1 is greater than 1). But even then P(e|X) and P(e|-X) do not have to add up to 1. If you can’t give me a probability with calculations shown for how you reached it then that puts both theories on equal footing so then explain to me why yours should be taught while mine is excluded? Either way you answer, my point is made. Isn’t that just precious? I am not asking for a precise estimate for P(e|ID) or P(e|Darwinism) - which is not the same as NID. I was just asking Kirk to (a) Define what his ID hypothesis actually is (b) Accept that P(e|ID) is less than 1 I actually think the whole probability approach is daft - just too many unknowns and meaningless numbers. But Kirk and much of the ID community make this the foundation of their argument - so I am trying to meet them on their own terms. My reasons for rejecting ID are not to do with probabilities. Mark Frank
DaveScot[167], Thanks for the info and the link. It is fascinating stuff indeed. Not to belabor the point, but I still wonder how we can do the probability calculation that the filter requires. We must find the probability that the initial number of particles is in the interval 10^80 plus/minus 10^20. But from what probability distribution was this number drawn? Can we really make any reasonable assumptions here? What does it even mean that this number was drawn from a probability distribution? Assuming an infinite number of universes would explain everything. Every event, no matter how improbable, occurs infinitely many times. Although I'm on Team Darwin, I wouldn't use it! Getting late, goodnight for real! Prof_P.Olofsson
Professor O The explanatory filter applied to the fine tuning of the universe. The number of elementary particles in the universe is approximated to be 10^80. There's no known law which says it has to be that number. It could be any larger or smaller number. This establishes the complexity of the number since it is one specific number among an infinite or nearly infinite set. Physicists tell us that this specific amount of mass energy, plus or minus about 10^20 particles (about the number of particles in a grain of sand), is required to balance the gravitational force in the universe such that it doesn't prematurely collapse before stars and galaxies could form or fly apart so quickly that stars and galaxies could not form. Thus the number has a specification - it allows stars and galaxies to form. Now that we have established specified complexity we must assess whether or not law & chance could reasonably be responsible for the specified complexity. There is no known physical law that demands any specific mass energy of the universe but that's not to say there is no physical law, just that no one knows of it. There is a chance hypothesis generally called the multiverse which postulates an infinite or nearly infinite number of universes that exist either serially or in parallel and we just happen to be in one which allows stars and galaxies to form. For further reading on this see my article below which describes the current thinking by cosmologists on this issue. Cosmologists and physicists don't like to say it but when pressed they acknowledge that ID is still one explanation on a very short list of things that might explain the fine tuning problem. After 40 years of silence Analog Magazine finally tackles Intelligent Design DaveScot
DaveScot[164], Luck or not, you suggested that the filter can be applied and I cannot see how. For the Caputo case and the flagellum, we can compute probabilities (at least try to) but here I don't see what we can do. Prof_P.Olofsson
jerry[162], Thanks. Weinberg is great. In the late 70's I read his book about the "first 3 minutes" and also watched him on TV when he came to Sweden to get the prize. I also saw him give a talk at Rice University, being critical of Thomas Kuhn. Alas, he was also critical of the great Swede Hannes Alfven but I suppose I can accept that. Anyways, I still don't understand how one would apply the filter to this fine-tuning problem. It's very different from other problems we have discussed, such as the Caputo case or the bacterial flagellum. I can't see how we could possibly compute a probability. It's good that there is accuracy to the 120th decimal place though, otherwise we wouldn't be here to wonder why it is so. Prof_P.Olofsson
Jerry You're correct. That's the fine tuning problem. If the universe began with an amount of mass energy different from what it was by a single grain of sand then it would have either collapsed under its own weight without forming stars & galaxies or it would have inflated too fast for stars & galaxies to form. There's enough mass in the universe for approximately 10^60 grains of sand. If one more or less we wouldn't be here to number them. What luck! DaveScot
Mark Frank You ask for a probability of design for a certain thing P(e/ID) For the same certain thing can you provide a probability for non-design P(e/NID) ? Neither of them are 1, by the way. If you can give me the probability for your non-design theory then just take the reciprocal and you have the probability for mine. :-p If you can't give me a probability with calculations shown for how you reached it then that puts both theories on equal footing so then explain to me why yours should be taught while mine is excluded? Either way you answer, my point is made. Isn't that just precious? DaveScot
Prof_P.Olofsson, This is supposedly from a Scientific American article that is not online so I cannot verify it. It is from a Nobel Laureate named Steven Weinberg. One constant does seem to require an incredible fine-tuning -- The existence of life of any kind seems to require a cancellation between different contributions to the vacuum energy, accurate to about 120 decimal places. This means that if the energies of the Big Bang were, in arbitrary units, not: 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, but instead: 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001, there would be no life of any sort in the entire universe because as Weinberg states: the universe either would go through a complete cycle of expansion and contraction before life could arise, or would expand so rapidly that no galaxies or stars could form. Maybe someone else can verify the accuracy of this and whether it is still current or if Dave was referring to something else. jerry
Just to clarify my previous comment which I wrote quickly while at a friend's house. Functional complex specified information is 1) in all living things. 2) is in human activity every day. Any written activity counts as functional complex specified information. Speech is also an example. Now for animals which make signs and communicate there is a similar pattern and the main question I see is if it is complex enough. That could take a whole thread I am sure. 3) is not found anywhere in nature outside of life. 4) human intelligence does not yet have the ability to create life from scratch but has the ability to create additional functional complex specified information within a genome, primarily using other proteins and DNA patterns as a template. Now as best as I know none of this additional FSCI created by humans is new; they are copies of other sequences or similar sequences. I do not know if anyone has created a new functional protein from scratch. Maybe someone like Kirk would know the answer to that. These are all observations. So if one wants to assign probabilities to anything then these are some of the facts that one could use. Another is 1) the size of the functional complexity of the proteins specified by the DNA for even the simplest cell or 2) the size of the functional complexity for any protein not in these simplest cells and for which no naturalistic path is likely from those proteins in the simplest cell. Maybe before we are finished here we can learn more about assigning probabilities to various proteins that make sense. Anyway I am looking forward to learning more about this sans mathematical notation. Just in case people want to say that they do not know what use functional complex specified information is, here is a brief explanation. It is information that is complex, such as DNA, that specifies something else, such as RNA or a protein, and this RNA or protein has a function.
Another example is the alphabet and words which form the average sentence that is complex information that communicates (the specification) some thing, action or quality and this communication of a series of inter-related concepts has a function to inform others. Another is a computer code that is also complex information that specifies a series of actions by the computer that has a function such as printing. The first example is in all life and the other two examples are of human activity. As stated above this phenomena exists no where else in nature and this comment itself is an example of functional complex specified information. Now since no one can never say never there might be a time sometime in the future where someone demonstrates that nature can produce functional complex specified information. However till that time the most likely answer is that nature cannot do it. It is not clear that nature has ever added additional functional complex specified information to already existing FCSI. This last proposition might make an interesting discussion. I do not think anything I have said here is circular. jerry
Jerry @154
You’re assuming your conclusion here. What we observe is human intelligence generating CSI and CSI ostensibly in biological organisms.
I have asserted nothing. I am making observations.
I beg to differ. In 141 you said:
The fact that no one has ever given an example of this happening anywhere else except for human activity establishes the fact that it is unique and that nature does not have the power to do it.
Your conclusion that "nature does not have the power to do it", where "it" is the generation of CSI, does not follow from the observations. It is, in fact, the issue under discussion. MET mechanisms have been shown to be capable of transmitting information from the environment to the population of genomes that arises from the population subject to that environment. It is not obvious where the limits of these mechanisms lie. When we can identify those limits, we can then make the claim you do. Until then, it is one of the ID hypotheses, not by any means a proven fact. JJ JayM
I'd say that intelligent design can be established without a priori understanding of intelligence. The logic is a simple NOT conditional: what can't be produced by one method must be produced by another. If no knowledge of intelligence exists other than chance and law, then the other side of the coin logically fits with NOT itself. We can attribute the other side of the coin to design by intelligence, or simply design by x unknown feature. In this case (in the case of KD, WD etc...) we do have knowledge and understanding of what intelligence can produce and we do have knowledge of what chance and law by themselves can produce. His conclusions are thus very logical, as all ID proponents have been repeating over and over again. ab
StephenB "Once again, I will go for broke and claim with “virtual certainty” that in no case did law and chance cause the formation." However, since we were created by law and chance, and we created the sand castles, are they therefore not created by law and chance? :) I know my conclusion is resting on an unproven first statement, but I thought I would have a little fun and maybe learn something trying to think like a Darwinist. Peter
----Rob: "The assumptions that human design activity does not reduce to law+chance, that specified complexity is a coherent concept, and that humans create it and nature does not are not established facts in science. It seems that establishing those facts, which would involve some empirical work, would be a good first step for ID proponents that want to base their arguments on them." Let's take two quick examples and work our way down from there: I begin with the one-hundred fifty-five written posts on this thread. Most of them contain over 1000 information bits, and each one is specified. Now I am going to go out on a limb and assert that each instance was generated by human activity. Further, I will argue that not one instance occurred as a result of law and chance. Are you prepared to take up the other side of that argument? It would seem that you are. Would you care to make your case and explain why I have no right to make my claim? Here is another one just for fun: I point now to the numerous sand castles (2,000,000 grains per construction, formed to specificity) that have been observed on the oceans' shorelines. Once again, I will go for broke and claim with "virtual certainty" that in no case did law and chance cause the formation. Are you ready to argue against that proposition? Will you seriously contend that the formulation is not "coherent"? StephenB
By the way, while we're waiting for Kirk, does anybody know to what DaveScot is referring in his post [119]? How does one apply the filter to the fine-tuning of the universe? Prof_P.Olofsson
Adel[152], You are very welcome! I hope Mr. Durston can catch up on his work and his sleep, and be back here to continue the discussion. Perhaps we can sort out a few things regarding the inference and then we'll all disagree in the end! Goodnight y'all. Prof_P.Olofsson
"You’re assuming your conclusion here. What we observe is human intelligence generating CSI and CSI ostensibly in biological organisms." I have asserted nothing. I am making observations. jerry
KD Thank you for persevering through these questions. They must seem very elementary to you, considering how much you simplify your arguments and repeat yourself. And yet they still don't get it, or refuse to get it. I have recently read (significant portions of) Yockey's book "Information theory and molecular biology," a great book. He had already convinced me that it was statistically extremely unlikely for even a small protein to be created by natural methods, so forget about the first life form. So my question to you is: given the improbability of the simplest life being created by natural means, what is the likelihood that all the life forms that emerged in the Cambrian explosion were created by natural processes, given of course that life had already existed? If I understand you correctly, then it would be significantly less than 1/10^80,000, since you have already included all trials for the potential natural cause and in this case the amount of functional information required is vastly greater. Peter
Professor Olofsson, Thank you, too! Adel DiBagno
KD, thank you for your great input! It's much appreciated! :D skynetx
KD[135], Mr. Durston, let me join Mark in thanking you for taking the time to explain your statements and results. Let me respond to the posts in which my remarks were addressed. e) Your proposed "intelligence hypothesis" is
An attribute that distinguishes intelligence from mindless natural processes is the ability of intelligence to produce effects requiring significant levels of functional information.
This sounds more like a definition of intelligence than a hypothesis. A hypothesis is a statement that can be true or false, and to which we can assign probabilities. Your hypothesis can only be deemed true or false if we have a separate definition of intelligence. At any rate, it sounds as though your hypothesis really is "ID was required." f) Regarding your claim "It was about 10^80000 times more probable that ID was required," you say
I do not agree that the statement is inaccurately formulated. This is not a probability statement about ID; it is a probability statement about whether ID is likely to be required.
So the intelligence hypothesis is "ID is required." To avoid semantics, let me label it hypothesis "A." In the video you are making a conditional probability statement about A of the type P(A|E) where E represents any evidence under consideration, be it empirical data or something else. However, there is no such calculation to be found in your latest post. Your probabilities are all conditional in the other direction. Same issue further down:
It is not the probability of ID that I am dealing with, but the probability that ID is required.
that is, the probability of A. Yet, nowhere do you compute the probability of A. With your definition of "ID," I don't see the relevance of noting that P(ID)=1 and P(e|ID)=1, when you really should deal with the probabilities P(A), P(E|A), and P(A|E). [Note: I use "E" here to denote generic "evidence" as I don't want to misuse your more specific "e."] Stating that P(ID)=1 merely establishes that we all agree that intelligence exists. As an event with probability 1 is statistically independent of any other event, what is the point of P(e|ID)? It is equal to the unconditional probability P(e), which is already 1, as "e" demonstrably exists. [Parenthesis: Your bird analogy only illustrates that we can conclude that a set is non-empty by observing one of its elements. It doesn't let us draw conclusions about the individual elements, such as "every black bird likes acorns."] Further down you say
...how much more probable is it that ID was required than nature? The answer is P(33 Kbytes|ID)/P(33 Kbytes|B) = 10^80,000 In other words, ID is 10^80,000 more likely to be required...
so you are again making statements about the probability of A: "ID was required," yet all your probabilities are conditional in the other direction. I am honestly trying to follow your argument. I think my original criticism stands unopposed, but I am certainly willing to listen. I think a first question that must be asked, and that I really would like to get answered, is: Are you doing Bayesian inference or likelihood comparison? In the video you make claims that suggest the former, but your response above suggests the latter. I understand that my comment will be perceived by many as technical or pedantic, but it isn't. There are huge differences between different types of statistical analyses, both in terms of assumptions and conclusions. If you don't believe me, read Dr D's "Elimination vs. Comparison"! Prof_P.Olofsson
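Prof Olofsson's distinction between P(A|E) and P(E|A) can be sketched numerically. The numbers below are made up purely for illustration (they come from neither commenter): the point is that a likelihood ratio alone does not yield P(A|E) without a prior P(A).

```python
# Illustration (made-up numbers) of Bayes' theorem: P(A|E) depends on the
# prior P(A), not just on the likelihoods P(E|A) and P(E|not-A).
p_A = 0.5                # prior for hypothesis A -- an assumption, unknown in reality
p_E_given_A = 1.0        # likelihood of the evidence under A
p_E_given_notA = 1e-6    # likelihood of the evidence under the alternative

# Law of total probability, then Bayes' theorem
p_E = p_A * p_E_given_A + (1 - p_A) * p_E_given_notA
p_A_given_E = p_A * p_E_given_A / p_E

print(p_A_given_E)  # close to 1 here -- but only because of the assumed prior
```

Rerunning with a tiny prior (say p_A = 1e-9) gives a very different posterior from the same likelihood ratio, which is the substance of the Bayesian-versus-likelihood question being put to Durston.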
jerry @ 141
You are asking us to establish that 2 + 2 = 4.
I realize that the fundamental claims wrt specified complexity are as obvious as 2+2=4 to you, but after 10+ years of making those claims, the ID community still hasn't gotten any traction with them, at least not as far as mainstream science is concerned. I think that operationalizing and scientifically testing these claims would do a lot in this regard. R0b
KD @ 135:
If Rob wants to talk about carbon dust, then a very large number of configurations are possible (N=a large number), but almost any one of those piles of dust has the same function (the basic properties of piles of carbon dust), so M(Ex) = a large number and M(Ex) is approximately equal to N, with the result that the amount of functional information required is close to zero.
I'm talking about all possible configurations of the 10^23 atoms of carbon, which includes chunks of crystal, dust, and free-floating atoms. As you said, the number of configurations is large, and that's putting it mildly. If we're defining functionality in terms of hardness, the diamond lattice configuration marks a functionality threshold, below which are all but an infinitesimal fraction of the possible configurations. How does that not constitute a huge amount of functional information? R0b
jerry @ 102:
Diamonds don’t specify anything. They might be functional in some contexts and so may be a rock if you are defending yourself or build a wall.
I didn't say that diamonds specify anything. And I agree that both diamonds and rocks can be functional. Kirk Durston certainly hasn't disputed the functionality of diamonds. Dave @ 121:
A diamond does not rise to the level of specified complexity. While one might make a case that it has specification, as an abrasive for instance, it is not complex as the atoms are in a simple repetitive arrangement i.e. law and chance is a perfectly valid explanation for the structure of a diamond.
I explicitly said in the comment you referenced that I'm not talking about specified complexity, but rather Durston's functional information. And saying that a diamond is not complex because of its simple repetitive structure is a non sequitur. Many of Dembski's examples of complexity are simple and repetitive. Your "i.e." makes more sense. R0b
Jerry[140], I didn't ask about the numbers, I asked about the logic. Is Kirk assessing how likely an outcome is under the ID assumption or how likely ID is given the outcome? Prof_P.Olofsson
JayM: there has been a little discussion here on the subject of calculating CSI, mostly by gpuccio and kairosfocus. The formula gpuccio suggested is in fact Kirk's one we're discussing here, once the specification has been given. (however he seems to equate complexity with improbability, which as Davescot has already shown us is wrong; stuff can be improbable and simple at the same time.) Venus Mousetrap
KD @ 135 Again, thanks for putting so much work into your comments. There is so much that I would like to say in response, but this thread is already excessively long, so I am going to concentrate on one issue. What exactly is the "ID" hypothesis for the presence of proteins with high functional information in living things? You say: Intelligence hypothesis: An attribute that distinguishes intelligence from mindless natural processes is the ability of intelligence to produce effects requiring significant levels of functional information. That is a hypothesis (or possibly a definition) about intelligence, but it is not a hypothesis about how functional proteins came to exist. What is it that you are proposing led to the presence of functional proteins in living things (it certainly wasn't Craig Venter or any other human)? Without a hypothesis to explain the outcome we can't begin to talk about the probability of the outcome given the hypothesis, or vice versa. Mark Frank
Jerry @141
The fact that no one has ever given an example of this happening anywhere else except for human activity establishes the fact that it is unique and that nature does not have the power to do it.
You're assuming your conclusion here. What we observe is human intelligence generating CSI and CSI ostensibly in biological organisms. The ID hypothesis is that that CSI is also of intelligent origin. We need to prove that, not simply assert it. One step toward doing so is to clearly describe how to compute CSI in biological systems, knowing what we do of MET mechanisms. As someone pointed out earlier in this thread, that hasn't yet been done (at least, I haven't been able to find such documentation after considerable searching). JJ JayM
----Professor Olofsson: "There are certainly mathematicians on "our side" who work on applying math to evolutionary biology (such as Rick Durrett at Cornell) but I agree that much more can be done." This is all I have been asking for from "your" side. Provide me with your numbers and allow us to scrutinize them with the same rigor that you scrutinize ours. I say that the Darwinistic paradigm does not even come close to ID in lending itself to mathematical models. You don't seem willing to admit, or even comment on the fact, that this reciprocal point is critical to the overall debate. Rather, you seem to exempt yourself from that side of the issue even though your mathematical credentials are sufficient to address it. Are you now saying that the Darwinist formulation is precise enough to be measured mathematically? Isn't it time for you or someone in your camp to provide some semblance of an argument on this subject? StephenB
R0b, You are asking us to establish that 2 + 2 = 4. Basic biology textbooks describe the transcription and translation process. Any good biology textbook will give the history of the discovery process. That establishes the claim that DNA is complex, is information, and specifies a function. The fact that no one has ever given an example of this happening anywhere else except for human activity establishes the fact that it is unique and that nature does not have the power to do it. Or else there would be an example of it shoved down our throats. The fact that you use diamonds and others use thunderstorms shows that no example has been found. So in my two previous paragraphs I have established that DNA is functional complex specified information and that the only other place it exists is with humans. QED. jerry
Prof_P.Olofsson, "As for logic, what do you think about Kirk's claims that "ID is 10^80000 times more probable?" " You missed my comment that all the objectors can seem to come up with is to quibble over a zero or two. I will settle for 10^800 or, if push comes to shove, how about 10^8. Or how about plain old 10. I can just see it now: "World Famous Statistician Says ID Is 10 Times More Likely Than Naturalistic Methods." The next headline would be "Police Still Have No Clues as to What Happened to World Famous Statistician." jerry
KD @135
A distinguishing feature of intelligence is its ability to produce significant levels of functional information. Natural processes can produce very low levels of functional information within the ‘noise’ of random events. Intelligence, however, can produce effects that show up as extremely large anomalies within the background noise of natural processes. This artifact of intelligence can be used as an identifier for ID and the anomalies can be quantified using measures of functional complexity and/or functional information (the two are equivalent).
KD, thank you for coming here to clarify your presentation. I do have one question related to the above. From your paper and presentation, I'm unable to find any support for this claim. You appear to be assuming your conclusion. The whole point of design detection is to determine if life is designed. That means we need to prove that CSI (or functional information, in your case) cannot arise from natural processes, in particular the mechanisms of modern evolutionary theory (MET). Without this proof, your argument is susceptible to claims that it is circular. JJ JayM
R0b:
I’m open to correction. What studies have been done that involve specified complexity? Specified complexity is purportedly a rigorous metric, so the claim that humans create it and law+chance doesn’t should be empirically testable. Where are those tests published?
jerry:
Apparently you need a basic biology course.
That's certainly true, but I'm still hoping for answers to my questions above, which would necessarily include some references. The point is this: If ID proponents want ID to be a part of mainstream science, with all of the benefits that entails, then they must start the discussion with assumptions that are already established. The assumptions that human design activity does not reduce to law+chance, that specified complexity is a coherent concept, and that humans create it and nature does not are not established facts in science. It seems that establishing those facts, which would involve some empirical work, would be a good first step for ID proponents who want to base their arguments on them. R0b
Kirk: thank you for responding to my question, but the difficulty I was really having is that the calculation you perform for evolution (-log2[1/10^42]) didn't seem to make sense. With the information you've provided outside of the lecture I see now what the calculation means (a random walk in search space to find a 10^-42 sized island of it), but once more, this kind of random walk is a very unrealistic view of evolution. Natural selection does guide searches in spaces with smooth gradients, and nature does have such slopes. I really doubt the tornado-in-junkyard style is going to impress scientists. Venus Mousetrap
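For what it's worth, the calculation Venus Mousetrap refers to can be reproduced in a couple of lines. This is a reconstruction of the arithmetic only, not Durston's code: an outcome a blind search hits with probability 1/10^42 carries -log2(1/10^42) bits of functional information, which is where the "140 Fits" threshold quoted elsewhere in the thread comes from.

```python
import math

# Reconstruction (not Durston's code) of -log2(1/10^42): the functional
# information, in bits, of an outcome found with probability 1 in 10^42.
p = 10.0 ** -42
fits = -math.log2(p)     # = 42 * log2(10)
print(round(fits, 1))    # 139.5 -- roughly the "140 Fits" figure
```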
Typo problem: I notice that the symbol '?' appears in place of 'greater or equal to' and less than or equal to'. Hopefully, you can interpret the proper meaning give the context. In the preview, the proper symbol appeared, but when I posted it, the ? appeared instead. KD
I want to respond to a few of the comments made since my post. To help focus the discussion, I will break my response into sections a, b, c, … so that if someone has a concern about what I say below, he/she can refer to the appropriate section. a) Regarding function: Some have proposed an ad hoc function and then argued that functional information is ad hoc. It certainly will be if we just make up functions or let functionality be ad hoc. We can apply the equations for functional complexity and functional information to planetary orbits (as Mark Frank suggests) but before we do that, we must first decide what function must be satisfied and the function cannot be merely ad hoc or made up. For Mark Frank's example, no objective function was first proposed. Even if it turned out that planetary orbits in the same plane was necessary for life on earth, the probability of obtaining that, given the laws of physics and accretion disks around stars, would be very high with the result that little, if any functional information would be required to produce the effect. In other words, the functional state would deviate very little from the ground state provided by physics. Regarding Rob's worry about diamonds and functional carbon arrangements and crystal lattices, again function needs to be objective. Carbon has only two possible crystal lattice structures, graphite with sp^2 bonding and diamond with sp^3 bonding, so using Hazen's formula, N=2. Since there is only one option for diamonds (sp^3 bonding), M(ex) =1 in Hazen's formula and we see that diamonds require only 1 bit of functional information if we start with the null state. However, under certain boundary conditions, the sp^3 structure becomes the ground state and zero bits of functional information are required to form diamonds. 
If Rob wants to talk about carbon dust, then a very large number of configurations are possible (N = a large number), but almost any one of those piles of dust has the same function (the basic properties of piles of carbon dust), so M(Ex) = a large number and M(Ex) is approximately equal to N, with the result that the amount of functional information required is close to zero. When it comes to Venus Mousetrap's concern about defining biological function, we are in very good shape indeed. This has already been discussed in the literature and the bottom line is that the functions are any one of a host of biological processes. For a protein family, the sequences downloaded from Pfam represent a set of sequences that have actually been filtered for functionality by natural selection. When it comes to the minimal genome, the scientists involved in that research have a set of functional requirements that are a priori and objective. They then investigate different proteins and their functions to see what minimal set of proteins will meet the basic functionality defined by the objective requirements.

b) Using the findings of Glass et al., the minimal genome will require at least 382 protein-coding genes.

c) The average protein requires approximately 700 bits of functional information to encode (see my paper referenced in the previous post). Therefore, the minimal genome will require about 267,000 Fits of information (33 Kbytes).

d) A distinguishing feature of intelligence is its ability to produce significant levels of functional information. Natural processes can produce very low levels of functional information within the 'noise' of random events. Intelligence, however, can produce effects that show up as extremely large anomalies within the background noise of natural processes. This artifact of intelligence can be used as an identifier for ID and the anomalies can be quantified using measures of functional complexity and/or functional information (the two are equivalent).
e) Regarding Prof Olofsson's latest post (126), where he states, "What we need is of course a specific ID hypothesis and one that does not relate to humans," and Mark Frank's request that "the ID hypothesis needs to be clarified": The ID hypothesis was presented earlier in the video, but not shown in the brief video clip posted above. In the full video it is included in slide number 6. I propose the 'Intelligence hypothesis' as follows: Intelligence hypothesis: An attribute that distinguishes intelligence from mindless natural processes is the ability of intelligence to produce effects requiring significant levels of functional information. Given the above hypothesis, we have a method to detect whether ID is likely to be required (slide 7 in the lecture). We need two things: first, a method to measure functional information and, second, a method to estimate what a 'significant' level of functional information is. The significant level will be contingent upon the power of the search engine that intelligence is competing with.

f) Regarding Prof Olofsson's concern about my statement "It was about 10^80000 times more probable that ID was required…" (to produce the minimal genome): I do not agree that the statement is inaccurately formulated. This is not a probability statement about ID; it is a probability statement about whether ID is likely to be required. Mark Frank makes the same oversight when he states, 'It does not follow that ID is a zillion times more probable than B.' It is not the probability of ID that I am dealing with, but the probability that ID is required.
We already know with empirical certainty (ignoring philosophical skepticism) that ID exists (we have at least one example in humans, so the empirical probability of ID = 1) and we know that ID is capable of producing 33 Kbytes of functional information (again, we have more than one example), so we do not apply Bayes' theorem to figure out the empirical probability P(e|ID); the empirical probability P(e|ID) = 1. I wonder if both Mark Frank and Prof Olofsson are getting stuck on my using empirical observations of human intelligence as an indication of what we know ID can do.

Consider the question, 'When we consider all the possible birds on all the possible planets in the universe, can some be black?' All I have to do is find one example of a black bird (e.g., a Raven), and if we can agree, upon observing the bird, that the empirical probability that at least one bird is black is 1 (ignoring philosophical skepticism), then we have also proved with certainty the affirmative to the question of whether some birds can be black. In the same way, if we observe even one example of an intelligent agent producing an effect that requires 33 Kbytes of functional information, then even though we do not have a complete survey of all intelligent agents, we do know with certainty that some intelligent agents can produce 33 Kbytes of functional information, since it is an empirical fact in our case. In other words, when it comes to the questions of whether birds can be black, or whether ID can produce 33 Kbytes of functional information, black birds and intelligent agents producing 33 Kbytes of functional information are both empirical facts. If we can grant that the ability of ID to produce 33 Kbytes of functional information is an empirical fact, then the empirical probability P(33Kbytes|ID) = 1. What we do not know with certainty is whether ID will be required to form the minimal genome. We can only estimate how much more likely it is than natural processes.
Natural processes can produce e ≥ 140 Fits, albeit with decreasing probability as e increases above 140 Fits. At or below 140 Fits, I've generously given that natural processes can produce e ≤ 140 bits with a probability of 1. Notice that there is no hard cut-off in my method to detect whether ID is required. Instead of an either/or result, the method merely estimates the likelihood that ID is required, with the underlying assumption that once the probability that ID was required becomes high enough, a rational person will abandon natural processes and go with ID as the more probable explanation. So since ID can produce 33 Kbytes of functional information with demonstrable certainty (i.e., P(33 Kbytes|ID) = 1), and if it is given that nature can produce 140 bits with certainty (B generously represents what nature is capable of doing with certainty in all searches) and can produce 33 Kbytes with a probability of 10^-80,000, then whenever we observe that 33 Kbytes of functional information occurs, how much more probable is it that ID was required than nature? The answer is P(33 Kbytes|ID)/P(33 Kbytes|B) = 10^80,000. In other words, ID is 10^80,000 times more likely to be required to achieve 33 Kbytes of functional information than natural processes, given that natural processes can only reach 140 Fits with certainty and ID can reach 33 Kbytes with certainty. We are comparing an empirical probability of what ID can do with a Bayesian probability of what natural processes are likely to do, for any case where e ≥ 140 Fits.

Important point to re-emphasize: I am not interested in any particular data set. Rather, I am interested in the measure of Fits required to produce any data set. Step one is to see if ID is likely to be required by looking at how many Fits are required for the data set and comparing the results with what nature can produce with certainty and with decreasing probability. Once that has been established, we can then proceed to the question of who did it.
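The ratio described above can only be computed in logarithms, since 10^-80,000 underflows any ordinary floating-point number. A sketch of the comparison as stated, using KD's numbers but my own code:

```python
# Reconstruction of the stated likelihood-ratio comparison (KD's numbers,
# not his code). 10^-80000 underflows a float, so work in log10 space.
log10_p_given_id = 0.0        # P(33 Kbytes | ID) = 1, his empirical claim
log10_p_given_b = -80000.0    # P(33 Kbytes | B) = 10^-80000, his estimate

log10_ratio = log10_p_given_id - log10_p_given_b
print(log10_ratio)  # 80000.0 -> the ratio is 10^80000
```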
In SETI, archeology, and forensic science, step one is to see if ID was likely required and by what degree. If it is thought to be highly likely, then step two is to go with the ID option and learn more about who produced the signal/artifact/crime, how it was done and what can be learned from it. KD
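KD's arithmetic in sections a) through c) above can be checked in a few lines. This is a sketch using only the figures quoted in his comment (Hazen's measure with N = 2 lattice options and M(Ex) = 1 for diamond; 382 protein-coding genes at roughly 700 Fits each), not code from his paper:

```python
import math

def functional_information(M, N):
    """Hazen's measure I(Ex) = -log2(M(Ex)/N): N possible configurations,
    M(Ex) of which satisfy the function."""
    return -math.log2(M / N)

# Diamond example: N = 2 crystal lattice structures, M(Ex) = 1 (sp^3 bonding)
print(functional_information(1, 2))   # 1.0 bit

# Minimal-genome estimate: 382 protein-coding genes at ~700 Fits apiece
fits_total = 382 * 700
print(fits_total)                     # 267400, i.e. roughly "267,000 Fits"
print(fits_total / 8 / 1000)          # 33.425, the "33 Kbytes" figure
```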
DaveScot[119], I know this is a tangent to the Durston discussion, but as we're waiting to hear from him and you brought it up, I wonder about your statement
...the fine tuning of the universe, which if it was different by one part in 10^60 the universe would not be gravitationally stable (if the universe was heavier or lighter by as much one grain of sand it would not be stable), comes up with a positive result.
How do you get a positive result? In order to apply the filter you need to compute a probability. Of what are you computing a probability and what is it? Prof_P.Olofsson
[131], Hey, Prof. Shouldn't that be "things" and "are"? Don't you have plural in Swedish??? Prof_P.Olofsson
jerry[129], Of course ID is a possible explanation. But, assuming a particular ID hypothesis, what is the probability that life would look the way it does? We can't say. We have no data, no information. And what is the a priori probability of ID? Again, we can't say. So all we are left with, and all that is hidden in Kirk's computations, is calculating the probability of chance occurrence of various features in nature. Probably that's all we can reasonable do, but then we need to be very careful with assumptions. As has been said many times on this blog, "chance" is not the same as "uniform distribution." As for logic, what do you think about Kirk's claims that "ID is 10^80000 times more probable?" Forget about probabilities, what is the logic here? I wouldn't call it "bogus" but I think it is obscure at best. And how do you relate this to Dembski's stance regarding elimination vs comparison? I've already said that I'm with Dr D on this one. Now for the real stuff:
the whole world was afraid of them in the 11th - 13th centuries.
Ahh, those were the days! The scariest thing we've come up with in modern times is IKEA furniture and Ace of Base... Prof_P.Olofsson
Jerry[129], Yeah, I also hate those math-related domestic homicides. Prof_P.Olofsson
Prof_P.Olofsson, Your statement should be "If I assume ID, life is possible; if I assume nature, life is extremely unlikely; hence life given ID is more than possible, it is probable." If you read all my comments to Mark Frank you will know that all we ask for is that ID is possible as an explanation of life. If that is granted, the real battle will be easier. Those who defend the extreme unlikelihood of naturalistic methods must by definition deny an intelligence prior to life, or else their whole world view falls apart. I was very good at math at one time and enjoyed it very much, and could get up to speed again on all the math necessary for these discussions in about 6 months if I went back to acting like a student. But I won't, and my wife would kill me if I tried. So I'll stay out of the details of the math, but the logic stays with me. And I can spot bogus logic a mile away. Jerry the Viking and also a Celt, so I probably have a lot of Scandinavian blood in me. I realize the Celts came originally from Switzerland via France, but the Vikings paid a visit to Ireland starting in the 800's and so did the Normans later. The Vikings that went to France ended up being called the Normans, and the whole world was afraid of them in the 11th-13th centuries. jerry
Re #116 Me: In this case I think most readers will recognise that the intelligence explanation is “ad hoc” Bornagain77 Well I’m a reader and I don’t think it is ad hoc: I suspect you are not like most readers :-) Mark Frank
Jerry the ABBA-fan[125], Hehe, there is that Scandinavian modesty! Prof_P.Olofsson
MarkFrank[114], You are absolutely correct. I was surprised by Kirk's explanation and I expected you to cook up something before the day dawned here in Texas! Very well said. As I said before, the only probability he can even attempt to compute is the one that is now labeled P(e|B), and that is precisely what he does. In his video he makes a statement about the probability of ID, so I expected him to back it up with the relevant Bayesian inference. Now, there is no Bayesian inference, only a comparison of likelihoods, one of which is based on the assumptions that P(ID)=1 and P(e|ID)=1, where "ID" apparently means the event that any kind of intelligent design exists, so of course it has probability 1. What we need is of course a specific ID hypothesis, and one that does not relate to humans. His calculation of P(e|ID) rests upon a uniformity assumption that I have not looked closer into. At this stage, I think he needs to come up with a much better explanation for his claims before I do so. Those of you who discuss the "explanatory filter," keep in mind that the filter is strictly eliminatory, whereas Kirk's analysis is based on comparison. It feels good to be on Dembski's blog and also on Dembski's side on this issue! :) Jerry the Viking[117], one thing that Mark is explaining here is that there is no logical foundation for conclusions like "If I assume ID, life is very probable; hence ID is very probable." As this post was initiated by a video with lots of math in it, this discussion must be about math, probability, and logic. Mark understands all of these very well. Prof_P.Olofsson
tribune7, Thank you but I bet some others here could say it better. jerry
DaveScot @ 121 (assuming this comment makes it): are you saying that functional complexity measurements require both Shannon-style uncertainty (for the improbability calculation) and Kolmogorov-style complexity (for eliminating natural law)? I never got that impression from what I've heard about ID, but it'd go a little way toward furthering understanding of it. If that is true, does the law of conservation of information act on one or both of these kinds of information (which, if I understand correctly, are not equivalent)? Venus Mousetrap
Jerry, another great post. You have been on a roll. tribune7
Mark --because it can be used to make a case for design for any very unlikely outcome. You can say the same thing about chance. This debate basically concerns the point at which design becomes a more reasonable explanation than chance. but what does it add to the science to hypothesise intelligence? The presumption of design is no science stopper. All the great scientists presumed design, even Einstein. Meth nat has its place, but the problem is that for many it has become the arbiter of all truth. Meth nat is very good for the mundane, such as building a bridge, but it cannot even come close to answering the big questions. The big problem in the West is that it is used to attempt to answer the big "why are we here" questions. Just look at bioethics and evolutionary psychology. What ID does is get science out of the business of setting morals, which would be of great service to science. tribune7
Seversky: Put very simply, the Explanatory Filter seems to proceed by a process of elimination. For any highly-improbable event, if you can rule out both law and chance as sufficient causes then what remains must be design.
In order to reach a design inference TWO criteria must be met: the elimination of chance and regularity, PLUS a specification. If those TWO are not met, then we say "We don't know (but design remains a possibility)." Joseph
Rob @94 A diamond does not rise to the level of specified complexity. While one might make a case that it has specification, as an abrasive for instance, it is not complex as the atoms are in a simple repetitive arrangement i.e. law and chance is a perfectly valid explanation for the structure of a diamond. DaveScot
For those who missed it, gpuccio made a similar argument to Kirk's in a recent thread. Although very long I'd suggest reading through the entire discussion since it largely encompasses the ID debate as it stands. Patrick
Seversky @90 You ask how can we use the explanatory filter to distinguish design from non-design if the entire universe and everything in it is designed. A tenet of the explanatory filter is that it may produce false negatives, i.e. a designer can make something appear to be the result of law and chance. The bar for a positive result is ostensibly set high enough so that false positives are not produced. Thus the result of a coin flip, even though it may be designed, comes up negative, while the fine tuning of the universe, which if it were different by one part in 10^60 the universe would not be gravitationally stable (if the universe were heavier or lighter by as much as one grain of sand it would not be stable), comes up with a positive result. DaveScot
Professor O -- and here I thought I stumped you :-) ToE invokes randomness (as in undirected mutations) as integral, hence probability calculations are necessary in determining its reasonableness. So the factors, it seems, are something with a function and the number of opportunities, which are obviously bounded by physical limitations, for this function to come about by chance. Obviously laws can decrease chance dramatically, but the only law invoked by ToE is natural selection. Few, if any, of us here deny that RM+NS is a real influence on life. What is strongly doubted is the claim that it can explain all biodiversity (or life itself). What is interesting is the search for the edge of RM+NS (which, of course, requires the recognition that it has an edge). And yes, there is a positive aspect to ID too (design has inherent characteristics, and DNA, proteins, etc. test positive for them). tribune7
Mark Frank, You do not seem to understand what this debate is all about. Nowhere does ID deny that there may be naturalistic processes that could explain the origin of new functional proteins, or the even more arduous task for nature of getting these proteins to act in concert to produce amazing results. What ID says is that the odds, probability, likelihood, etc. are so small that it reasonably didn't happen that way. There is no evidence of some unknown law that would accomplish this, as Schroedinger thought in "What is Life?" Chance cannot generate the functionality on its own. Intelligence then falls out as a likely explanation. All of ID is really a logical exercise, while its opposition clings to irrational arguments that appeal to some unknown process that has only happened in the distant past. But the real crux of the problem is not scientific but ideological. Currently the conventional wisdom is that the process proceeded naturally, and any attempt to say otherwise is squelched, and squelched harshly. So the student taking a biology course is given an explanation that has no scientific basis and is told that those who have alternative hypotheses spout gibberish and are religious fundamentalists. ID is rarely portrayed correctly by scientists or the popular press. As one recent episode will show clearly, read Behe's book, "The Edge of Evolution," and then go to amazon.com and his blog and read the reviews of the book by the best and brightest of the evolutionary biologists in the world and see the substance of their claims. Biology textbooks contain false claims about evolution portrayed as facts proven by science, when no such proof exists and there is good scientific reasoning to show that they may be false.
At the same time you have people openly admitting that these false claims have led them to a life change in terms of their beliefs about the world, and now they openly proselytize their world view to all who are led to believe that their claims are based on settled science. So here we have people like yourself desperately trying to shore up these bogus claims any way they can. When presented with the arguments of ID, all people do is wishfully hope that the conclusions they so desperately want will somehow be supported, or that the people who deny the claims are misguided rubes who do not know what they are talking about. When that fails, they hope desperately that the ID claims are off by some undetermined number of zeros in their probability estimates and that they didn't dot all the "i's" and cross all the "t's," so that they can arbitrarily be dismissed. This is not a discussion of learning but of desperately trying to find one last gotcha or small flaw so one can go away comfortably and say that the ID people really do not have anything that holds up under scrutiny. Then the new headline will be that ID's latest nonsense is shown to be just that, nonsense, and all you people who were considering it as an explanation, pay attention to calmer, wiser minds who know the real truth on this. It is a game, and right now the deck is stacked against any ID argument, but the game will continue to play out, mainly via the internet, till the actual truth is realized. And what is that truth? It is not that ID is absolutely proven fact, but that the alternative is highly suspect and that current evidence lines up more behind ID than behind a naturalistic explanation. If you deny the last sentence, then I suggest you put up or remember the famous religious statement "Forever hold your peace." jerry
Mark Frank states: "Therefore intelligence is several million times more likely to be the cause of the alignment of the planets than chance. In this case I think most readers will recognise that the intelligence explanation is 'ad hoc' and the rational thing to do is to ignore it and continue to look for other explanations – even if we don't know what they are at the moment." Well, I'm a reader and I don't think it is ad hoc. Yet in Newton's Principia, Newton concluded that humans know God only by examining the evidences of His creations: "This most beautiful system of the sun, planets, and comets could only proceed from the counsel and dominion of an intelligent and powerful Being. He is eternal and infinite, omnipotent and omniscient; that is, his duration reaches from eternity to eternity; his presence from infinity to infinity; he governs all things, and knows all things that are or can be done. We know him only by his most wise and excellent contrivances of things, and final causes; we admire him for his perfection; but we reverence and adore him on account of his dominion; for we adore him as his servants." Maybe this ad hoc assumption is yours alone? bornagain77
Here is a second, more off-the-wall, thought for Kirk. Assuming your biochemistry is correct, you have posed a problem for evolutionary biology, but what does it add to the science to hypothesise intelligence? I assume that you accept that life started as a relatively limited set of proteins and that all new proteins are created from existing DNA through replication, mutation, combination etc. There must be something about this process that makes it plausible to get from one island of folding protein to another in the time available (you have posed intelligent intervention as the mechanism). There is no question that life does succeed in bridging the gap between islands of useful proteins. We are also getting better and better at observing the replication process. So we will have increasing opportunity to observe successful bridging taking place. What will we see, what can we possibly see? Unless we get an explicit statement from the designer, we will either observe some mechanism for biasing the replication towards a useful protein or we will simply see that for some unexplained reason DNA leaps to new useful places. In the first case scientists will have found what they consider to be a natural explanation; in the second case they will still have an unresolved problem and will continue to work on it. But in the absence of a signed affidavit from the designer they will continue to seek a natural solution (maybe without ever succeeding) because there is no way to pursue the teleological solution. Newton assumed the laws of motion and gravity were God's will – but an atheist can work just as well with his laws. It makes little difference to the scientist whether intelligence is involved or not. Mark Frank
Kirk Thanks for your clear and interesting comment. I am not qualified to comment on the biochemistry but I would like to comment on the logic of your reasoning. I have already said all of this in various comments above and elsewhere – I have just brought them together and perhaps expressed them a bit more clearly. First, as PO has pointed out, your comment says something different from your presentation. In the comment you have estimated the relative likelihood of e given B compared to e given ID and estimated it to be a zillion times greater for ID than B. It does not follow that ID is a zillion times more probable than B (which is what you said and put on your slides, but hopefully not in your paper). This may seem like pedantry to some readers – but it is not. You can tell there is something odd about this argument (the one you used in your presentation) because it can be used to make a case for design for any very unlikely outcome. For example, on first sight it seems extremely odd that the orbits of all the planets are aligned in much the same plane. Assuming random inclinations of planets, the probability of such an alignment happening by chance is one in several million (depending on how you calculate it). This is nothing compared to the zillions you are working with, but quite big enough to make the point. Now we know that humans are capable of aligning the orbits of large numbers of spheres using intelligence. So, using your argument, the probability of such an alignment given intelligence is 1. Therefore intelligence is several million times more likely to be the cause of the alignment of the planets than chance. In this case I think most readers will recognise that the intelligence explanation is "ad hoc" and the rational thing to do is to ignore it and continue to look for other explanations – even if we don't know what they are at the moment.
(I know that other explanations spring to mind rather easily in the case of planets) I tried to turn your argument into Bayesian logic – but it just disappears into a mire of unknowns and metaphysical speculation: First, the ID hypothesis needs to be clarified. We know that a human is capable of creating RecA. We also know that a human did not create the RecA that is found in living things. So the “ID” hypothesis is actually: “something other than a human had the intelligence, powers and motivation to create proteins with the fits of RecA in living things”. Having clarified what ID is you then attempt to estimate P(e|ID). It isn’t 1. There is no reason to suppose that this thing was bound to succeed. It all depends how it is hypothesised to have worked. Did it directly intervene and line up the DNA? Or did it just create a fitness function which somewhat biased the odds? Or did it perhaps generate 10^250 trials somewhere else and insert the ones that matched via a virus? So P(e|ID) is only 1 if you assume the designer was competent enough to create RecA for certain. It is actually a complete unknown. To complete the Bayesian logic you then have to estimate the prior probability of such a thing existing. As the thing is defined as “that intelligence which is capable of producing the result” and in no other terms – this is not so much low as meaningless. Mark Frank
tyharris[111,112], Thanks! Those are very nice words to hear in a debate that often gets heated and reduces to personal attacks and insults (I mean the general debate, the exchanges I've been involved in have been mostly civil). I've gotten to know some decent and respectful people in the ID camp and I have no problems with disagreeing on issues. I also thank Kirk and wish him the best of luck with his graduate studies and his research. I have problems with the types of probability claims he makes but, as I said before, I do not dismiss his actual research based on those claims. There are certainly mathematicians on "our side" who work on applying math to evolutionary biology (such as Rick Durrett at Cornell) but I agree that much more can be done. Prof_P.Olofsson
Also thanks to Mr. Durston for contributing to a very relevant field of study, i.e. the application of math to specifying and quantifying biological information. I don't claim to come close to understanding you OR Dembski completely as regards your calculations, but I do think that even a layman can see that the more we learn about how ridiculously, unbelievably complex life is, the lower the probability becomes that it all came about independent of design. At any rate, attempting to quantify the complexity of biological information, and taking a stab at the probability of it all just writing itself, is important work, and I commend you and all who undertake it. Better you than me. Personally, I would rather take a bullet than go through high-school trigonometry again; that's how I feel about math. tyharris
Prof. Olofsson- To be perfectly honest, the math is above my head. But having spent a couple of years lurking and listening to both sides of the debate, I still come down on the side of ID. I don't know if the odds are actually twenty gazillion to one against, or eighty-five buzillion-ding-dong-dillion to one against, human biological information complexity coming about without any intelligent design involved, but I am pretty confident that A) it can fairly be characterized as a pretty darned unlikely thing, and B) a purely naturalistic process is a lot less likely than design. I'll let you and Dembski decide how many zeroes to add to the X-to-one-against figure, and maybe someday that number will tell us whether or not this theory jibes with a finite universe. Can we not say this, though: that the burden of proof should properly lie on those making the affirmative claim of a purely naturalistic explanation for life, or on anybody advocating for ANY theory so unlikely, to show how it actually happened? We can guess and infer until the cows come home, but if life happened via purely naturalistic processes, then why has nobody been able to specify, observe, or demonstrate that event/process? I hear a lot of theoretical talk, but the actual specific method or set of exact steps by which spontaneous abiogenesis supposedly occurred seems to be totally unsubstantiated, even in theory, never mind being actually observed to occur in nature. Am I wrong about that? And if I am not wrong, then I suppose ID is just as reasonable a school of thought as Darwinism, correct? At any rate, I just wanted to say how very much I appreciate your serious and thoughtful contributions to Uncommon Descent.
It's really refreshing to hear somebody coming from a Darwin-based standpoint debating using their wits and fact-based arguments, as opposed to just condescending to all ID advocates as a bunch of creationist nuts and hurling insults, as many of your tenured colleagues do elsewhere on the web. So I just wanted to say thanks for that. The manner in which you conduct yourself does you and your viewpoint much credit, and listening to people like you debate from a standpoint of intellectual honesty instead of hate is really, really nice. I feel like I can try to learn from you because you don't seem to be ideology- or agenda-driven. It's that open-minded willingness to see what one sees, as opposed to what one wants to see, that landed a lot of us here at UD looking for truth as opposed to dogma to begin with. tyharris
jerry @ 93
You are assuming that law and chance are not part of the design. A designer does not have to design every detail but could very well allow chance to operate within a framework of initial and boundary conditions. And theoretically change some of these conditions over time.
As I said, my understanding of the Explanatory Filter was that it was claimed to be able to identify design regardless of the nature of the designer. That would mean that if, as you say, a designer chose to incorporate law and chance into a design, the EF would still be able to detect the design element. Law and chance were not assumed to be excluded.
So I guess what the EF is doing is separating out those phenomena that are allowed to proceed by chance and law. Now I am not an expert or even well read on the EF, but your objection seems to be irrelevant as far as I know.
Put very simply, the Explanatory Filter seems to proceed by a process of elimination. For any highly-improbable event, if you can rule out both law and chance as sufficient causes then what remains must be design. Obviously, the trick is going to be to exclude law and chance with any degree of certainty.
By the way no one is stopping you from investigating the nature of the designer. People have been doing that for several thousand years and you are welcome to join them.
I was not suggesting in any way that people should stop investigating the nature of the designer. I was just pointing out that ID proponents say the nature of a designer is not a necessary consideration for the detection of design. By all means continue to look for evidence of a designer. Seversky
tribune[74], I didn't forget you...thanks for your nice words! I enjoy your presence too, you're always a good sport. The answer to your question: quite the contrary, we almost always have to base decisions on probabilities. Prof_P.Olofsson
Kirk, I thought your comment was fantastic up to the math, and then it lost me. I will have to read the math part again and it may sink in. But your description of islands of functional proteins was one of the best I have seen anywhere on this topic. Thank you. You have helped us a lot here and we will be able to use this information in the future in trying to describe the issues. jerry
R0b said: "I’m open to correction. What studies have been done that involve specified complexity? Specified complexity is purportedly a rigorous metric, so the claim that humans create it and law+chance doesn’t should be empirically testable. Where are those tests published?" Apparently you need a basic biology course. Functional specification of this complex data (DNA) was established in the early 1960's through what is known as the transcription process and then the translation process. Individual DNA string specify protein polymers through these two processes and these proteins have function. I believe Francis Crick of double helix fame first came up with the three DNA codon code and was proven right and then one by one the relationships between DNA and a specific amino acid was discovered along with the stop codons. Since that time there has been an immense amount of research on the functionality of these proteins found in life. And as Kirk Durston said above these proteins are very rare. But I should stop here and recommend you take any high school biology book and start there. It will explain it for you better than I can. As far as law and chance creating it, there is as of now not one example of it in the natural world outside of life and human activity. A diamond doesn't specify anything. You are maybe confusing complexity with functional complex specified information. They are not necessarily the same thing. As far as humans doing it, your comments here are an example of it so that is not an issue. DNA does it, humans do it, but nothing else does it and up in Boston unfortunately beans don't do it. jerry
KD...if you have the time, I'd be curious to know your thoughts on frontloading, as opposed to intermittent/continual design. Given the hurdles for NS to create "ordinary" proteins, how could a frontloaded algorithm encapsulate all that complexity? WeaselSpotting
R0b:
No studies have ever been done involving specified complexity. There’s no consensus that specified complexity is even a coherent concept, or that law+chance doesn’t include human behavior.
jerry:
This statement is nonsense.
I'm open to correction. What studies have been done that involve specified complexity? Specified complexity is purportedly a rigorous metric, so the claim that humans create it and law+chance doesn't should be empirically testable. Where are those tests published? And when I speak of consensus, I'm not talking about just among ID proponents. Can you provide any evidence of a larger consensus on your claims? R0b
KD, thanks for the pointer to your paper. I certainly need to read it, although it appears to consist mostly of biological applications, which are way over my head. Your paper includes a crucial ingredient that your presentation does not, namely probabilities. Of course, those very probabilities are at issue in the ID debate. Best of luck in that arena. With regards to the diamond example, there seems to be an awful lot of ways that 3 grams of carbon dust could be configured without violating the laws of physics. Is your measure intended to be applied only to genetic sequences? Also, I assume that the function of Venter's watermarks is identification. When non-watermarked DNA is used for identification (which, of course, happens all the time), is it likewise functional? R0b
I ask about Genetic Entropy because you stated in: “Measuring the functional sequence complexity of proteins” "although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY. Thus from what I can gather from your paper this looks like it may be sufficient to establish the principle of Genetic Entropy: bornagain77
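For readers checking the arithmetic in the quoted passage, the Fit-density comparison is a one-line calculation per protein; here is a minimal Python sketch using only the SecY and RecA numbers given in the quote:

```python
# Fit density = Fits per amino-acid residue, using the values quoted
# above from "Measuring the functional sequence complexity of proteins":
# SecY (688 Fits, 342 residues) and RecA (832 Fits, 240 residues).
proteins = {"SecY": (688, 342), "RecA": (832, 240)}

density = {name: fits / residues for name, (fits, residues) in proteins.items()}
for name, d in density.items():
    print(f"{name}: {d:.2f} Fits/residue")

# SecY comes out near 2.01 and RecA near 3.47: the smaller protein
# carries the higher Fit density, which is the point the quote makes.
```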
R0b said. "It’s interesting how many ID proponents take these claims as a given, but the fact is that they aren’t established at all. No studies have ever been done involving specified complexity. There’s no consensus that specified complexity is even a coherent concept, or that law+chance doesn’t include human behavior." This statement is nonsense. They are most definitely established and you are new here and are making assertions without reading all that has been said. DNA is functional complex specified information. This has come up at least a half dozen times in the last 2 weeks. So you are not reading everything. DNA is complex, information that specifies something else, RNA and proteins, that are functional. If you deny that then I suggest a beginning biology course. Diamonds don't specify anything. They might be functional in some contexts and so may be a rock if you are defending yourself or build a wall. No where else on the planet does such functional complex specified information happen except with humans and there it happens all the time. Some might stretch a point and say some animal constructions might be so but I don't think so. Now you can make up your own definitions and play with them but what I just described briefly is what we are dealing with here. Now I haven't read Kirk Durston's long reply and do not know what he says about this but the simple explanation up above can do till I read what he says. jerry
KD, Thanks for the talk, as it has certainly generated a lot of interest here as well as on YouTube and GodTube (forgive my editing, as I had to edit for the 10 min. limit on YouTube). One question I had for you is: do you think this approach is sufficient, or will be sufficient, to establish the principle of Genetic Entropy at the molecular level of biology? A principle which many lines of empirical evidence are already overwhelmingly pointing to on a semi-macro level (J.C. Sanford, M. Behe, etc.). bornagain77
Kirk: thank you for trying to clarify this lecture. I have to say, however, that I'm still puzzled - in your lecture, you're using the functional information equation I = -log2[M/N] which is fine for examples like your safe, but didn't make sense to me when applied to evolution, because you defined no function. I understand you're modelling evolution as a random walk to find one needle in a 10^42 sized haystack, but in my opinion arguments from improbability like this always miss out the details of the evolutionary process, even if they have matured from the old and laughably simplistic creationist 'whole cell forming by chance' notions. But every time I try to argue this it eventually turns into 'but what does sequence space really look like' and doesn't go anywhere, so I'll let other people fuss over that. Venus Mousetrap
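Since the functional-information equation keeps coming up in this thread, here is a minimal Python sketch of I = -log2(M/N), where M is the number of functional configurations and N the size of the search space; the safe and haystack numbers are illustrative stand-ins, not figures taken from the lecture:

```python
from math import log2

def functional_info(m, n):
    """Functional information in bits, I = -log2(M/N),
    for M functional configurations out of N possible ones."""
    return -log2(m / n)

# A safe with a single opening combination out of 10^10 possibilities
# (one digit string among ten billion): about 33.2 bits.
print(functional_info(1, 10**10))

# One needle in a 10^42-sized haystack, as in the comment above:
# about 139.5 bits.
print(functional_info(1, 10**42))
```

The measure depends only on the ratio M/N, which is why arguments about it so quickly become arguments about what sequence space really looks like.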
KD[95], Thanks for joining the discussion and explaining your thoughts in more detail. I hope to contribute more later, but I just wanted to point out my initial objection to your talk. You claim that "It was about 10^80000 times more probable that ID was required..." which is a probability statement about ID. Yet, in your explanation, there are no such probability statements; ID only shows up to the right of the conditioning bar. Do you agree with my original criticism that your statement, as presented in the video, is inaccurately formulated? Prof_P.Olofsson
Peter[81], Thanks for the assessment of my mental faculties. On a factual note, the type of probability you mention is the conditional probability P(data|innocence). Without reasonable estimates of the other relevant probabilities in Bayes' rule, the value of this probability alone is not enough to convict. Just google and read about the case of Sally Clark and you will see what I mean. Prof_P.Olofsson
Rob (94), I just noticed your discussion of diamonds. In my paper, I discuss the null state, the ground state and the functional state. The functional complexity/functional information is measured as the difference in function entropy from the ground state to the functional state, not the null state as you assume in your diamond example. Essentially, the ground state is a state determined by the laws of physics. The null state is completely random and is a special case of the ground state where physics imposes no constraints on the initial conditions. In the case of the diamond crystal lattice, the null state is not an option, as physics imposes a priori constraints on the crystal lattice structure. In other words, the ground state represents the possibilities permitted by nature before any additional functional constraints are imposed. KD
R0b (94),
If, by “functional complexity”, you mean Durston’s functional information (which is crucially different from Dembski’s specified complexity), then nature certainly can produce it [in diamonds].
But isn't that functional information already encoded into the "fitness function" of nature? Crystals form due to the properties of atoms--not due to a blind search. Timothy
As I stated in my brief prior post, this particular talk was given about a year ago at the University of Edinburgh, Scotland, to a general student audience, so it was fairly non-technical, and time constraints made it difficult enough to even give an overview of my thinking on this subject, let alone expand on technical points. At present, I am thinking through a new ID presentation that will be less broad, but go deeper on what I feel are key technical points. With regard to the presentation in the video, I've briefly outlined below some of the probability factors. I only skim the surface here, but hopefully it will clarify some concerns that have been raised here (I have not been able to take the time to read more than half the comments posted above). Before the reader can adequately understand my approach, I must briefly discuss some key concepts. I welcome any constructive criticism from members here, especially those who may not be convinced of the need for any role for intelligence in the origin of the protein families. Regarding stable proteins as 'targets' in sequence space: A key piece of background information has to do with treating folding, functional proteins as 'targets'. I get the impression from evolutionary biologists that stable folding proteins are the products of biological life: that virtually any combination of amino acids confers some level of fitness upon the organism, so that natural selection can direct the search toward better proteins, and that in this manner evolution can 'climb Mount Improbable'. This is not so. It is physics that determines which combinations of amino acids can fold into stable folds. Function is an additional requirement and is a joint relationship between the system, in this case a life form, and the stable proteins permitted by physics. In other words, it is biology that must search amino acid sequence space to find these stable folds that are determined by physics.
Thus, the stable, folding proteins represent very real, objective targets that are 'out there' in amino acid sequence space. Biological life does not make them up; it must find them, and physics holds the combinations. The role of natural selection in searching sequence space: All papers I've read on this subject that actually deal with experimental results indicate that most amino acid combinations do not yield a stable, folded, functional protein. This is confirmed by my own research as well. Both published research and my own seem to indicate that virtually all of sequence space codes for non-stable proteins that are of no use to life. There is an infinitesimal subset of amino acid sequences that physics determines to be stable folded proteins. For simplicity, you can regard this subset as being made up of fold-set islands in an ocean of non-folding sequence space. I say 'fold-set' because some sequences may be able to provide more than one fold. To find a novel protein family by mutating an existing gene, the evolutionary track must cross non-folding sequence space. Because non-folding proteins are not useful to biological life, indeed they can be lethal, they have no phenotypic effect and, thus, natural selection cannot help navigate the evolving gene through non-folding sequence space. (I am aware that about 30 percent of proteins are intrinsically disordered, but they tend to achieve some structure after binding, completing the folding process in most cases.) The only place natural selection can work is within a fold-set island, where the protein already exists and can be fine-tuned through selection. This is not to be confused with locating a novel protein family. Bottom Line: Physics determines which amino acid combinations produce stable folding proteins. These stable folding sequences appear to be extremely rare in sequence space and can be regarded as targets.
The regions between the fold-set islands are non-folding, produce no phenotypic effect (except for harmful effects if the non-folding proteins begin to clump), and natural selection is of no use whatsoever in guiding the evolutionary trajectory as it random-walks across non-folding sequence space. If anyone thinks natural selection will be useful in finding a novel protein family, that is a huge assumption which flies in the face of the consensus of experimental results. The onus would be on such a person to provide experimental support. There is none at present, and plenty that says quite the opposite. Thus, the search for novel protein families is very much a random search. I cannot emphasize this enough: natural selection is of no help in discovering a novel protein family. Those who assume Darwinian evolution did it are in a position where their assumption is not only without any experimental support, but is falsified by experimental results.

With the above in mind, the below was my thinking as I put together the presentation shown in the video. Let e be a variable that represents a value of functional information. I noticed that some were talking about P(data), but I am not interested in any particular data set; I am only interested in e, the level of functional information for any data set or effect. Given 10^42 possible trials, let B represent a target that occupies 10^-42 of sequence space. We will make the extremely generous assumption that B can be found with a probability that approaches 1 over 10^42 trials (i.e., P(B | 10^42 trials) ≈ 1, so P(B) = 10^-42). In reality, a random walk of only 10^42 moves would be much less efficient, as there could be numerous instances of the evolutionary pathway producing the same sequence more than once in its random walk. Let 10^42 trials be a constant in any search for a novel protein family.
In other words, I am making the very generous assumption that the full 10^42 trials were available and actually carried out; that is, P(B|e) = 1. To clarify: for any e required by a protein family, a full search of 10^42 trials is necessarily carried out (again, a generous assumption that makes an evolutionary success more likely). Also, since we know that intelligence already exists, and is capable in the case of humans of easily producing the levels of e that we observe in the protein families (e.g., one page of an essay typically contains a level of functional information that exceeds the level required to code for the average protein family), the existence of intelligent design is an a posteriori empirical fact. Therefore, P(ID) = 1 and P(e|ID) = 1 for values of e typically found in biopolymers. If there is a limit to the level of e that known intelligence can achieve, it is certainly larger than what is contained in a typical university library. Reminder: do not confuse data with e. It is e that is key in identifying effects that require intelligence, not data sets.

P(e) = target size / size of sequence space. This is an a posteriori probability computed from a set of sequences for a protein family, where the number of sequences is preferably greater than 1,000 to give an adequate sampling. (This requires its own discussion to show how it is done; for more info, see my paper.)

Recall Bayes' theorem, where P(e|B) = P(B|e)*P(e)/P(B). Therefore, since P(e|ID) = 1,

P(e|ID)/P(e|B) = P(B)/(P(B|e)*P(e))

Example: Given that RecA has a value of about 832 Fits, P(e) ≈ 10^-250. Therefore,

P(e|ID)/P(e|B) ≈ 10^-42/10^-250 ≈ 10^208

In other words, if we had to choose the most likely option for an effect such as RecA, which requires 832 Fits of information to produce, intelligence would be 10^208 times more likely than biological life, with its 10^42 trials, to achieve that level of functional information.
This gives you the rationale behind the probability numbers I used in my presentation a year ago. However, I've been too generous with what 10^42 trials can achieve, so in my next presentation I will be more conservative, possibly using Dembski's approach in his forthcoming paper, although I haven't read it yet but intend to do so in the next few days. I've not expanded upon Fits and their relation to Hazen et al.'s method of measuring functional information. However, if you look at my paper and equate my measure of functional complexity to his equation, you will see that it is straightforward to arrive at an estimation of his M(Ex). KD
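Durston's RecA example above is straightforward to check numerically. The sketch below is only an illustration of the arithmetic in his comment, not code from his paper; it works in log10 space, since quantities like 10^-250 underflow ordinary floating point, and takes his stated values P(B) = 10^-42, P(B|e) = 1, and 832 Fits for RecA, with P(e) = 2^-Fits (one Fit being one bit of functional information).

```python
import math

# Durston's stated quantities, kept as base-10 exponents to avoid underflow
log10_P_B = -42.0            # the target occupies 10^-42 of sequence space
log10_P_B_given_e = 0.0      # his generous assumption P(B|e) = 1
fits_RecA = 832              # functional information of RecA, in Fits (bits)

# P(e) = 2^-Fits, converted to a base-10 exponent
log10_P_e = -fits_RecA * math.log10(2)
print(round(log10_P_e, 1))   # -250.5, i.e. P(e) is roughly 10^-250

# Bayes: with P(e|ID) = 1, P(e|ID)/P(e|B) = P(B) / (P(B|e) * P(e))
log10_ratio = log10_P_B - (log10_P_B_given_e + log10_P_e)
print(round(log10_ratio))    # 208, i.e. a ratio of roughly 10^208
```

The result matches the ≈10^208 figure quoted in the comment.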
jerry @ 83:
Specified complexity arises all the time in human activity ... Now take law and chance - there is not one single instance where these forces have ever produced functional complexity.
It's interesting how many ID proponents take these claims as a given, but the fact is that they aren't established at all. No studies have ever been done involving specified complexity. There's no consensus that specified complexity is even a coherent concept, or that law+chance doesn't include human behavior. If, by "functional complexity", you mean Durston's functional information (which is crucially different from Dembski's specified complexity), then nature certainly can produce it. Let's take diamonds, which are naturally occurring and quite functional due to their hardness. A diamond's hardness is due to its diamond lattice structure. We don't know of any other configuration of carbon atoms that would result in this level of hardness. Given a 14K diamond, which has on the order of 10^23 atoms, how many configurations are conceivable? (Not just regular lattice structures, but any way that those 10^23 atoms could be configured.) The fraction of those configurations that represent the diamond lattice structure is infinitesimal, so we're talking an enormous amount of functional information in that structure. R0b
Seversky, You are assuming that law and chance are not part of the design. A designer does not have to design every detail but could very well allow chance to operate within a framework of initial and boundary conditions, and, theoretically, change some of these conditions over time. So I guess what the EF is doing is separating out those phenomena that are allowed to proceed by chance and law. Now I am not an expert or even well read on the EF, but your objection seems to be irrelevant as far as I know. By the way, no one is stopping you from investigating the nature of the designer. People have been doing that for several thousand years and you are welcome to join them. jerry
Peter @ 81
You are confused. The use of DNA in court cases is always based on the probability of a match between a sample and the defendant. Lawyers will always tell you that the probability of a match is say 1 in 50 million (P1). In a case where there are only two possible outcomes, the second outcome must have a probability 1 - P1. On this basis people go to jail. Also, DNA match is not circumstantial. It is the most respected form of evidence on which many wrongfully convicted people were released.
I think I am confused, too. My understanding was that, when a probability of something like 1 in 50 million is quoted in the context of DNA evidence, it means that there is only a 1 in 50 million chance that the sample could belong to someone other than the defendant. That is a very low probability but not an impossibility. It would not necessarily, on its own, make the guilt of the defendant certain, since highly improbable events happen all the time, but taken in combination with other evidence it could take the question of guilt out of the realm of reasonable doubt. Equally, where convictions have been set aside following the submission of new DNA evidence it is not always that innocence has been established, it is sometimes just that the new evidence has made the original verdict unsafe and so the benefit of the doubt is granted. Seversky
Seversky[90]. But keep in mind that the explanatory filter is very explicitly non-Bayesian. Also note R0b's insightful comment #77. Prof_P.Olofsson
Peter @ 86
How bout: The designer existed before matter, time, and space and was intelligent enough to design a universe with numerous constants which are extremely fine tuned for life. Such a designer in my opinion would find creating life on earth as an almost trivial task compared to creating the universe.
Possibly, but how could you ever detect the imprint of such a designer? Any method of reliably detecting design, regardless of the nature of the designer, such as is claimed for the Explanatory Filter, must be premised on the possibility of being able to distinguish what is designed from what is not designed. But if the entire Universe is the product of some Original Intelligent Designer then the Explanatory Filter - if it works as claimed - should throw up nothing but positive results since there is nothing not-designed for it to filter out as a negative result. The problem is, how could we ever know whether or not the filter is working reliably given that, again, there is nothing not-designed on which to test it? The other consequence of assuming an Original Intelligent Designer is that, if the nature of the Designer is ruled out of consideration and if generic design itself cannot be reliably detected from within a fully-designed Universe then the Intelligent Design project becomes pointless and, hence, uninteresting from a scientific perspective. Seversky
P(life|intelligence) = 0 and P(life|law and chance) = 0. Interesting predicament we have here. The first assessment is based on ideology, which arbitrarily asserts no intelligence existed before life, and the second assessment is based on data, which confirms that law and chance do not have the power to produce the complexity of life. So where should we go with this? Is it time to fetch the Tooth Fairy to settle this? And for those of you who doubt the Tooth Fairy, my niece just got a $5 gift certificate from Dunkin Donuts for her first tooth. In my day I only got a quarter. jerry
# 86 "How bout: The designer existed before matter, time, and space and was intelligent enough to design a universe with numerous constants which are extremely fine tuned for life. Such a designer in my opinion would find creating life on earth as an almost trivial task compared to creating the universe." Well it is a start. At least we can distinguish this from rival hypotheses based on aliens or less talented (but still very impressive) deities. Care to give a basis for estimating the prior probability of this particular designer (a) existing (b) creating life? Bear in mind when estimating (b) that you should not dismiss a priori all rival hypotheses based on other forms of intelligence. Mark Frank
Prof. a-priori states: bornagain[75], I don't think you're getting my point which is that there is no empirical meat with which we can cook up priors for "design" or "chance." To which I refer: Scientific Evidence For God Creating The Universe - Establishing the Theistic postulation and scientific validity of John 1:1, "In the beginning was the Word, and the Word was with God, and the Word was God.", by showing "transcendent information's" complete specific dominion over a photon of energy as well as its integral relationship with the definition of a photon qubit. http://www.godtube.com/view_video.php?viewkey=f61c0e8fb707e76b0e20 Excerpt of description: From these findings, we can now draw this firm conclusion: An infinite amount of transcendent information is necessary for the photon qubit to have a specific reality, thus infinite transcendent information must exist for the photon qubit to be real. Since photons were created at the Big Bang, this infinite transcendent information must, of logical necessity, precede the light and "command" the light to "become real", thus demonstrating intent and purpose for the infinite transcendent information. Thus a single photon qubit, coupled with the Big Bang, provides compelling evidence for the existence of the infinite and perfect (omniscient) mind of God Almighty. (God is postulated to be infinite and perfect in knowledge in Theism.) Quantum teleportation, coupled with the First Law of Thermodynamics (Conservation of Energy; i.e. energy cannot be created or destroyed, energy can only be transformed from one state to another), provides another compelling and corroborating line of evidence for the existence of infinite transcendent information, by demonstrating the complete transcendence of information to any underlying material basis, or even any underlying natural law, as well as demonstrating the complete, specific, and direct dominion of infinite transcendent information over a single photon qubit of energy.
(Since energy cannot be created or destroyed by any known material means, any transcendent entity which demonstrates direct dominion over energy, of logical necessity, cannot be created or destroyed either. This is the establishment of the Law of Conservation of Information, i.e. information cannot be created or destroyed; i.e. all information that can possibly exist for all physical events in this universe already does exist.) The main objection would be that you can have infinite information for the photon qubit yet still not complete and total infinite information (the infinite-odd-number vs. infinite-even-number hotel rooms enigma). (I think this objection, though reasonable to the overall principle that needs to be established for Theism, is superfluous to the main point of this proof in establishing infinite transcendent information's primacy over energy/material in the first place and thus validating the Theistic postulation of John 1:1.) This should not be surprising to most people. Most people would agree that transcendent truths are discovered by man and are never "invented" by man. The surprise comes when we are forced by this evidence to realize that all transcendent information or "truths" about all past, present, and future physical events already exist. This is verification of the omniscient quality of God when we also realize that a choice must be made for a temporal reality to arise from a timeless reality, i.e. why should we expect a timeless reality to do anything at all otherwise? bornagain77
Mark Frank [85] "Until ID is prepared to say just something about the nature of the designer then there is no basis for a prior probability." How bout: The designer existed before matter, time, and space and was intelligent enough to design a universe with numerous constants which are extremely fine tuned for life. Such a designer in my opinion would find creating life on earth as an almost trivial task compared to creating the universe. Peter
Re Bayesian #64 I enjoy your comments. The foundations of statistics and probability are a passionate interest of mine (sad, I suppose). I will try to curb my enthusiasm and be concise, but it will be hard.

First: I am a great fan of Bayesian approaches, but my concern with the short video was not particularly Bayesian. The logic "Observed outcome B is highly improbable given hypothesis A. Therefore A is highly improbable." is fallacious using almost any approach to hypothesis testing. If you use a Bayesian approach then the equation is of course:

P(A|B) = P(B|A) * P(A)/P(B)

In our terms: A = chance, B = (something like) there exists a large number of proteins which support life after a billion years of earth's existence, so:

P(chance|functional protein) = P(functional protein|chance)*P(chance)/P(functional protein)

P(B) is the prior probability of B. It is always tricky with priors to agree how much is known. In one sense P(B) is 1, but that is clearly not what we mean. I guess it is something like the chance of B given the initial conditions on earth. Let us assume that P(B|A) is incredibly small. The trouble is we have no idea if P(B) is even smaller! You force P(B) to be relatively high by assuming P(ID) is relatively high and that P(B|ID) is 1. To me P(B|ID) is not so much low as meaningless. Let me explain. On P(ID) you wrote:

"So taking a very low P(ID) is very reasonable, perhaps P(ID)=10^-9 or even P(ID)=10^-150, which is Dembski's universal probability bound. And, as far as I'm concerned you may assume P(ID) as small as you like, provided you can give a reasonable explanation for your choice. For instance 'Because I want P(data|chance) to be larger than 10^-150' does not seem reasonable to me. And taking P(ID)=0 is entirely unreasonable, because then we are excluding ID a priori."

You also wrote:

"Durston said that intelligence, e.g. human intelligence, is capable of producing proteins. In effect this means P(data|ID)=1. In other words: If someone or something with sufficient intelligence sets out to create the stuff (proteins/primitive organism), it will succeed. I find this an entirely reasonable assumption."

It is true that a human has produced a protein. It is also true that, given a human with sufficient intelligence, equipment and motivation, they are very likely (but not certain!) to produce a specific protein given enough time. But I hope you agree the prior probability of humans producing the current set of functional proteins found in life is zero. So your hypothesis is: there is (or was) another form of intelligence with sufficient intelligence, equipment and motivation to certainly (P=1) produce a functional protein. Of course this is only one of infinitely many hypotheses that involve intelligence. There are infinitely many hypotheses that posit intelligences that just increase the chances of a functional protein without certainly producing one. So it cannot really be called the ID hypothesis; it is just one that falls under that broad umbrella. But it is also a very odd one. Odd because of the word "sufficient". It is similar to saying: my hypothesis is that magic exists that can always do the job, and then demanding that the sceptic give a prior probability for magic and justify it. If you define your hypothesis in terms of "that which gives rise to the data" you really haven't defined a hypothesis at all. Until ID is prepared to say at least something about the nature of the designer then there is no basis for a prior probability. Mark Frank
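Mark Frank's central point — that a tiny likelihood under "chance" says nothing about the posterior until P(B) is pinned down — can be illustrated with a toy computation. All numbers below are hypothetical, chosen only to exhibit the structure of Bayes' theorem:

```python
# Toy numbers only: B can be improbable under *every* hypothesis, so a tiny
# P(B|A) does not by itself make the posterior P(A|B) small.
P_A = 0.5                  # hypothetical prior for hypothesis A ("chance")
P_B_given_A = 1e-40        # B is wildly improbable under A...
P_B_given_notA = 1e-41     # ...but even more improbable under the alternative

# Law of total probability, then Bayes' theorem
P_B = P_B_given_A * P_A + P_B_given_notA * (1 - P_A)
P_A_given_B = P_B_given_A * P_A / P_B
print(round(P_A_given_B, 3))   # 0.909: A is favored despite the tiny P(B|A)
```

The conclusion flips if P(B|notA) is made larger than P(B|A), which is exactly why the comparison hypothesis, and hence P(B), has to be specified.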
----Laminar: "That isn’t always true, you can be certain that an explanation for a particular event is wrong without having to postulate a correct explanation." Think of it this way. Darwinism can either be measured or it can't. A mathematician should be able to know that and comment one way or the other. With regard to intelligent design, one cannot recognize an allegedly flawed approach to applying mathematical models without having some idea about that which would be appropriate. StephenB
Since I abandoned my probability and mathematical endeavors in the distant past, I do not have the time to go back and relearn them. But if we are going to assign prior probabilities based on something concrete, we know two things: specified complexity arises all the time in human activity, and there are potential instances of humans creating specified complexity in living organisms by manipulating DNA. So based on that, the prior probability of intelligence creating specified complexity has to be higher than zero. The one problem is that we cannot identify any intelligence prior to the origin of life, which is the crux of the argument. If there were one solitary concrete fact that showed there was an intelligence prior to humans, then the game would be over. Now take law and chance: there is not one single instance where these forces have ever produced functional complexity. You cannot say life, because that is begging the question; life is the issue under scrutiny. So it would be reasonable to assign a probability near zero for this case, since maybe it could happen but it has never been witnessed. What law and chance have over intelligence is that we know they existed prior to life. That is all they have going for them. Not logic, not science, not empirical data; nothing but the hope of some people that it might have happened. And not just one instance of functional complexity but millions of them, and a large subset of them are complementary to each other so they produce an even greater effect. So those who choose law and necessity have the burden of believing all this without the support of even one simple example. So if I were going to assign prior probabilities, it would be P(ID) very close to 1 and P(chance) very close to zero. It is the only logical assignment based on what we know today. Tomorrow may be different, but today we can only use what we know. jerry
KD Your findings seem to be very significant. Are they published? I am doing a presentation for a class I am taking on evolution and theology at TST. I would like to be able to quote your findings. Peter
Prof_P.Olofsson [27] "If there is a 1-in-a-million chance that something happens by chance versus some other explanation, you cannot say that “chance is a million times less likely.” You’d put a lot of innocent people in jail with such a logic! " You are confused. The use of DNA in court cases is always based on the probability of a match between a sample and the defendant. Lawyers will always tell you that the probability of a match is say 1 in 50 million (P1). In a case where there are only two possible outcomes, the second outcome must have a probability 1 - P1. On this basis people go to jail. Also, DNA match is not circumstantial. It is the most respected form of evidence on which many wrongfully convicted people were released. Peter
bornagain[75], I don't think you're getting my point which is that there is no empirical meat with which we can cook up priors for "design" or "chance." Prof_P.Olofsson
StephenB:77 "If you are so sure about what is wrong, it seems reasonable that you should be able to affirm what is right." That isn't always true, you can be certain that an explanation for a particular event is wrong without having to postulate a correct explanation. Take for example something like gravity and motion, you could come up with a mathematical description for the way gravity affects a mass which produces results that are inconsistent with observation, someone else could therefore be certain that your hypothesis is wrong without ever having to propose a corrected alternative. Laminar
Professor Olofsson, as a mathematician, you must have some notion about the soundness of the Darwinian model, and, in a reciprocal sense, about providing a sound mathematical model for intelligent design. So, first, what is your estimate of the probability that incrementalism can do what Darwinists say it can do? There would seem to be only two possible alternatives: [A] Darwinism is not really a scientific theory because its paradigm is too vaguely defined to be measured, or [B] it really is a valid scientific theory, meaning that there is a way to measure the probability that incrementalism can do what Darwinists say it can do. Are you willing to either acknowledge [A] or provide a mathematical answer for [B]? Second, since you think Dembski's formulations are not supported by sound mathematics, you must have a better idea. What is it? It is one thing to kibbitz from the sidelines; it is something else to actually come up with something. If you are so sure about what is wrong, it seems reasonable that you should be able to affirm what is right. StephenB
CJYman @ 67:
it first measures P(T|H) which implies measuring chance hypothesis against ID hypothesis [Bayesian]
Actually, the ID hypothesis plays no role in Dembski's method. Dembski in TDI: Because the design inference is eliminative, there is no "design hypothesis" against which the relevant chance hypotheses compete, and which must be compared within a Bayesian confirmation scheme. R0b
Bayesian@63, I'm having a hard time seeing how the hypothesis that "someone or something with sufficient intelligence sets out to create the stuff" constitutes "all possible hypotheses that involve intelligence". How about the hypothesis that "someone with possibly insufficient intelligence sets out to create the stuff", or "someone intelligent set out to create something better, with the possibility of coming up short", or "someone intelligent chose what to create and created it"? In each case, the conditional probability of the outcome would be less than 1. If your stated design hypothesis is all we need, couldn't we also define the chance hypothesis as "an unintelligent cause that is sufficient to the data was operating", in which case P(data|chance)=1? Interestingly, Dembski says that design hypotheses don't confer probabilities, and, in fact, that there is no design hypothesis, at least not one that can be compared to a chance hypothesis in a Bayesian analysis. R0b
I'm sorry then for thinking you thought too highly of yourself, but my challenge to your lack of empirical substance stands, as I have seen nothing of empirical merit on your part to back up your claims of extreme favoritism to the Darwinian perspective as far as the interpretation of the probability mathematics is concerned. bornagain77
Professor O -- I enjoy your comments, as always, and I'm glad you are posting. Are you saying probability is never something we should base a decision upon? tribune7
CJYman[67], Specification is Dembski's concept and it is entirely frequentist. It attempts to generalize the concept of rejection region. I have to rush to class now but we can talk more later, if you wish. Prof_P.Olofsson
bornagain[70], Thanks for the kind words. My comment was for Bayesian though. In [63] he asked if he should "abandon the argument solely on his authority" (referring to me). My answer to him is "no." Prof_P.Olofsson
Prof you state: You should not base anything on my authority but you might also refrain from accusing me of "bluffing." Buddy, you haven't even earned my respect for your "authority", and you damn sure ain't anybody special, as you seem to think you are; furthermore, until you put some actual empirical meat on all your high-sounding chalkboard rhetoric, you ain't gonna earn my respect. bornagain77
Bayesian[63], General comment to keep in mind: Even if "chance" as such is well-defined, there are uncountably many chance hypotheses depending on what probability distribution one uses. Prof_P.Olofsson
Bayesian[63], On this issue I agree with Dembski. There is no way we can make "reasonable assumptions" to come up with a prior for ID in examples relating to evolutionary biology. At any rate, such a prior is only a thought construction unless we want to claim that there was an initial random experiment that decided "ID" or "chance." As you are aware, there are those who are opposed to Bayesian methods in general, and this might be a case where they have a point. Let us now wait for Mr. Durston to present his argument. You should not base anything on my authority but you might also refrain from accusing me of "bluffing." Prof_P.Olofsson
Can someone give me their two cents on a question I have. Here it is: Doesn't the measurement for a specification (and the above measurement for functional information -- both equations being extremely similar) seem to combine both Bayesian analysis and frequentist analysis? It first measures P(T|H), which implies measuring a chance hypothesis against an ID hypothesis [Bayesian], and then measures this against all probabilistic resources (M*N) as a "cut-off point" [frequentist]. As such, doesn't this provide an even stronger measure of functional information and detection of intelligence than either a solely frequentist or a solely Bayesian analysis, since the frequentist approach merely creates an arbitrary cut-off point if it does not utilize probabilistic resources, and the Bayesian probability of chance vs. intelligence is based on an arbitrary definition of specification, so that the probability of intelligence is also vague? Is this correct, or how far off am I? I hope I've made sense here, and I would much appreciate it if someone with a deeper knowledge of these fields could put in their two cents regarding this question. CJYman
WeaselSpotting, Money is additive, biological fitness is multiplicative. Prof_P.Olofsson
Thanks for the link, benkeshet. OK, please, someone who can decipher all the math in the paper; bottom line: will they/we finally be able to mathematically demonstrate the empirically established principle of Genetic Entropy and unseat the current evolutionary speculation at the level of molecular biology with the math they demonstrate in the paper? From what I can gather in my very limited understanding, this prospect looks very promising; for instance, in this excerpt of the paper they compared the functional information of a small and a large protein: "Although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY." Thus, from what I can gather, this looks like it may be sufficient to establish Genetic Entropy, i.e.: "But in all the reading I've done in the life-sciences literature, I've never found a mutation that added information… All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it." Lee Spetner (Ph.D. Physics - MIT) And, commenting on a "fitness" test which compared the 30-million-year-old ancient amber-sealed bacteria to their modern descendants, Dr. Cano stated: "We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative "ancient" B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate." RJ Cano and MK Borucki Thus: a loss in functionality and information, as well as conformity to Genetic Entropy. bornagain77
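The Fit-density comparison quoted from the paper is simple arithmetic and can be verified directly; only the residue counts and Fit values quoted above are used:

```python
# Fit density = Fits per amino-acid residue, using the values quoted above
proteins = {
    "SecY": {"residues": 342, "fits": 688},
    "RecA": {"residues": 240, "fits": 832},
}
for name, p in proteins.items():
    print(name, round(p["fits"] / p["residues"], 2), "Fits/residue")
# SecY 2.01 Fits/residue
# RecA 3.47 Fits/residue  -> RecA packs more functional information per residue
```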
This looks interesting. "Measuring the functional sequence complexity of proteins" Background Abel and Trevors have delineated three aspects of sequence complexity, Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC) observed in biosequences such as proteins. In this paper, we provide a method to measure functional sequence complexity. Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors benkeshet
@Mark and PO: PO could be the incarnation of Thomas Bayes himself, but if he doesn't provide us an example that shows that, under reasonable assumptions, the probabilistic argument is nullified, then what can I do? Abandon the argument solely on his authority? Now, Mark has tried that, which is commendable, and gives me something to work with. First, PO: P(ID)=P(chance)=0.5 is simply stating that ID is as likely as materialistic Darwinism. Personally I agree that, because of Occam's razor, it is entirely reasonable to be biased in favor of a materialistic origin of life. So taking a very low P(ID) is very reasonable, perhaps P(ID)=10^-9 or even P(ID)=10^-150, which is Dembski's universal probability bound. And, as far as I'm concerned you may assume P(ID) as small as you like, provided you can give a reasonable explanation for your choice. For instance "Because I want P(data|chance) to be larger than 10^-150" does not seem reasonable to me. And taking P(ID)=0 is entirely unreasonable, because then we are excluding ID a priori. @Mark: Durston said that intelligence, e.g. human intelligence, is capable of producing proteins. In effect this means P(data|ID)=1. In other words: If someone or something with sufficient intelligence sets out to create the stuff (proteins/primitive organism), it will succeed. I find this an entirely reasonable assumption. It has been often stated, on this blog and elsewhere, that "ID includes all possible hypotheses that involve intelligence." And we all know what that means: somewhere there is or was an intelligent person, alien, god or force that somehow put information in living things. That force could even be a programmed culling force, such as in Dawkins' weasel program. ID is well defined, and you are right that 'chance' isn't, but the easy solution to that is to define chance as anything that does not involve intelligence. Then it is well defined. Bayesian
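Bayesian's argument is easiest to see in odds form: posterior odds = prior odds × likelihood ratio. The sketch below uses his own numbers where he gives them (P(ID) = 10^-150, P(data|ID) = 1) and a hypothetical 10^-250 for P(data|chance) as a stand-in for a hard-to-hit target; that last value is an assumption for illustration, not anything he states.

```python
# Posterior odds of ID vs. chance, kept as base-10 exponents.
log10_prior_odds = -150.0            # most generous concession: P(ID) = 10^-150
log10_P_data_given_ID = 0.0          # his assumption P(data|ID) = 1
log10_P_data_given_chance = -250.0   # hypothetical likelihood under chance

log10_posterior_odds = (log10_prior_odds + log10_P_data_given_ID
                        - log10_P_data_given_chance)
print(log10_posterior_odds)   # 100.0: odds of ~10^100 even at that tiny prior
```

This is exactly why he argues the conclusion is insensitive to the choice of prior, and why critics instead attack the likelihood terms themselves.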
I've never really enjoyed the lottery analogy for evolution: firstly, nobody (and nothing) knows that they have entered the lottery; secondly, supposedly you don't win but your offspring does (and they don't share!); when you win, you don't know it ... yet you've already collected; and lastly, it doesn't really mean anything until you have reproduced and passed on your winnings. Is this a fair assessment? Great to also see the author on board! AussieID
Paul, Thanks for the 'peer review' :) I was quite a zombie yesterday evening, as illustrated. At least I have evidence that someone read what I wrote. Later I realized that I should have written 4^2 in the first place and referred to a 2-nucleotide-long RNA; then it would have been fine, but never mind. Alex73
I want to apologise to Kirk for the strength of my language in #44 and elsewhere. It makes quite a difference when you find the author is reading what you write! And it is a lesson for me that I should always assume that will happen. I do stand by my criticisms of what I saw in the video and look forward to a more detailed explanation. Mark Frank
The payback is 1 million in both cases. Only if you wager your entire initial $1,000 winnings. But that's not how evolution is supposed to work, is it? WeaselSpotting
Guys, The lottery example is not helpful. What evolutionary theory actually posits is that the $1,000 winners "earn" enough to produce thousands of babies, who then grow up to play the lottery again, and then on the second (or is it 20th?) generation all of them can play again, making the second step virtually certain. This is all fine if the first step is in fact advantageous. But if it is not, Prof_P.Olofsson is right, there is really no advantage, and if anyone winning only one lottery is killed, then if two lottery tickets are not bought at the same time, the two-step method is worse off than the single try. (Actually, to be very technical, in the example given, if the $1,000 "invested" in the original lottery is "reinvested", the chance of winning $1,000,000 on the second round is less, because it depends on winning $1,000 a thousand times in a row, which is roughly 1 in 10^3000. But if the example is of a "fair" lottery, the probability of winning 1 million dollars in one fell swoop is the same as winning $1,000 twice in a row.) The real question is not the mathematics. That is rather straightforward (although sometimes this seems to be a challenge also, as when someone describes 4^4 as first 16, then 64, instead of 256 :) ). The real question is which mathematics corresponds to the usual biological situation. That is no longer in Prof's field of expertise, although I'm sure that, like the rest of us, he is an intelligent observer. I don't know of any 3-mutation payoffs that have been observed. The most I have seen are 2-mutation payoffs, such as chloroquine resistance, and now apparently citrate transport and nylonase. What we need is not statistics. It is biochemical/genetic data. Paul Giem
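[Editor's note: the lottery arithmetic in the comment above is easy to check directly. The sketch below is illustrative only and is not from the thread; the variable names are mine.]

```python
from fractions import Fraction

p_small = Fraction(1, 1000)       # a 1-in-1000 lottery
p_big = Fraction(1, 1_000_000)    # a 1-in-a-million lottery

# Winning the small lottery twice in a row has exactly the same
# odds as winning the big one once:
print(p_small ** 2 == p_big)      # True

# "Reinvesting" and winning the small lottery a thousand times in
# a row is astronomically worse: (10^-3)^1000 = 10^-3000.
print((p_small ** 1000).denominator == 10 ** 3000)  # True
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point underflow for numbers as small as 10^-3000.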
WeaselSpotting[56], I don't think you read my post [52] carefully enough. The payback is 1 million in both cases. Prof_P.Olofsson
Excellent point, Gil. Using Prof Olofsson's example, the odds of winning two 1000's (in a row) are the same as winning a million once. The odds of the two events occurring are the same, but the payback is hugely different... in one case you get 2,000, in the other 1,000,000. WeaselSpotting
KD, I want to hear your input. Please come back soon. :-) Domoman
Professor Olofsson, what is your estimate of the probability that incrementalism can do what Darwinists say it can do? StephenB
bornagain[52], You say
My respect for your debating style is plummeting the longer I see you misleading the presentation of facts
which I think you ought to back up with some examples. When did I "mislead the presentation"? I have blogged on UD a few times before so I am used to such sweeping allegations but once in a while it would be nice to learn if there is any substance to them. I would like to point out that my comment about learning "last week" was a joke (Mr. Dodgen sounded a bit angry so I thought I would lighten the mood). Of course I've known it for a long time, ever since second year of graduate school. I would also like to assure you that my comment[38] about Mr. Bayesian was a joke. I am sure that Mr. Bayesian is a handsome man who smells nicely of lavender and pomegranate. My comments about probabilities are serious, factual and accurate to the best of my knowledge. Prof_P.Olofsson
bornagain[51], My point was that I don't see how the kind of additivity that Mr Dodgen assumes is relevant in biology. What is it that adds up the way money does? Far more relevant, it seems to me, is multiplicativity. So instead of buying fixed-price, fixed-payout lottery tickets, think of games where you win in proportion to your wager. As you rightly point out, the chance to win twice in a 1/1000-probability game is the same as once in a 1/1000000-probability game, and if you wager your 1000-dollar win, you have your million. In this sense, the two scenarios are probabilistically identical. In evolutionary biology (about which I know far, far less than about probability), you can only multiply probabilities for neutral mutations; those that remain in a roughly fixed proportion in the population. Favorable mutations will appear in an increasing proportion, so in that case you have a better chance to do it in two small steps than in one large step. I suppose we could do some computations here with reproducing individuals, mutation rates etc. Prof_P.Olofsson
So professor, did you also learn that winning two 1-in-1000 lotteries carries the same odds as winning only one 1-in-a-million lottery, thus Gil's rightful stress that the odds get far worse? Please tell me you aren't deliberately being deceptive (then again, if you were deliberate, would you tell me?). Others express respect for you, and I'm sure in some areas you may merit it, yet my respect for your debating style is plummeting the longer I see you misleading the presentation of facts. Slightly off topic video I just loaded: A Few Hundred Thousand Computers vs. A Single Protein Molecule http://www.youtube.com/watch?v=G-6mVr6vJJQ bornagain77
GilDodgen[47], I learned last week that 1000 times 1000 equals a million. Prof_P.Olofsson
Gil: what math is that? Venus Mousetrap
Robbie[43], Thanks for your kind words! Thanks also for providing the link because it reminded me it needed updating. Prof_P.Olofsson
Darwinists insist that the obvious probabilistic barriers presented by living systems can be breached through incrementalism. But this is probabilistic nonsense on its face. When boiled down to the basic argument, the Darwinist thesis is: It might be highly unlikely that you will win the million-dollar lottery, so just win the thousand-dollar lottery a thousand times! Problem solved. Not! The problem becomes astronomically greater. Do the math. My conclusion is that Darwinist philosophers don't have a clue about basic mathematics like this, which I learned from my father when I was in grade school. GilDodgen
Now this is going to get very interesting!!!! bornagain77
Hello Kirk, Good to have you here to answer questions about your lecture! DaveScot
Re #35 I didn't pick up the assumption that P(data|ID) is one. This certainly makes a difference. Then for very small values of P(data|chance) and the assumption P(chance) = P(ID) = 0.5, it is true that effectively P(chance|data) = P(data|chance). However, this is a hell of a set of assumptions. Consider P(ID) = 1 - P(chance). So in this case ID includes all possible hypotheses that involve intelligence. I am not sure that it is possible to attach a meaning to that. But to then go on and assume that the observed data is certain given this megahypothesis! We are truly in la-la land here. Actually we have no idea what the prior hypotheses, chance and design, mean. They are just too vague. We have a very broad estimate of the probability of the outcome given one of the chance hypotheses, and no idea of the probability of the outcome given the meaningless set of design hypotheses. Given that, it is an error to confuse P(data|chance) with P(chance|data). Actually this is a minor error compared to some of the big ones. The real big one is the weird statement about the number of fits required for a fitness function. This is the only time he addresses natural selection, but it is stated without proof or reference. In fact it is nonsense. I guess what he is trying to say is: given a target, what are the chances of creating a fitness function leading to that target? There may be some research that supports this. But evolution is not seeking a target, so it doesn't need a fitness function that matches the target. Rather, it reacts to whatever fitness functions happen to be out there and is led by them. And we know there are highly complicated fitness functions out there; they are the environment. Mark Frank
Bayesian: Prof P., and all those that bring up Bayesian probability, you’re bluffing. Do the math, or shut up. You do know who Prof_P.Olofsson is, no? I may be going out on a limb here but I doubt 'doing the math' is his problem. Call me crazy... but, perhaps some respect is in order. Seeing as he's an expert in the field and is kind enough to contribute at all. Dr. Dembski rarely, if ever, contributes to the discussions and debates around here so I'm thankful for Prof. Olofsson's contributions on the topic of ID as it relates to mathematics. Regardless of his opinion of ID in general. Thanks, Professor Olofsson. Robbie
Hello folks. I've been away for 6 days and just got back last night. There is a backlog of work that has piled up, some of which is pressing, but I'll post a comment in the next day or so. The video you see is a portion of a lecture I gave to a general audience of students at the University of Edinburgh about a year ago, so it is quite simplified and there is very little technical detail. Those who are concerned about how I did the probabilities can relax; I'll post more about that when I get a little more time in a day or so. Also, I noticed that there are some who are assuming natural selection is relevant in assisting the search for folding functional proteins in sequence space. It isn't. Finding a folding functional sequence is not a hill-climbing problem. I'll look forward to posting a little more detail in a day or so. Cheers, Kirk KD
bornagain[25], In fairness, you made the conditional statement
Do you want to argue that the Venter watermark was not intelligently designed? If so there is no point in trying to persuade you since you are unreasonable.
and as the answer is "no" I suppose I am still conditionally reasonable, please? Prof_P.Olofsson
Bayesian[21,35], Mark[32]: Bayesian, Mark is right and you and I are both wrong. Your claims are wrong, and I was wrong in assuming that you know how to do the calculations. If we have P(ID)=1/2, P(chance)=1/2, P(data given chance)=p, and P(data given ID)=1, then Bayes' rule gives P(chance given data)=p/(1+p) and P(ID given data)=1/(1+p), so your claim in [21] is only true if p=0. In other words, Mark's comment [32] is correct. Prof_P.Olofsson
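[Editor's note: the algebra in the comment above can be verified numerically. This is a minimal sketch under the stated assumptions (equal priors, P(data|ID) = 1); the function name is mine.]

```python
from fractions import Fraction

def posterior_chance(p):
    """P(chance | data) via Bayes' rule with P(ID) = P(chance) = 1/2
    and P(data | ID) = 1, where p = P(data | chance)."""
    prior = Fraction(1, 2)
    # Bayes: P(chance|data) = p*P(chance) / (p*P(chance) + 1*P(ID))
    return (p * prior) / (p * prior + 1 * prior)

p = Fraction(1, 1_000_000)
assert posterior_chance(p) == p / (1 + p)   # matches p/(1+p)
assert posterior_chance(Fraction(0)) == 0   # equals p only when p = 0
```

With p = 10^-6 the posterior is 1/1000001, slightly larger than p itself, which is why the claimed identity P(chance|data) = P(data|chance) holds only in the degenerate case p = 0.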
I think I want to stand in the dining room, for the heat in the kitchen is getting intense, LOL. But just a thought before I scamper off: given the infinite-universe conjecture postulated by Koonin and others to evade the overwhelming evidence for design that is being found at all levels of science, what is the probability of an "infinitely powerful and transcendent being" "evolving" in one of these other infinite universes? I would say the probability is 1. bornagain77
Bayesian[36], I can stand the heat of a rational and respectful discussion. Claiming that I am bluffing represents neither. But OK then, let's go personal: you are ugly and smell slightly of boiled potatoes! You obviously know how to do the calculations, but I still agree with Bill Dembski; there is no way we can compute, or estimate, P(ID), P(chance), or P(data|ID). Sure, we can say that an intelligent designer would always get precisely the result we see and claim that P(data|ID)=1, but that's already a biased assumption. And even if we agree on it, if you choose the "unbiased" prior of 50-50, the entire setup is biased in favor of ID. If, in a court case, you replace "chance" by "innocent" and "ID" by "guilty", and consider some piece of circumstantial evidence such as a DNA or fingerprint match, you would always convict. Prof_P.Olofsson
Well, I can't seem to draw any conclusions from the discussion going on between all these people. It's interesting though, very interesting! :) PO, you should come back. How am I supposed to draw a conclusion without hearing more from both sides? While I very seriously hope that Durston is accurate in his calculations, I'm not going to jump the gun. Although, I will admit that I'm assuming that the case for ID is going to turn out much, much better than chance and/or natural selection. Domoman
Do what you must, PO. At the very least, you could give us a link. But if you can't stand the heat, you may as well get out of the kitchen. Bayesian
#32 (Mark): No. As a matter of fact, in Durston's calculations P(data|ID) = 1 and P(data|chance) is the low non-zero probability he calculated. If you think I'm wrong, then please show me a different calculation. If I work out the math (and believe me, I'd love to), the critics will simply accuse me of demolishing a straw man and/or misapplying Bayesian probabilities. So let the critics show us exactly how Bayesian probabilities nullify or weaken the probabilistic arguments, and then we'll see if they are right. Otherwise, stop yelling "Bayes!" and engage the meat of the arguments. Bayesian
Bayesian and bornagain, Well then, as I have now been exposed as (a) bluffing and (b) unreasonable, I'd better stay away from the discussion to leave it to the honest and reasonable. Cheers, PO Prof_P.Olofsson
The -log2 arises because functional information is based on the Shannon notion of uncertainty, and indeed this is the problem I had with it. It is not a meaningful measure of information. Durston has an example with a dodgy safe, but I have a better one: keys. Suppose there exist a billion keys in the world, and you are testing them on their ability to open one specific lock. If it's such a badly made lock that all keys will open it, the functional information of the lock is -log2[one billion / one billion], which is zero bits. If half the keys will open it, the information is -log2[half a billion / one billion], which is one bit. If only one key will open it, the information is -log2[1 / one billion], which is about 30 bits. All these bits just tell you how much information you'd need to distinguish a functional entity from a non-functional one. That's why, when he does the calculation for Venter's watermark, he gets such high information; there is only one Venter's watermark out of quadrillions of possible sequences. That only happens because he knows there is only one. It's not a very interesting result. If you define a function as 'be this precise sequence of nucleotides', of course it has high information. Even a sequence of hundreds of G nucleotides has high information by that definition, yet you know it's not very complex. If we were getting messages sent to us in English in our DNA, then I'd be impressed, and I'd have to admit that some intelligence had put it there. But we're not, and falling back on functional information is less impressive when it's defined in terms of ability to perform functions - something that natural selection is known to effect. Venus Mousetrap
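[Editor's note: the key-and-lock numbers in the comment above are easy to reproduce. Below is a minimal sketch of the -log2(M/N) measure as the commenter describes it; the function name is mine.]

```python
import math

def functional_bits(m_functional, n_total):
    """Functional information in bits: -log2(M/N), i.e. the bits
    needed to single out a functional entity among all candidates.
    Computed as log2(N/M), which is algebraically identical."""
    return math.log2(n_total / m_functional)

N = 10**9                           # a billion keys
print(functional_bits(N, N))        # all keys open the lock: 0.0 bits
print(functional_bits(N // 2, N))   # half open it: 1.0 bit
print(functional_bits(1, N))        # exactly one opens it: ~29.9 bits
```

The last value is 9 * log2(10), about 29.9 bits, matching the "about 30 bits" figure for a one-in-a-billion key.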
Re #21 But if we start as agnostics and assume that P(ID) = P(chance) = 0.5. and apply bayes rules, then we find P(data given ID) = P(ID given data) and P(data given chance) = P(chance given data). Surely this is only true if P(data|ID) = 1 - P(data|Chance)? Mark Frank
Bornagain77, thanks for the full link. Durston's argument seems much more complete now. That said, I still detect a major flaw in his case against neo-Darwinism. Durston argues that the necessary information content of the fitness function must be equivalent to the amount of change that has taken place. He argues that there is a large, quantifiable amount of information change between a fish genome and an amphibian genome; therefore there must be a fitness function that itself contains this amount of information. I think his logic is weak. Consider the amount of difference between the genome of the fish and the genome of the reptile. These genomes are more different than those of the fish and the amphibian. His logic suggests that the necessary fitness function must now be greater than the fitness function required to get from fish to amphibian. I hold that two fitness functions, one with the information content to get from fish to amphibian and another with the information content to get from amphibian to reptile, would be fully adequate for the job. So, if there are intermediate steps, each intermediate step requires a fitness function with sufficient info to get to that step. The total information of all of these fitness functions must be sufficient to get to the final step. However, no individual fitness function need have all of the information in it. Now, Darwinists know of only one fitness function. This fitness function, natural selection, has possibly 2 bits of information in it -- 4 states as rendered in any Darwinian conflict: 1 - Organism A is better suited for this specific environmental event. 2 - Organism B is better suited. 3 - Both are adequately suited. 4 - Neither is adequately suited. That is the total information content of the known fitness function. However, the neo-Darwinists also propose a whole whack of intermediates.
In fact, for each mutational event (which could include insertions, deletions, HGT, etc., not just point mutations), the fitness function is applied. The fitness function, then, is applied at the level of a single bit of increased information (even a multi-nucleotide mutation, as a single mutational event, only produces one bit of information -- "this organism is changed"). As such, the 2-bit fitness function is fully adequate to handle the single bit of new information that it is applied against. Now, each time natural selection is applied, it is applied under slightly different circumstances. As such, each time the function is applied, it adds its 2 bits to the total information content of the sum of the fitness functions. The number of bits of information in the sum of the fitness functions then equals twice the number of times the function has been applied in the transition between fish and amphibian. I therefore contend that Durston's case is not by any means strong enough to defeat neo-Darwinism. bFast
Sorry, it is getting late here. 4^4=64 64/8=8 log2(8)=3 So in my example it would need 3 bits to specify the functional ones out of the entire lot. Alex73
bornagain77: Thanks for the video and the link to the article. The role of the log2 function is obvious: the information in the formula is measured in bits. The figure N/M(Ex) tells the ratio between all possible sequences and the ones with a function. log2(N/M(Ex)) tells you how many bits you need to specify a functional sequence out of all possible sequences. For example, if you have 4 nucleotides in your RNA, then you have 4^4=16 possible sequences. If 8 of these exhibit a function, then you need log2(16/8)=1 bit of functional information, which somehow relates to that half of the lot that exhibit the function. Am I right? Alex73
jerry[17], It wasn't clear to me what I said that you disagree with. Prof_P.Olofsson
bornagain[21], I have no problems with the equation, or the attempts to use information theory per se. In my post [7] I criticize his confusion of conditional probabilities. If there is a 1-in-a-million chance that something happens by chance versus some other explanation, you cannot say that "chance is a million times less likely." You'd put a lot of innocent people in jail with such logic! I don't want to belabor this point further. I also don't want to brush aside his entire argument just because he misstates his conclusions; that would not be fair. Hopefully we will get to hear from Mr Durston himself at some point. As for the logarithm, the main reason for using it is to "turn multiplication into addition" by the logarithm law log(ab)=log(a)+log(b). It is easier mathematically to deal with additivity. Prof_P.Olofsson
tribune[13], Ever the nonsequiturian! Prof_P.Olofsson
bfast, Prof_P.Olofsson, I agree with Jerry. The equation is universal, as far as I know, and can be applied in forensics, archaeology, SETI and, most intriguingly, to the Venter watermark: do you want to argue that the Venter watermark was not intelligently designed? If so, there is no point in trying to persuade you, since you are unreasonable. As well, this is not an argument from what we do not know; this is an argument from what we do know. I find his case for the functional information of the highly isolated islands of structural integrity for proteins, based on D. Axe's work and the universal proteins, to be especially compelling. Do you want to argue that proteins are not highly isolated in structural integrity, or that universal proteins are not required in evolutionary theory? If so, provide the empirical evidence to the contrary, but do not hand-wave, as that line of reasoning is "not even wrong". Though I am not all that mathematically literate, the equation itself is, as far as I can tell, a brute-force measure that will get us into the ballpark as to determining the probability that this functional information was implemented from the primary "transcendent information dimension" of reality. I am disappointed that no one here at UD has elucidated why the -log function is necessary. I am fascinated that this particular function is required. That some would suggest it is possible to divorce probability from a measure of functional information is a very unrealistic scenario for biology. To completely eliminate probability from any scientific measure of functional information in biology would require that we know every possible configuration of every possible protein that could ever possibly exist in this universe, so as to have complete knowledge of the functional information possible in this universe! Do you claim to know every possible configuration of proteins that will produce any function being studied?
Shoot, it takes an entire year for the Blue Gene supercomputer to figure out how one 300-amino-acid protein will fold into its 3D structure. Thus it seems apparent to me that, since we are clearly at such a disadvantage in obtaining complete knowledge of the total functional information possible in this universe, we are stuck with using the brute-force measure illustrated in this equation. For we are indeed operating within what we do know as to the isolation of islands of functional information within protein structures, and indeed within all the evidence that is coming in for the complexity required for the first self-replicating molecule. (In fact, 382 genes is seen as generous by some, since even at that number robustness of adaptability, and thus longevity, is questioned.) As well, Behe's work in "The Edge" supported Genetic Entropy, with no concrete exceptions to this foundational rule of decay that I could see, save for the exception that was required for the HIV binding site. Yet even there the HIV deteriorated the complexity of its host in order to gain its trivial binding site of functional complexity. Why should we expect molecules to self-organize into "islands of extreme functional irreducible complexity" when all research thus far indicates that molecular degeneration is the overriding rule? Evolution absolutely requires the ability to randomly generate functional complexity, and yet no evidence exists that this ability is present in nature. As tribune7 stated: we won; now it's time for someone to wake the evolutionists up and let them know they can stop dreaming evolutionary fairy tales to tell our children. bornagain77
Jerry (17 & 18) very well said. tribune7
Figures, I'm only an hour away from there. Didn't even know about it. IRQ Conflict
minor edit, in the second paragraph "then P(ID given chance) will be one." should obviously read "then P(chance given data) will be one." And in the second to last paragraph: "So if (chance given data) = 10^-80, it is now 10^71." should read: "So if P(data given chance) = 10^-80, then we now have P(chance given data) = 10^-71." Bayesian
Prof P., and all those that bring up Bayesian probability, you're bluffing. Do the math, or shut up. You are technically right, in that we do need to make assumptions about the a priori probabilities. Obviously, if we assume that the probability of ID is zero, and that of chance is one, then P(ID given chance) will be one. But if we start as agnostics and assume that P(ID) = P(chance) = 0.5 and apply Bayes' rule, then we find P(data given ID) = P(ID given data) and P(data given chance) = P(chance given data). This is why Bayesian probabilities are irrelevant to those who approach the subject neutrally. Now if you assume, a priori, that either chance or ID is more likely, then, to the degree of your skew, the posterior probabilities will also be skewed. So if you say, a priori, chance is a billion times more likely than ID, then the posterior probabilities will also be a billion times higher in favor of chance. And this is where the margins come into play: a billion is merely 10^9. So if (chance given data) = 10^-80, it is now 10^71. To change the odds in favor of chance, you need not only be prejudiced in favor of chance, but prejudiced to an almost infinite degree. p.s. In the above I assumed that chance and ID are mutually exclusive and that there are no other possibilities, that is, there is a probability of one for chance or ID. AFAIK nobody in either camp is seriously questioning these assumptions. Bayesian
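[Editor's note: the "skew" arithmetic in the comment above (using the sign corrected in the follow-up, i.e. 10^-71) can be sketched in log space. This is an illustration only, assuming P(data|ID) = 1 as stated; the function name is mine.]

```python
# Posterior odds (chance : ID) = prior odds * likelihood ratio.
# Working in log10 keeps quantities like 10^-80 manageable.

def log10_posterior_odds(log10_prior_odds, log10_likelihood_ratio):
    """Bayes' rule in odds form, on a log10 scale."""
    return log10_prior_odds + log10_likelihood_ratio

# Equal priors (odds 10^0) with P(data|chance) = 10^-80:
print(log10_posterior_odds(0, -80))   # -80

# A prior skewed a billion-to-one (10^9) toward chance only
# shifts the posterior odds to 10^-71:
print(log10_posterior_odds(9, -80))   # -71
```

The additivity of logs makes the point directly: a prior skew of 10^9 moves a likelihood ratio of 10^-80 by only nine orders of magnitude.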
Looks like Durston wiped the floor with PZ in their recent debate at the University of Alberta, Edmonton. You should hear him whine about it over at his blog. Good stuff. Robbie
I saw this a while back on the Edinburgh Creationist Group's website, and was disappointed. His functional information simply gives the bits required to distinguish entities capable of performing a function well from other entities which perform the function (i.e., if the function is so easy any random blob can perform it, the information is pretty much zero, since one can pick out any one blob to do it). But he spoils it, as all people making these arguments seem to, by messing up on evolution, by simply defining it away as a random process. In fact, what he does makes little sense. He first defines functional information in a way we're all familiar with from William Dembski's work: it's the improbability of selecting an entity capable of performing the function, or -log2[M/N], where M is the number of entities which can do it and N is the total number of entities. But for evolution, he does something weird; he says that M is 1, and N is the total number of trials evolution does over 4 billion years, which he gives as 10^42. He doesn't define the function! He's saying that an unknown function exists, there are 10^42 possibilities which won't work, and 1 which will, and evolution can pick it out over four billion years. If that's not a tornado-in-the-junkyard then I don't know what is. What does it have to do with a process whereby multiple replicators with multiple functions undergo natural selection which may add, remove, or change these functions to improve local fitness? I notice, also, that he's allowed to add up the improbabilities of all functions in a genome to get his 10^80000 figure - but he cripples evolution by giving it only one function. He's doing it wrong. Venus Mousetrap
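[Editor's note: for concreteness, the figure implied by M = 1 and N = 10^42 trials works out as below. This is a sketch of the arithmetic described in the comment, not Durston's own calculation.]

```python
import math

# With one functional outcome among 10^42 trials,
# -log2(M/N) = log2(10^42) = 42 * log2(10) bits.
bits = 42 * math.log2(10)
print(round(bits, 1))   # 139.5
```

So the "one success in 10^42 trials" framing assigns roughly 139.5 bits of functional information to an otherwise undefined function.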
I should have said at the end of my last comment: "One has to show that life is not just one of these few perfect hands but just any old good bridge hand to justify waving away the numbers." jerry
bfast, Prof_P.Olofsson, I strongly disagree; numbers are the essence of this discussion. To blow this off is just to blow it off because one wants to, and that is nothing but hand-waving. Somehow this functional complexity had to be built, and whether it was piecemeal or in gross steps, it arrived in this world, and its structure represents an extremely high level of improbability given the number of potential structures of equal complexity. The way this high level of improbability is dismissed is to say that the arrived-at solution is just one of a zillion other possible structures that are equally functional. If it is unique, or a member of a very small subset, then one has a real job of explaining how this particular needle was found in the haystack. It's one thing to say there are millions of needles in the haystack and it won't take much to find one. But if there are that many needles, then find some others. If one cannot find another needle, then one has to perhaps admit that this is the only needle, or that there may be other needles but they are few and hard to find. If there were steps along the way, then one has to show that there is a series of subsets that are incredibly large in number and functional. In one of the videos I believe Durston referred to the number of potential proteins that could be functional. This seems like a testable hypothesis, either now or in the future: whether an arbitrary protein of any length is functional or not. If the numbers are so small at any length, then the likelihood of a large number of smaller subsets forming is nil, let alone the final product, which is by definition many orders of magnitude more complicated. So I am sorry, but numbers are extremely important, and arbitrary hand-waving is not going to do it. First show that more than one subset can exist and be functional and is not related to anything in the world, and you have only started.
You must show that zillions of equally functional subsets also exist, none of which are related to what exists in the world, because there must be an almost infinite number of subsets in order to justify a slow build-up to the one lucky final set that made it through. That is the only way to hand-wave away the tyranny of these numbers: to show that it is just one of a near-unlimited number of other possibilities. There are lots of good bridge hands in the deck, so a good bridge hand is not hard to get. But there are only a few bridge hands that have 37 high-card points and are perfect hands. One has to show that life is not just one of these few good hands but just any old good bridge hand to justify waving away the numbers. jerry
If you want to read about a problem with the functional information measure, as well as the active information measure, I recommend the work of a talented mathematician named William Dembski. Google "how we measure information needs to be independent of whatever procedure we use to individuate the possibilities". R0b
According to Durston, he did bring natural selection into account. If such is the case, then he was doing his calculations right. Domoman
tribune7 @1, Yeah, I think it's definitely safe to say we won, by a margin of 10 to the 80 thousandth power. =) Awesome video, thanks for sharing! PaulN
Professor O, now with regard to the point Kirk Durston made about Venter's watermarks: suppose some other lab came up with a genome and said, hey, we beat you to it, neener neener, and that genome had all of Venter's watermarks, which include Craig Venter's name. What methodology could we use to show that the other lab stole the information from Venter? Is there a methodology we can use to show that the other lab's genome did not occur naturally? tribune7
As well, I agree with Henry Schaefer III that you will never find a hypothetical "simpler origin of life".

On The Origin Of Life And God - Henry F. Schaefer, III PhD. http://www.godtube.com/view_video.php?viewkey=d305934f3a43dd87e4e8

As well, "information" is now shown to be its own independent physical entity, with complete transcendent specific dominion over matter/energy, in quantum teleportation experiments:

"It will ultimately emerge that information is a vital and basic element of the world. Information is not a derivative, but something primary that determines reality. Put dramatically, reality and information is (sic) the same thing." Anton Zeilinger, Leading Expert In Quantum Teleportation

Scientific Evidence For God Creating The Universe - Establishing the theistic postulation and scientific validity of John 1:1, "In the beginning was the Word, and the Word was with God, and the Word was God.", by showing transcendent information's complete specific dominion over a photon of energy, as well as its integral relationship with the definition of a photon qubit. http://www.godtube.com/view_video.php?viewkey=f61c0e8fb707e76b0e20

Conservation Of Information - establishing the overriding law of Conservation of Information using Quantum Teleportation and the First Law of Thermodynamics. http://www.godtube.com/view_video.php?viewkey=08979112b6474524fbf3

As well: Evolution Is Not Even A Proper Scientific Theory - The Crushing Critique Against Genetic Reductionism - Dr. Arthur Jones. Cortical inheritance demonstrates the "semi-holistic" nature of information in a cell and its impact on heredity (the body plan of living organisms is actually shown to be encoded separately from the DNA code, in the membrane area of the cell). Dr. Arthur Jones delivers a crushing blow against the Neo-Darwinian paradigm of genetic (DNA) reductionism with cortical inheritance. http://www.godtube.com/view_video.php?viewkey=26e0ee51239e23041484 bornagain77
He talks of natural selection in the lecture at a little over the halfway point (I edited for the youtube 10 minute limit): You can see it here: Intelligent Design - Kirk Durston http://www.seraphmedia.org.uk/ID.xml bornagain77
I'm with Prof_P on this one. The general biological community long ago abandoned the idea that even the simplest known life-form just popped into existence. The assumption today is that something vastly simpler preceded it. If so, then the "random chance" calculation of the likelihood of a 380-protein life-form popping into existence is irrelevant. Durston did not discuss the mathematical challenge of how a very simple life-form, using natural selection, could have developed into the simplest life-form we know today. As such, most of his lecture was simply irrelevant. I do find some value in the mathematical concepts of functional information and "fits". However, our calculations must be based upon natural selection, not a blind search. bFast
Domoman, What numbers? Prof_P.Olofsson
Prof_P.Olofsson, How do the numbers come out then, if you calculate it? Domoman
The claims in his examples show a classic confusion of conditional probabilities, for example: "It was about 10^80000 times more probable that ID was required…" In other words, he claims that P(ID given data) >> P(Chance given data), when all he can attempt to compute is P(Data given Chance). For anything else, he would need prior probabilities, an application of Bayes' rule, etc. In other words, he is making a Bayesian claim without applying Bayesian methods. But don't take my word for it; just read Dembski's "Elimination vs. Comparison" paper, page 7, points (1) and (2), with which I completely agree. This issue should have been sorted out and dismissed once and for all long ago. Dear tribune, there is no way that this talk represents PhD work at Guelph! Prof_P.Olofsson
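[Ed.: the distinction Prof_P.Olofsson is drawing can be illustrated numerically. The likelihoods and priors below are made-up toy values, not anything Durston computed; the point is only that P(Data given Chance) alone does not determine P(Chance given data) until priors are supplied and Bayes' rule is applied:]

```python
# Toy illustration of Bayes' rule for two rival hypotheses A and B.
# All numbers below are invented for illustration only.
def posterior(prior_a, lik_a, prior_b, lik_b):
    """Posterior probability of hypothesis A, given P(data|A), P(data|B)
    and prior probabilities for each hypothesis."""
    num = prior_a * lik_a
    return num / (num + prior_b * lik_b)

# Identical likelihoods, different priors, very different posteriors:
p1 = posterior(0.5, 1e-9, 0.5, 1e-4)       # equal priors -> about 1e-5
p2 = posterior(0.999999, 1e-9, 1e-6, 1e-4) # lopsided priors -> about 0.91
print(p1, p2)
```

A tiny likelihood under "chance" can still yield a high posterior for chance if its prior is high enough, which is why a claim of the form "10^80000 times more probable" cannot be read off the likelihood alone.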
Sorry, I meant *Take a look at*. Must have walked away and then come back to finish my sentence haha. Such is the burden of work getting in the way of my UD reading. =P PaulN
Hahah, good one Enezio. I'll definitely have to take a watch this when I get home after class tonight. Reading the few responses alone is quite exciting. PaulN
My goodness! Is Kirk wearing a bullet-proof jacket? Enezio E. De Almeida Filho
Might be about time that Dawkins concedes to those aliens... lol Domoman
Interesting to note this paragraph from his conclusion: " The formalism also points to strategies, such as increasing the concentration and/or diversity of molecular agents, that might maximize the effectiveness of chemical experiments that attempt to replicate steps in the origin of life." Laminar
I would say we won. Far less certain than the intelligent design of life, however, is whether Kirk Durston will get his PhD. tribune7

Leave a Reply