Uncommon Descent | Serving The Intelligent Design Community

Chance, Law, Agency or Other?


Suppose you come across this tree:

[Image: Tree Chair]

You know nothing else about the tree other than what you can infer from a visual inspection.

Multiple Choice:

A.  The tree probably obtained this shape through chance.

B.  The tree probably obtained this shape through mechanical necessity.

C.  The tree probably obtained this shape through a combination of chance and mechanical necessity.

D.  The tree probably obtained this shape as the result of the purposeful efforts of an intelligent agent.

E.  Other.

Select your answer and give supporting reasons.

Comments
Bob O'H: If you're going to be a hypocrite, you're not worth dealing with. By the way, I "introduced" cooption because cooption is the basis of the whole argument of the change from T3SS to flagellum. I have to remind you that the function of the T3SS is definitely not the same as the function of the flagellum, so you can in no way have a direct path which simply increases the first function, magically arriving at the second. Anyway, even if I suggested the possibility of intermediate cooptions, just to "increase the credibility" of your model, I also took into consideration the simple split between the two existing functions, as you would know if you had read my posts, or if there were no hypocrisy in the world. Here, again, is the relevant part:

"At this point, probably not knowing what else to say, you go back to the concept of gradual fixation. But what are you speaking of? Please explain how you pass from one function (T3SS) to another (the flagellum) through 490 or so single amino acid mutations, each one fixed in the name of I don't know what function increase. Are you thinking of, say, 245 gradual mutations improving the function of the T3SS at single steps, and then 245 more single mutations gradually building the function of the flagellum? What a pity that, of those 490 or so intermediaries, there is not a trace, either in existing bacteria, or in fossils, or in general models of protein functions. Or, if we want to exercise our fantasy (why not? in time I could become a good Darwinist), we could split the cake in three: 150 mutations improving the T3SS, 150 more to build some new, and then lost, intermediary function (double cooption!), and then 190 to build up the flagellum."

It's a good thing that posts are saved on a blog, and we can always look again at what has been said. In spoken language, I have often found people who simply deny what has just happened in the discussion. It's interesting to find them even in written discussion, but at least here the evidence is under the eyes of all. Well, I hope this is it at last. Goodbye, and I really wish you the best.
gpuccio
June 7, 2008, 12:33 AM PDT
gpuccio @ 201 -
I am afraid there is no purpose in going on with this discussion with Bob. He goes on changing arguments.
And then how, in your very next post, did you respond to my repeating that you're assuming no fitness gain for intermediates? You discussed co-option, something which is not part of the argument I was making, and which I wasn't assuming (I had mentioned gene duplication earlier, but had dropped that line of inquiry because there were more urgent things). It is more than possible for intermediates to have a higher fitness and be carrying out the same function, i.e. for co-option not to be involved. Yep, thank you for, um, changing the argument. If you're going to be a hypocrite, you're not worth dealing with. I'd still be interested in seeing how F2XL responds to my criticisms of his maths, but I think he has disappeared.
Bob O'H
June 6, 2008, 10:26 PM PDT
Bob O'H: By the way, while I happily admit that I don't know the statistical applications you mention (I will try to study something about them, just to know), I still don't think there is any doubt that statistics is the science which studies random events. Obviously, it is perfectly possible to apply the study of random events to models which make causal assumptions. The Fisherian model of hypothesis testing is a good example. There is no doubt that a causal assumption is made about the test hypothesis, and that it is evaluated by the statistical analysis, but that is done indirectly, by testing the probability of the null hypothesis, that is, of a randomness hypothesis. Whatever the sophisticated applications of statistics, there is no doubt that statistics works with statistical distributions, and that statistical distributions are distributions of random events.

Anyway, all my calculations are merely of random probabilities. There is no causal assumption there. I have readily admitted (see my post #192) that if anybody can give a reasonable scenario where all the steps are no more than 2-3 nucleotides apart, and each step can be fixed by NS, then the probability calculations do not apply. I paste here again the relevant part:

"I think that you obviously understand that there is no hope of splitting the transition into 245 or 163 functional intermediates (2-3 mutation steps). In that case, if you imagine that you can split the transition in such a way, and that each mutation can be fixed quickly, we would be in a scenario similar to the "Methinks it's like a weasel" example. Indeed, a coordinated mutation of 2-3 amino acids is perfectly accessible to very rapidly dividing organisms, like bacteria or the malaria parasite, as Behe has clearly shown in his TEOE. We can accept that with prokaryotes, single mutations can be attained easily enough, double ones with a certain ease, given some time, and triple ones probably imply some difficulty, but can be done (especially if you imagine a whole earth covered with proliferating bacteria)."

So, you see, I am admitting that you can find a pathway from the T3SS to the flagellum, provided that you can give a model of that kind, and not just imagine it. I have already said many times the reasons, both logical and empirical, why such a model, IMO, can never exist. You have never commented on that.
gpuccio
June 6, 2008, 03:52 PM PDT
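As a concrete illustration of the Fisherian logic gpuccio describes above (the causal claim is never tested directly; one asks how probable the data would be under the randomness hypothesis), here is a minimal sketch in Python. The coin-bias scenario and its numbers are purely illustrative, not drawn from the thread:

```python
# Fisherian null-hypothesis testing in miniature: we do not test the causal
# claim ("the coin is biased") directly; we compute how surprising the data
# would be under the null (randomness) hypothesis of a fair coin.
# The scenario (58 heads in 100 tosses) is hypothetical.
from math import comb

n, k = 100, 58  # tosses, observed heads

# One-sided p-value: P(X >= 58) for X ~ Binomial(100, 0.5)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"p-value under the null: {p_value:.3f}")  # ~0.067
```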
Bob O'H: I have almost given up discussing with you, but just one last attempt. Both F2XL and I assume that there is no fitness gain during the passage from the original state (T3SS) to the final state. Indeed, I added my calculations to F2XL's model because I thought (and still think) that he made an error in the mathematical development. I must remind you that the T3SS-to-flagellum scenario was invented by Darwinists to discredit Behe's concept of IC. Therefore, the T3SS system should be the already selected function which is "coopted" to easily pass to the flagellum. Indeed, Miller and Co. just try to convince everybody that, as the proteins for the T3SS are already there, building up the flagellum must certainly be a joke. F2XL's model (and mine) are aimed at proving that that is not true. Even if you start from the T3SS, to get the flagellum you still have to traverse an evolutionary path which is practically infinite, and which is equivalent to building up from scratch a new gene of at least 490 nucleotides. So, your concept of "cooption" is completely flawed.

You insist on function gain as though it were a religious observance. But I must remind you that you have no possible path from T3SS to flagellum, that the argument of the IC of the flagellum stays untouched, and that, if you want to find evidence of hundreds of different coopted intermediate functions which can give a start to a model to traverse that landscape, you are free to try. Simply, the "evidence" given by Miller about the T3SS does not solve any problem, because even if we accept the T3SS as ancestor of the flagellum (which I am not at all ready to do), there remains a huge information gap (at least 490 nucleotides, but again that's a very generous underestimate) which requires the demonstration of hundreds of new coopted functions, of which there is obviously no trace.

That said, it is obvious that both F2XL's calculations and mine had only one purpose: to demonstrate that it is absolutely impossible that such an information gap may be traversed by random means alone. Is that clear? The calculations are calculations of the probability that the result be obtained by random variation, nothing else. F2XL's calculations are (IMO) wrong, but mine (IMO) are not. So, you have to give an answer: do you think that my calculations are right, in the sense that I have correctly calculated the probability that the "traversing" of the 490-nucleotide difference could happen by mere chance? Please, answer. If your answer is no, please show where the mathematical error is. Don't avoid the question by citing again the problem of function gain. That will come after. If your answer is yes, then let us be clear: you are admitting that the passage from T3SS to flagellum could never happen by chance alone, because its probability is vastly below Dembski's UPB. That's the first question. Once you have admitted that my calculations are right (or, alternatively, shown why they are wrong), then we can speak of the assumptions of function gain or no gain. Indeed, I have discussed that in detail in the cited posts, and you have never commented. So, for your comfort, I paste here the more relevant parts.

From post 155: "When we ask for a path, we are asking for a path, not a single (or double) jump from here to almost here. I will be more clear: we need a model for at least two scenarios: 1) A de novo protein gene. See for that my detailed discussion in the relevant thread. De novo protein genes, which bear no recognizable homology to other proteins, are being increasingly recognized. They are an empirical fact, and they must be explained by some model. The length of these genes is conspicuous (130 amino acids in the example discussed on the thread). The search space is huge. Where is the traversing apparatus? What form could it take? 2) The transition from a protein with one function to another with a different function, where the functions are distinctly different, and the proteins are too. Let's say that they present some homology, say 30%, which lets Darwinists boast that one is the ancestor of the other. That's more or less the scenario for some proteins in the flagellum, isn't it? Well, we still have a 70% difference to explain. That's quite a landscape to traverse, and the same questions as at point 1) apply. You cannot explain away these problems with examples of one or two mutations bearing very similar proteins, indeed the same protein with a slightly different recognition code. It is obvious that even a single amino acid can deeply affect recognition. You must explain different protein folding, different function (not just the same function on slightly different ligands), different protein assembly. That's the kind of problem ID has always pointed out. Behe is not just "shifting the goalposts". The goalposts have never been there. One or two amino acid jumps inside the same island of functionality have never been denied by anyone, either logically or empirically. They are exactly the basic steps which you should use to build your model pathway: they are not the pathway itself. Let's remember that Behe, in TEOE, places the empirical "edge" at exactly two coordinated amino acid mutations, according to his reasoning about malaria parasite mutations. You can agree or not, but that is exactly his view. He is not shifting anything."

From post 184: "At this point, probably not knowing what else to say, you go back to the concept of gradual fixation. But what are you speaking of? Please explain how you pass from one function (T3SS) to another (the flagellum) through 490 or so single amino acid mutations, each one fixed in the name of I don't know what function increase. Are you thinking of, say, 245 gradual mutations improving the function of the T3SS at single steps, and then 245 more single mutations gradually building the function of the flagellum? What a pity that, of those 490 or so intermediaries, there is not a trace, either in existing bacteria, or in fossils, or in general models of protein functions. Or, if we want to exercise our fantasy (why not? in time I could become a good Darwinist), we could split the cake in three: 150 mutations improving the T3SS, 150 more to build some new, and then lost, intermediary function (double cooption!), and then 190 to build up the flagellum. Obviously, all that would happen in hundreds of special "niches", each of them with a special fitness landscape, so that we can explain the total disappearance of all those intermediary "functions" from the surface of our planet! Do you really believe all that?"

From post 192: "Patrick: I think that you obviously understand that there is no hope of splitting the transition into 245 or 163 functional intermediates (2-3 mutation steps). In that case, if you imagine that you can split the transition in such a way, and that each mutation can be fixed quickly, we would be in a scenario similar to the "Methinks it's like a weasel" example. Indeed, a coordinated mutation of 2-3 amino acids is perfectly accessible to very rapidly dividing organisms, like bacteria or the malaria parasite, as Behe has clearly shown in his TEOE. We can accept that with prokaryotes, single mutations can be attained easily enough, double ones with a certain ease, given some time, and triple ones probably imply some difficulty, but can be done (especially if you imagine a whole earth covered with proliferating bacteria). The scenario would change dramatically, in the sense of impossibility, for bigger and slower animals, like mammals. But let's stick to bacteria. The fact is, there are only two ways of splitting the path into 2-3 nucleotide changes: a) You need hundreds of intermediates, all of them with progressively increasing function of some type, all neatly ordered in the pathway which leads to the final flagellum. Each of them must have enough reproductive advantage that it can be fixed. b) You know which the correct nucleotides are, and you artificially fix them when they appear. That's the case of "Methinks it's like a weasel". You already have the information, and you just select it. As option b) is obviously design, let's discuss option a). Again, it is not an option, for 3 different reasons: 1) Logical: there is no reason that in a search space functions should be so strictly and magically related. There is no mathematical model which can anticipate such a structure. Indeed, if it existed, that would be heavy evidence for the existence of God. Moreover, protein functions derive from totally empirical phenomena, like protein folding in a number of very different domains, and there is really no logical reason that a ladder of intermediate functions can "lead" from one function to a different one, and not only in a single lucky case, but practically in all cases. 2) Empirical: that process, or model, has never been observed. The examples cited by Bob are definitely not examples of the splitting of a function into intermediate steps, but rather of single steps leading from one function to a slight variation of the same function. 3) Empirical (again): if the final function is reached through hundreds of intermediate functions, all of them selected and fixed, where are all those intermediates now? Why do we observe only the starting function (bacteria with T3SS) and the final function (bacteria with flagella)? Where are the intermediates? If graduality and fixation are what happens, why can't we observe any evidence of that? In other words, we need literally billions of billions of molecular intermediates, which do not exist. Remember that the premise is that each successive step has greater fitness than the previous. Where are those steps? Where are those successful intermediates? Were they erased by the final winner, the bacterium with the flagellum? But then, why can we still easily observe the ancestor without flagella (and, obviously, without any of the amazing intermediate and never-observed functions)?"

So, where are your detailed answers and comments to all that? If you don't want to discuss, just say so. If you have a model of function gain in this specific scenario, please describe it. Do whatever you please. I have said all that I could reasonably say.
gpuccio
June 6, 2008, 03:40 PM PDT
gpuccio - you're right. It's not fair. F2XL tries one derivation of CSI, so I criticise it. You give another, so I criticise yours. And then you complain that I give different criticisms! The fitness gain assumption is important, because otherwise you're just calculating the proportion of the parameter space that could be explored. But, if there is a fitness gain of intermediates, this is an irrelevant calculation, because it will say little to nothing about the probability of the target being reached. It is, I believe, because of this that Dr. Dembski has been working so much on evolutionary informatics.
...statistics is the science of random events, not of causal assumptions.
Sorry, this is just wrong. A lot of modern-day statistics is about causal models. Look up topics like structural equation modelling, path analysis, graphical models, etc.
Bob O'H
June 6, 2008, 11:45 AM PDT
Patrick: I am afraid there is no purpose in going on with this discussion with Bob. He goes on changing arguments. F2XL's calculations are wrong because they are statistically not correct. My calculations are wrong because I assume no fitness gain. I really think he is not fair at all. He has never, never said if he thinks that my calculations are right or wrong from a statistical point of view: that has obviously nothing to do with the fitness gain assumption, because, as I have tried to explain to him (although he probably already knows), statistics is the science of random events, not of causal assumptions. But everything is useless. Instead, I would really appreciate F2XL's feedback on the problem of the calculations, because I feel that it is not right to leave it open. So, I will stay tuned on this old thread, in case someone has to say something pertinent.
gpuccio
June 6, 2008, 10:33 AM PDT
you assume with absolutely no evidence whatsoever that intermediates have no fitness gain.
Name the functional intermediates in the indirect pathway. Also, since when did "intermediates" and "fitness gain" have anything to do with calculating informational bits? You seem to have your own definition of CSI. Or you seem to be asserting that the causal history must be known. F2XL and gpuccio are calculating the probability of an indirect pathway... NOT the informational bits of the flagellum! Bob, that is one giant basic misunderstanding.

I cover calculating CSI over at Overwhelming Evidence. But I'll make it short here. In order to calculate the informational bits used to represent my name "Patrick", you consider that each letter has an information capacity of 8 bits. So the calculation is simple: 8 bits X 7 letters = 56 informational bits. In the same way, each "letter" of the DNA has an information capacity of 2 bits. The DNA sequence in the genome encodes the 42 proteins in the flagellum. So in order to calculate the informational bits for the flagellum: 2 bits X xxx letters = xxx informational bits. I did not look up the number of letters in the sequence, but that gives the basic idea.

I should state the caveat that no one currently knows the exact information content and that this is the MINIMUM information, since my estimate would only include genes. We're still attempting to comprehend the language conventions of the biological code. MicroRNA, etc. would certainly add to the information content for all biological systems. An analogy to computers is how XML, meta tags, and Dynamic Link Libraries add to the core code (and thus informational content) of a program. I would also refer people back to comment #70, which covers other issues. This is perhaps why ID proponents are shy to provide this calculation, since it's not final and since a lower estimate "might" be less than 500 bits (Dembski's UPB). But as I noted, I did not look up the sequence, although I remember there being at least 50 genes involved.
Patrick
June 6, 2008, 09:49 AM PDT
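Patrick's bit-counting above is easy to check directly. A minimal sketch in Python; the 2-bits-per-base capacity is standard (log2 of 4 possible bases), while the specific base count for the flagellum genes is deliberately left unfilled, just as the comment leaves it as "xxx":

```python
# Sketch of Patrick's bit-capacity arithmetic: 8 bits per ASCII letter,
# 2 bits per DNA base. The flagellum's base count is left as a parameter,
# since the thread deliberately does not look it up.
from math import log2

BITS_PER_LETTER = 8
BITS_PER_BASE = int(log2(4))  # = 2

print(len("Patrick") * BITS_PER_LETTER)  # 56 informational bits

def capacity_in_bits(n_bases: int) -> int:
    """Raw information capacity of a DNA sequence of n_bases."""
    return n_bases * BITS_PER_BASE

print(capacity_in_bits(490))  # 980 bits for the thread's 490-base subset
```

Note that this measures raw capacity, which is why Patrick calls his figure a minimum estimate rather than the exact information content.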
gpuccio - the reason I'm concentrating on F2XL is simply that he claimed that he could calculate CSI, and I've been trying to show why his calculations are wrong. Your own calculations are also wrong, for the obvious reason that you assume, with absolutely no evidence whatsoever, that intermediates have no fitness gain.
Bob O'H
June 6, 2008, 08:53 AM PDT
Bob O'H: "If I’m wrong, you would do better to work through my argument and show where and why it’s wrong. At the moment you’re reduced to repeating yourself because you’re not applying yourself to my arguments." Excuse me, but I am repeating myself because, in the last few days, you gave me not only no argument, but indeed no answer. In the old post #183 you just cite again the possibility of fixation. I have answered in detail to that (see for instance my post #155, 184 and 192), giving very detailed arguments, about which you have never commented. Your only two more recent posts (#193 and 194) were answers to F2XL, and did not address in any way my calculations. Your strange behaviour seems to be: post to F2XL to say: "Hey, your calculations are wrong!" (true, but you have not understood why); and then post to me and say nothing of ny calculations (should I believe you suspect they are right?), and generically citing again the problem of gradual selection, without supporting it in any way, and without addressing the specific objections I have given at least three times in the recent history of this thread, in posts specifically addressed to you (once to Patrick, to be precise). To be clear, the two problems are completely separate: the possibility of selection has nothing to do with the calculation of a probability. Although many forget that, statistics can be applied only to random phenomena, to analyze them or, sometimes, to exclude them. If there is a non random cause, probabilities don't apply. So, the possibility of gradual selection has to be ruled out or affirmed on a different level (logical and empirical), but not by probabilistic calculations. I have tried to discuss that, and you have not answered. Instead, the possibility of random causes in each specific context or model have to be evaluated by a correct calculation of probabilities. I have tried to do that for our specific model, and you have never been kind enough to say if you believe I am right or wrong, or at least if you can't make up your mind. So, let's comment your last (and only) "argument", which I could not comment before because it's in your last post. I suppose it should be contained in the following phrase: "That would be true if it were the probability that there was at least one mutation at the locus in a given time, and that the final mutation was to the “correct” base. However, what is calculated is clearly not that: it’s lacking some vital constituents (like a mutation rate)." Would that be an argument? I scarcely can understamd what it means. Bu I will comment it just the same, in case you are expecting to accuse me of not replying... First of all, my calculations to which you refer are exactly what is needed in the model we were discussing, that is "the probability that there was at least one mutation at the locus in a given time, and that the final mutation was to the “correct” base.", or, to put it better (your phrase is rather obscure), they calculate the probability that the final mutation, after the minimum number of events (490) which can generate a 490 nucleotide mutation, can give the correct base at each of the 490 sites. That corresponds to p^490. Then you say: "However, what is calculated is clearly not that: it’s lacking some vital constituents (like a mutation rate)." What argument would that be? The calculation of a probability does not need any vital constituents: it just needs qunatitative data and context (model). 
A mutation rate is part of the model, but it must be taken into account after you have calculated the porbability of the events which the mutations should generate. So, if that can be called an argument, it's just wrong. Anyway, just to repeat myself for the nth time, in my post #190 I have given a simpler and more correct (IMO) way to calculate the probabilities in our discussed model, and I have also taken into account the probabilistic resources implied in the model (including the mutation rate). The result? No comment from you...gpuccio
June 5, 2008, 10:20 PM PDT
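gpuccio's point about where the mutation rate enters can be made numerically: first compute the per-event probability, then multiply by the probabilistic resources. A minimal sketch, using F2XL's 10^55 figure from later in the thread as the resource estimate (an assumption of the thread, not a measured quantity):

```python
# Per-event probability first, probabilistic resources second, as the
# comment above argues. Worked in log10 space to avoid underflow.
from math import log10

p = 1 / (3 * 4.7e6)                 # one specific correct substitution
log10_p_event = 490 * log10(p)      # ~ -3503 for the full 490-base target
log10_resources = 55                # F2XL's estimate of total mutations ever

# Expected number of successes over all trials, in log10:
print(round(log10_p_event + log10_resources))  # ~ -3448, far below 1
```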
The probability p has to be multiplied if we want to have the probability of more combined mutations.
That would be true if it were the probability that there was at least one mutation at the locus in a given time, and that the final mutation was to the "correct" base. However, what is calculated is clearly not that: it's lacking some vital constituents (like a mutation rate). If I'm wrong, you would do better to work through my argument and show where and why it's wrong. At the moment you're reduced to repeating yourself because you're not applying yourself to my arguments.
Bob O'H
June 5, 2008, 08:52 PM PDT
Correction of typo: "the probability of having the 490 correct results after the minimal event (490 mutations) if p^490, as corrctly stated by F2XL." should be: "the probability of having the 490 correct results after the minimal event (490 mutations) is p^490, as correctly stated by F2XL."
gpuccio
June 4, 2008, 03:34 PM PDT
Bob: At risk of repeating myself: a) 1/4^{4.7×10^6} is wrong. It is not the probability for one correct mutation; it is the whole search space of the whole genome, and is not pertinent here. The correct probability for one single correct mutation is 1/(3*(4.7*10^6)), which is a much higher probability. Let's call it p. b) However, the second operation by F2XL is correct, and you are wrong. The probability p has to be multiplied if we want to have the probability of more combined mutations. As I have already said a couple of times (or more), the probability of having the 490 correct results after the minimal event (490 mutations) if p^490, as corrctly stated by F2XL.

You are wrong about the problem of the order. Let's simplify, and calculate the probability of having two correct specific mutations; let's call them A and B. Each of them has a probability p. The probability of having both mutations is p^2. Although each of the two mutations has to happen at a specific site, so, if you want, in a specific spatial order, there is no difference if A happens before B, or vice versa, or if both happen simultaneously. The probability of the two mutations occurring together as a final result is the same: p^2. Anyway, I have suggested an alternative line of reasoning for that calculation, IMO much simpler, in my post #190.
gpuccio
June 4, 2008, 03:32 PM PDT
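The figures in the comment above check out in log space. A quick verification sketch, taking the thread's own assumptions as given (a 4.7×10^6-base genome, three possible substitutions per base, 490 required changes):

```python
# Verifying the order of magnitude in log10 space to avoid underflow.
# Inputs are the thread's assumptions, not measured values.
from math import log10

genome_size = 4.7e6
p = 1 / (3 * genome_size)       # one specific substitution at one site
print(round(log10(p), 2))       # ~ -7.15, i.e. p ~ 7e-8

log10_p490 = 490 * log10(p)     # 490 independent correct substitutions
print(round(log10_p490))        # ~ -3503
```

The sharper value is about 10^-3503; the thread's rounder "1 : 10^3430" comes from approximating log10(1.41×10^7) as 7, so both are the same order of argument.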
F2XL - nope, sorry. You still haven't realised that what you calculated was not what you thought you were calculating. If I follow you, you allow for 490 mutations in total (we can tackle this in more detail later). In 117, you obtain a probability of 1/4^{4.7x10^6}. This is the probability that one mutation happens at one specified position, and that it mutates to the correct base (should it be a 3? I think so, if you condition on the base being wrong to start off with). We'll call this p, anyway. You then take p to the 490th power. What are you calculating here? Well, the probability for the first mutation is the probability that one specified position mutates correctly. The probability for the second mutation is the probability that another specified position mutates correctly. Note that the position that mutates has to be specified too. So, if the order is position A, then B, you can't have B mutate first. Hence, the order of your mutations is important: it's not invariant to permutations of the order. You wanted to calculate something that didn't depend on the order, but I'm afraid you failed: you were out by a factor of 490!, which is a rather large number.
Bob O'H
June 4, 2008, 12:07 PM PDT
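For scale, the 490! factor Bob O'H invokes can be computed via the log-gamma function, since 490! itself overflows ordinary floats. A minimal sketch:

```python
# Magnitude of 490!, the disputed permutation factor, via log-gamma.
from math import lgamma, log

log10_factorial = lgamma(490 + 1) / log(10)  # log10(490!)
print(round(log10_factorial))                # ~ 1107, i.e. 490! ~ 10^1107
```

So the disputed factor is about 10^1107, which readers can weigh against the 10^3400-and-beyond magnitudes being traded elsewhere in the thread.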
I guess the html code for the following paragraph wasn’t working so I’ll repost it to fix it:
Nope, still not working. :-(
Bob O'H
June 4, 2008, 09:08 AM PDT
Patrick: I think that you obviously understand that there is no hope of splitting the transition into 245 or 163 functional intermediates (2-3 mutation steps). In that case, if you imagine that you can split the transition in such a way, and that each mutation can be fixed quickly, we would be in a scenario similar to the "Methinks it's like a weasel" example. Indeed, a coordinated mutation of 2-3 amino acids is perfectly accessible to very rapidly dividing organisms, like bacteria or the malaria parasite, as Behe has clearly shown in his TEOE. We can accept that with prokaryotes, single mutations can be attained easily enough, double ones with a certain ease, given some time, and triple ones probably imply some difficulty, but can be done (especially if you imagine a whole earth covered with proliferating bacteria). The scenario would change dramatically, in the sense of impossibility, for bigger and slower animals, like mammals. But let's stick to bacteria.

The fact is, there are only two ways of splitting the path into 2-3 nucleotide changes:

a) You need hundreds of intermediates, all of them with progressively increasing function of some type, all neatly ordered in the pathway which leads to the final flagellum. Each of them must have enough reproductive advantage that it can be fixed.

b) You know which the correct nucleotides are, and you artificially fix them when they appear. That's the case of "Methinks it's like a weasel". You already have the information, and you just select it.

As option b) is obviously design, let's discuss option a). Again, it is not an option, for 3 different reasons:

1) Logical: there is no reason that in a search space functions should be so strictly and magically related. There is no mathematical model which can anticipate such a structure. Indeed, if it existed, that would be heavy evidence for the existence of God. Moreover, protein functions derive from totally empirical phenomena, like protein folding in a number of very different domains, and there is really no logical reason that a ladder of intermediate functions can "lead" from one function to a different one, and not only in a single lucky case, but practically in all cases.

2) Empirical: that process, or model, has never been observed. The examples cited by Bob are definitely not examples of the splitting of a function into intermediate steps, but rather of single steps leading from one function to a slight variation of the same function.

3) Empirical (again): if the final function is reached through hundreds of intermediate functions, all of them selected and fixed, where are all those intermediates now? Why do we observe only the starting function (bacteria with T3SS) and the final function (bacteria with flagella)? Where are the intermediates? If graduality and fixation are what happens, why can't we observe any evidence of that? In other words, we need literally billions of billions of molecular intermediates, which do not exist. Remember that the premise is that each successive step has greater fitness than the previous. Where are those steps? Where are those successful intermediates? Were they erased by the final winner, the bacterium with the flagellum? But then, why can we still easily observe the ancestor without flagella (and, obviously, without any of the amazing intermediate and never-observed functions)?
gpuccio
June 4, 2008, 07:25 AM PDT
To keep this conversation from dying, let's be nice and assume an indirect pathway where there do exist functional intermediate states within the reach of Darwinian processes that do not have the same function as the final flagellum. Homologs flying everywhere like a tornado, HGT, duplications, all the "engines of variation". Bob hasn't deigned to define any functional intermediates, except to assert that "that's how evolution works!", but let's be nice and assume they're all within 2-3 changes. How would that change the math?
Patrick
June 4, 2008, 06:22 AM PDT
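One way to see what Patrick's assumption would buy, as a toy sketch: if every 2-3-base step really were independently selectable and fixed before the next began, the expected waiting time would grow roughly linearly with the number of steps rather than exponentially with the number of bases. The step size, the fixation assumption, and the per-event probability below are all hypothetical, borrowed from the thread's own numbers, not anyone's measured model:

```python
# Toy comparison for Patrick's question, in log10 space.
# Hypothetical assumptions taken from the thread: p is the chance of one
# specific correct substitution per mutational event; 3-base steps are each
# independently selectable and fixed before the next step begins.
from math import ceil, log10

p = 1 / (3 * 4.7e6)
bases = 490

# (a) No selectable intermediates: all 490 bases in one unguided shot.
print(round(bases * log10(p)))             # ~ -3503

# (b) Selectable 3-base steps, each fixed on success: expected number of
#     3-mutation trials ~ (number of steps) / p^3.
steps = ceil(bases / 3)                    # 164
print(round(log10(steps) - 3 * log10(p)))  # ~ 24, i.e. ~1e24 trials
```

Roughly 10^24 trials would sit well inside the thread's 10^51-10^55 probabilistic resources, which is why gpuccio concedes above that if such a selectable staircase existed the probability calculations would not apply; the whole dispute is over whether those intermediates exist.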
F2XL, Bob O'H, kairosfocus: While we wait for F2XL's answer, I would like to add some thoughts to the discussion. I realize that calculating the probabilistic resources for series of 490 mutations is rather challenging, so I would suggest a different approach, which IMO can simplify the mathematical reasoning.

So, let's forget the whole E. coli genome for a moment, and let's consider just the 490 nucleotides which, according to F2XL's approximate reasoning, have to change in a specific way to allow the transition from the old function to the new, coopted function (the flagellum). Those 490 nucleotides are a specific subset of the whole genome. So, we can reason limiting our calculations to that specific subset. As the original set, the whole genome, is 4.7 * 10^6 bases long, we can say that our subset is about 1 : 10^4 compared to the whole genome. What is the search space of possible configurations of that subset? That's easy. It's 4^490, that is, about 10^295. If we assume that a specific configuration has to be reached, the random probability of achieving it through any random variation event is 1 : 10^295.

Now, let's reason about probabilistic resources. As our subset is smaller than the whole genome, our resources have to be divided by 10^4. In other words, each single mutation in the genome has about a 1 : 10^4 probability of falling in the chosen subset. And there can be no doubt that any single mutation falling in the specific subset determines a new state of the subset, and therefore is "exploring" one of the configurations in the search space. But what are our probabilistic resources? For that, please review F2XL's posts #95, 96, 99 and 102. I will copy here his final results: "To inflate the probabilistic resources even more we will round up to 10 to the 55th power. This number will represent the total number of mutations (in terms of base pair changes) that have happened since the origin of life. We now have established our replicational resources for this elaboration."

Now, I must say that F2XL has been far too generous in calculating that number of 10^55 mutations in the bacterial genome on our whole planet in all the time of its existence. I do believe the real number is at least 20 orders of magnitude lower, and that is evident if you read his posts. Anyway, let's take it for good. Let's remember that we have to divide that number by 10^4. In other words, only 1 in 10^4 mutations will take place in our specific subset. That does not sound like a big issue at this point, after having conceded at least 20 extra orders of magnitude to the "adversary", but let's do it just the same. One should never be too generous with Darwinists! So, our total number of available bacterial mutations in the whole history of earth goes "down" to 10^51. Are we OK here?

Now, it's easy: we have a search space of 10^295 configurations which has been explored, in the whole history of our planet, at best with 10^51 independent mutations. What part has been explored? Easy enough. 1 : 10^244. In other words, all the mutations in our world's history which took place in our relevant subset of 490 nucleotides have only a 1 : 10^244 chance of finding the correct solution. Let's remember that Dembski's UPB is 1 : 10^150! Are we still doubting the flagellum's CSI?

Possible anticipated objections. Only two:

1) What about gradual mutation with functional fixation? That's not an objection. The point here is that we need 490 changes to get to the new function. Fantasies about traversing landscapes will not do. I have answered this line of reasoning in detail in various posts, especially in #184. I have received no answer from Bob about that.

2) The functional configurations of our 490 nucleotides can be more than one. That's true. But let's remember how we arrived at that number of 490 different nucleotides: F2XL assumed a very high homology between the 35 proteins of the T3SS and those of the flagellum (which is probably not true), and reduced to only 1% the nucleotides which really have to change in the 35 genes to achieve the new function. Again, I think that he was too generous in that assumption. That's why I think that we can confidently assume that the functional island in those 490 nucleotides has to be considered extremely small. But how big should it be to change things? Easy: to go back at least to a probability of 1 : 10^150 (Dembski's UPB), which in itself would be enough to affirm CSI, our island of functional states would have to be as big as 10^94 configurations. In other words, those 490 nucleotides should be able to give the new function in 10^94 different configurations! And we would still be at Dembski's UPB value, which is indeed a generous concession to the adversary, if ever I saw one! I really believe that a value of about 1 : 10^50 is more than enough to exclude chance. In that case, our "functional island" would have to be as big as 10^196 configurations to give chance a reasonable possibility to work!

So, to sum up: no possible model of step-by-step selection, no possible island of functionality to "traverse the landscape". The result? Very simple: the flagellum is pure CSI. It is IC. It is designed.
gpuccio
June 4, 2008, 03:55 AM PDT
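The arithmetic in the comment above can be checked line by line in log space. A short verification sketch; all inputs are the thread's stated assumptions (490 relevant bases, F2XL's 10^55 total mutations, a genome about 10^4 times larger than the subset), not independent data:

```python
# Checking the comment's arithmetic in log10 space.
# Inputs are the thread's assumptions, not measured values.
from math import log10

log10_space = 490 * log10(4)   # 4^490 configurations
log10_resources = 55 - 4       # 10^55 mutations, 1 in 10^4 lands in-subset

print(round(log10_space))                          # 295
print(log10_resources - round(log10_space))        # -244: fraction explored

# Island size needed to bring the odds up to Dembski's UPB of 1:10^150:
print(round(log10_space) - log10_resources - 150)  # 94, i.e. 10^94 configs
```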
Gentlefolks: Riding a tiger just now -- little time. I just note, F2, that the Pacific drift analogy is originally GP's, not mine. [Mine was on searching out configs in 1 m^3 vats to get a flyable micro-jet. The same basic issues obtain, cf. the always linked, appendix 1 note 6. The example is an adaptation of Sir Fred's famous tornado in a junkyard assembling a 747, scaled down so diffusion etc. can be seen at work. Observe the link onward to the issues on 2LOT, especially in light of the usual open-systems objections. Bottom line -- config spaces with sparse functional islands and archipelagos cannot be spanned effectively other than by intelligence, per experience and in light of the implications of searches constrained by resources, even on the gamut of the observed cosmos.] Evo Mat advocates ALWAYS, in my experience, seek to divert from these issues. The reason: frankly, the origin of complex, functionally constrained information is the mortal, bleeding wound in evo mat thought. GEM of TKI
kairosfocus
June 4, 2008, 12:43 AM PDT
F2XL: While agreeing with almost everything you say, and with your addressing of Bob's conflicting objections (which I have tried to do too), I have to point out that you should also address my point that I think there is a serious error in your calculation. That is not intended as a criticism at all: my reasoning is exactly the same as yours, and the final conclusions are absolutely the same, but I think we should definitely make clear what the correct mathematical approach is.

If you could please find the time to review my posts #158 and #163, you can see that my idea is that you are wrong when you say (as you have repeated also in your last post): "With 3 possible changes that can happen after a point mutation has occurred on a base pair (duh), and with 4.7 million base pairs in the entire genome, the number of possible changes that can happen to the EXISTING INFORMATION (as gpuccio pointed out, I did make a mistake when I originally assumed 4 possible changes) is calculated by taking 3 to the 4.7 millionth power. This gives us a result of 10 to the 2,242,000th power base pair mutations that have the possibility of occurring."

As I have already pointed out, you should not take 3 to the 4.7 millionth power, but just multiply 3 by 4.7 million. Please refer to my previous posts to see why. That does not change anything, because in the following step you have to take the result to the 490th power, and that gives us a low enough probability (about 1 : 10^3430) to go on safely with our reasoning. As I discussed with Bob, that would represent the probability of attaining that specific coordinated mutation of 490 nucleotides after the minimum mutational event which has the power to cause it: 490 mutations.

To be definitely clear, the "chronological order in which the mutations happen" is not important (although the best results are obtained with 490 simultaneous mutations, which is the only scenario which avoids the possibility of a mutation erasing a previous favourable one), while the "order and site of each specific mutation in the genome" is absolutely important. It is important to distinguish between these two different meanings of "order", because many times Darwinists have based their criticisms on that misinterpretation.

That's all. As I have said, I could be wrong in that calculation, although at this point I don't think I am, at least in the general approach. Some mathematical details could need to be refined. But I would appreciate your feedback on that point, so that we can go on with your reasoning after having reached some agreement on the mathematics involved.
gpuccio
June 3, 2008, 09:54 PM PDT
I guess the html code for the following paragraph wasn't working so I'll repost it to fix it: "I guess all someone has to do to see right through that claim is read this comment to find out why the methods I used don't represent what you think they do."
F2XL
June 3, 2008, 09:03 PM PDT
I guess I'm drawing a lot of attention here.

"That would seem a reasonable evolutionary explanation"

So explain to me how a bacterial flagellum could evolve in a step-by-step manner that involved the change of only one base pair per step. Explain how each change would provide a functional benefit.

"No. You're presenting your model, based on your assumptions."

Completely false. If you've been paying attention since the beginning of the discussion over the flagellum, you would know where the scenario comes from. https://uncommondescent.com/intelligent-design/chance-law-agency-or-other/#comment-289302 My assumptions are based on MATZKE'S assumptions, which are that ALL the homologs can easily show up in a single E. coli cell (highly unrealistic, but that appears to be what he thinks is possible) and cross any hurdle to becoming a working, functioning flagellum much like what we see today.

"Give us evidence that your assumptions are reasonable."

1. They aren't reasonable (nor are they supposed to be). 2. They aren't my assumptions. 3. All in all, the assumptions put forth are biased AGAINST what I'm trying to prove. Especially the idea that all 35 genes that would code for the homologs share a 95% sequence similarity, and that all the homologs would already be present in every last E. coli that has ever existed on earth, and the notion that there have been 10 to the 55th power opportunities to make the changes needed to cross any neutral gaps that we may come across in the process of turning the homologs into the rotary flagellum.

"Oh, and arguments from incredulity or ignorance won't cut it."

I noticed.

"Read the paper I linked to, please. Or, if you don't want to (and there's no compulsion), don't try and make criticisms of something you haven't read."

DOI NOT FOUND isn't what I would call an evolutionary explanation. If you claim it's sufficient to overthrow what I'm claiming then please, feel free to explain and apply it to what Matzke ('xcuse me, I) used as a scenario.

"In reality, it might make a difference, of course, but assuming the order is irrelevant is a decent first approximation."

I agree with you on this one, and yes, if I really was paying close attention to the order in which the 490 base pairs had to change I would have to include an entirely new step in the whole process as well.

"Are you even aware that your calculation assumed a set order?"

I sure hope you aren't one of the many people who insists that Dembski is a "pseudo" mathematician...

"Look at how you set up the calculation - you took the product of the probabilities."

Indeed, if you have multiple events which all have to happen then that is the method you would use. Refer to the information below to see why.

"This assumes a fixed order, as I showed above."

Are you kidding me? I guess all someone has to do to see right through that claim is read this comment to find out why the methods I used don't represent what you think they do. But apparently that wasn't clear enough. Let's see if I can make it any more detailed than I already did.

1. Take 3 pennies (or fair coins).

2. Number each penny with a respective number from 1-3.

3. Now ask yourself the following question: what are the odds that I will flip all tails after I've flipped each coin?

4. With the probability of each individual coin landing tails being roughly 1 chance in 2, the odds that I would get all tails after I flipped all three coins are one chance in 8 (1/8).

5. This is done by taking the individual odds for each coin (1/2) and multiplying them together for the three coins: 1/2 X 1/2 X 1/2 = 1/8.

6. Take note of the fact that it does not matter at all whether you flip the coin numbered "1" or "2" or "3"; the odds of getting all tails when you've flipped all coins are the same. We can see this clearly below in a visual portrayal of all the possible outcomes: hhh hht hth thh htt tht tth ttt

7. You claimed that this didn't prove that when you take into account multiple events and put together their probabilities you must multiply the individual probabilities. The evidence you cite for your claim? "The difference from the coin toss is that in a coin toss, the first toss has to come first (!), so the order can't be permuted." Yeah, no s%#$ the first coin toss HAS to come first, that's why we call it the "first" coin toss! If we applied that kind of logic to mutations then no matter what the outcomes are we would have to argue that they had to happen in a specific order, because the first mutation HAS to come first! By your own reasoning mutations HAVE to happen in a specific order (they don't)!

8. After taking a quick skim of your "evidence" that what I did assumed a set order, you presented the following: "Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Suppose they are independent with probabilities p_A, p_B, p_C. There are six ways in which this could occur: ABC ACB BAC BCA CAB CBA. For the first, the probability is P(A).P(B|A).P(C|A,B) = p_A.p_B.p_C. For the second, the probability is P(A).P(C|A).P(B|A,C) = p_A.p_C.p_B, etc. The total probability is thus 6p_A.p_C.p_B." Again, I'm very sorry if it sounds like I'm making personal attacks here; I'm just frustrated because many Darwinists, or Design critics, accuse us of making the same "mistakes" you did. In this case the mistake you made was that you simply used a straw man of what I was trying to calculate. In your demonstration you calculated the odds of 3 events happening in a certain order. Irrelevant, to say the least, because what I was doing was trying to calculate the odds of ALL those events occurring in the first place. It doesn't matter which events happen in what order; the odds of ALL the events occurring are still the same. The same applies to the 490 mutations: it's not the ORDER in which they occur but WHETHER they occur. The odds of getting tails on a coin flip are one in two, so if you wanted to calculate the odds of getting all tails on X coin flips, you would take the number of outcomes (2) to a power equal to the number of places/times it must occur (X).

9. With 3 possible changes that can happen after a point mutation has occurred on a base pair (duh), and with 4.7 million base pairs in the entire genome, the number of possible changes that can happen to the EXISTING INFORMATION (as gpuccio pointed out, I did make a mistake when I originally assumed 4 possible changes) is calculated by taking 3 to the 4.7 millionth power. This gives us a result of 10 to the 2,242,000th power base pair mutations that have the possibility of occurring. So within the 35 genes that code for the homologs we have made the assumption that there is only a 5% sequencing difference between the info for the homologs and the info for the actual flagellum. We make an even more hopeful assumption that in order to fulfill the 5 criteria previously mentioned, and thus be preservable by selection, there are only 490 base pairs (1% of the total information for the flagellum) which must all make their respective changes. With the odds of any particular point mutation occurring on the order of 10 to the 2,242,000th power (similar to the 1/2 odds with a coin), and with (AT LEAST) 490 mutations that have to occur (similar to the 3 coins which are to be flipped), you take the odds of any given mutation and take that to the 490th power (similar to taking 1/2 to the 3rd power with the three coins).

10. The final result? When taken to the 490th power, 10 to the 2,242,000th power becomes 10 to the 1,098,580,000th power (similar to how 1/2 to the 3rd power becomes 1/8). Like it or not, those are the odds that 490 point mutations will hit all the right base pairs AND make the right changes thereafter.

Can we apply the probabilistic resources now or what?
F2XL
June 3, 2008, 08:54 PM PDT
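F2XL's three-coin example can be enumerated outright. A minimal sketch confirming the 1/8 figure and its order-independence (the enumeration settles only the coin case itself, not which side of the mutation analogy is right):

```python
# Enumerating F2XL's three-coin example: exactly one of the 8 equally
# likely outcomes is "all tails", regardless of the order of the flips.
from itertools import product

outcomes = list(product("HT", repeat=3))      # HHH, HHT, ..., TTT
all_tails = [o for o in outcomes if all(c == "T" for c in o)]
print(len(all_tails), "in", len(outcomes))    # 1 in 8
```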
"F2XL, please explain why I'm wrong, rather than accusing me of ignorance about probability theory."

I would like to quickly note, Bob, that I wasn't trying to attack you personally. I just felt like I made my explanation of why the order of events occurring isn't important too vague. I think maybe I should've given a more direct approach to what you said in 129.

"I first note to you that F2 seems to have very good reason, in an "Expelled" world, for not revealing a lot about himself."

Good observation. :) I try and keep a lot of this stuff to myself except for in online discussions; I'm just hoping I can get tenure before the system expels me first. ;) I gotta hand it to you, KF, your analogy of drifting from island to island pretty much sums it up. Sure, there may be many "endpoints", but that doesn't mean that you can realistically reach any of them with limited (probabilistic) resources. I know it sounds crazy but I actually haven't gotten around to reading Meyer's infamous article. I've skimmed through parts of it but I guess I better print it out and read the whole thing some time by the end of the month.

"gpuccio - quite simply, your assumption that there are only 490 mutations, because you need 490 bases to change, is absurd. You're assuming that the only mutations that could happen were at those positions."

Bob, I really do apologize if I sound like I'm attacking you personally, but I'm getting to the point where I think maybe you haven't actually read through what I was doing. As gpuccio pointed out, he was basing his assumptions off of mine. And just so you and everyone else are aware, I pointed out VERY CLEARLY BEFORE that mutations can happen anywhere in the entire 4.7 million base pair genome. The assumption that you only need 490 particular point mutations to occur (equivalent to 1% of the information for the entire flagellum, and a hundredth of a percent (.01%) of the genome for the entire E. coli) doesn't HELP what I'm trying to prove; it dumbs down the problem that selection faces by exaggerating the amount of information those 490 base pairs really code for.

"gpuccio - sorry, but you are assuming only 490 mutational events. If there are more, you have to calculate the probability that out of the N events, 490 are in the "correct" position."

I would highly recommend you follow gpuccio's advice as follows: "If, instead, you want to argue how F2XL got the number on which those calculations are made, that is the necessity of 490 specific mutations to the flagellum, then you should discuss F2XL's previous posts, which detailed that reasoning."
F2XL
June 3, 2008, 04:02 PM PDT
Bob O'H: I have not followed in detail all the reasoning by F2XL up to now. If I were to sum it up (but I could be wrong) I would say:

1) Let's suppose that the flagellum is derived from some other functional ensemble by cooption (the usual "argument" against its irreducible complexity in the Darwinian field). I suppose the general idea is that it is derived from the T3SS.

2) Let's avoid all the general objections to that relationship (which came first, and so on), and let's assume the derivation model (I think F2XL did that).

3) Obviously, the 35 proteins in the flagellum are not the same as those in the T3SS. Here is where the Darwinian discourse is completely unfair. At best, many of them (but not all) present homologies with other proteins in the T3SS. That means that there are similarities which are not random, but they could be easily explained on a functional basis and not as evidence of derivation.

4) However, let's assume derivation as a hypothesis. I think F2XL has reasoned very simply: we have 35 proteins which have changed. If there were no change, bacteria with the T3SS would have the flagellum, but that's not the case. How big is the change? Here F2XL has given an approximate reasoning, very generous indeed towards the other side, assuming high homology between the 35 protein pairs, calculating the number of different amino acids between them according to that approximation, and then assuming that only a part of the amino acid changes (10%, if I remember well) is really functionally necessary for the change from T3SS to flagellum. That's how that number of 490 specifically necessary mutations comes out. OK, it's an approximation, but do you really think it's wrong? I think, instead, that F2XL has been far too accommodating in his calculations. The real functional difference is probably much higher. We are speaking of 35 proteins which have to change in a coordinated way to realize a new function. That's the concept of cooption. I have always found that concept absolutely stupid, but that's what we are discussing here.

5) That's why we need our coordinated mutation of 490 nucleotides in the whole genome: to get to the new function from the old one. Again, that's the spirit of cooption. That coordinated mutation, according to my calculations, which at this point I think you are accepting, as you have given no specific objection to the mathematics, has a probability of the order of 1 : 10^3000 of being achieved by the minimum necessary mutational event: 490 independent mutations.

6) I see that in your last post, instead of saying anything against that reasoning, or against those numbers, you are going back to another kind of objection, which I have already answered in detail earlier in this thread (see post #155), a discussion you interrupted briskly with the following: "Folks, you're now throwing examples at me that have nothing to do with the problem we were discussing. I don't want to get side-tracked from F2XL's problem, so I won't respond here. I'm sure another post will appear at some point to discuss these matters further. Sorry for this, but I'd like to find out if F2XL's calculation of CSI is valid. If we wander off onto other topics, he might decide we're not interested any more, and not reply."

Well, now you should find out if my calculation of CSI is valid. I have been extremely detailed. We are looking at a coordinated change which has such a low random probability that, in comparison, the achievement of Dembski's UPB becomes a kid's game. That's obviously CSI at its maximum. And we are only considering changes in the effector proteins, and not in regulation, assembly, and so on.

At this point, probably not knowing what else to say, you go back to the concept of gradual fixation. But what are you speaking of? Please explain how you pass from one function (T3SS) to another (the flagellum) through 490 or so single amino acid mutations, each one fixed in the name of I don't know what function increase. Are you thinking of, say, 245 gradual mutations improving the function of the T3SS at single steps, and then 245 more single mutations gradually building the function of the flagellum? What a pity that, of those 490 or so intermediaries, there is not a trace, either in existing bacteria, or in fossils, or in general models of protein functions. Or, if we want to exercise our fantasy (why not? in time I could become a good Darwinist), we could split the cake in three: 150 mutations improving the T3SS, 150 more to build some new, and then lost, intermediary function (double cooption!), and then 190 to build up the flagellum. Obviously, all that would happen in hundreds of special "niches", each of them with a special fitness landscape, so that we can explain the total disappearance of all those intermediary "functions" from the surface of our planet! Do you really believe all that?

So, finally, the issue is simple. If: a) my calculations are even grossly right, and we are in front of random probabilities of the order, at best, of 1 : 10^3000, and b) you cannot give any reasonable model of gradual mutation and fixation, least of all any evidence of it, then: I think we are finished. What else is there to discuss?
gpuccio
June 3, 2008, 02:55 PM PDT
gpuccio - I don't see the point of calculating the probability of having the right result with 490 mutational events. How does that relate to any process we see in evolution? In reality, there is a constant process of mutation and fixation, so surely you should be considering that.
Bob O'H
June 3, 2008, 09:58 AM PDT
Bob O'H: That's perfectly correct. That would be the part regarding the computational resources. But calculating the probability of having the right result with 490 mutational events is the first step. Do you agree with my result, at least as an order of magnitude? Take notice, moreover, that if you multiply the mutational events, each new event has approximately the same probability of finding a correct solution or of destroying a previous one... Indeed, for my first calculation to be correct, we would have to assume that the 490 mutational events are simultaneous; otherwise we would have to take into account the possibility of such an "interference". While with the first 490 events that possibility is negligible, it could become higher as you multiply the number of mutations, as long as you have no way of "fixing" the positive results already found.
gpuccio
June 3, 2008, 09:18 AM PDT
gpuccio - sorry, but you are assuming only 490 mutational events. If there are more, you have to calculate the probability that out of the N events, 490 are in the "correct" position.
Bob O'H
June 3, 2008, 09:07 AM PDT
Bob O'H: Wrong. I never assumed that. I just started from a definite point in F2XL's reasoning, where he had assumed that, and you had not objected. Please review the posts, and you will see that I did not comment on those starting numbers; I just tried to correct the following calculations, because I felt that F2XL was wrong on a couple of important points, and that your objections were wrong too. So, if you want to stick to the problem of those specific calculations, I would like to have your opinion on whether my calculations are right. If, instead, you want to argue how F2XL got the number on which those calculations are made, that is the necessity of 490 specific mutations to the flagellum, then you should discuss F2XL's previous posts, which detailed that reasoning. Anyway, as you opened the discussion on the calculations themselves (correctly, I would say, because I think F2XL's calculations were wrong, but still IMO with wrong arguments on your part), I think you should now contribute to that specific part of the discussion, even if you don't agree on the premises.
gpuccio
June 3, 2008, 06:25 AM PDT
gpuccio - quite simply, your assumption that there are only 490 mutations, because you need 490 bases to change, is absurd. You're assuming that the only mutations that could happen were at those positions.
Bob O'H
June 3, 2008, 05:59 AM PDT
Bob O'H, F2XL and kairosfocus: I have posted a detailed, and motivated, version of the calculations under discussion here at post #158, and repeated it, with some further comment, at post #163. I would really appreciate it if you could comment on that, so that we can try to find an agreement based on facts. I am not assuming that I am right, just suggesting that we start discussing the problem as it is. I think we can find the right approach without any question of authority, although any technical comment from a mathematician or statistician would be appreciated. Statistical problems can be tricky, but they can be solved. There is no reason to argue about something which can certainly be understood correctly by all of us.
gpuccio
June 3, 2008, 05:24 AM PDT
OOPS: 4^100 mns
kairosfocus
June 3, 2008, 03:13 AM PDT