Uncommon Descent: Serving the Intelligent Design Community

Ken Miller’s Strawman No Threat to ID


Earlier today the News desk posted a video of Brown University biochemist Ken Miller’s takedown of ID. This is a fascinating video and it is worthwhile to post a transcript for those readers who do not have time to stream it. The video is excerpted from a BBC documentary called, with scintillating journalistic objectivity, The War on Science.

BBC Commentator: In two days of testimony [at the Dover trial] Miller attempted to knock down the arguments for intelligent design one by one. Also on his [i.e., Miller’s] hit list: Dembski’s criticism of evolution, that it was simply too improbable.

Miller: One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what? That’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.

BBC Commentator: For Miller, Dembski’s math did not add up. The chances of life evolving just like the chance of getting a particular hand of cards could not be calculated backwards. By doing so the odds were unfairly stacked. Played that way, cards and life would always appear impossible.

Now, to be fair to Miller, in a letter to Panda’s Thumb, he denies that his card comment was a response to Dembski’s work. He says poor BBC editing only made it appear that he was responding to Dembski, when really, “all I was addressing was a general argument one hears from many ID supporters in which one takes something like a particular amino acid sequence, and then calculates the probability of the exact same sequence arising again through mere chance.”

The problem with Miller’s response is that even if one takes it at face value, he still appears mendacious, because no ID supporter has ever, as far as I know, argued “X is improbable; therefore X was designed.” Consider the example advanced by Miller, a sequence of 52 cards dealt from a shuffled deck. Miller’s point is that extremely improbable non-designed events occur all the time, and therefore it is wrong to say extremely improbable events must be designed. Miller blatantly misrepresents ID theory, because, as I noted above, no ID proponent says that mere improbability denotes design.

 
Suppose, however, your friend appeared to shuffle the cards thoroughly and dealt out the following sequence: all hearts in order from 2 to Ace; all spades in order from 2 to Ace; all diamonds in order from 2 to Ace; and then all clubs in order from 2 to Ace.  As a matter of strict mathematical probability analysis, this particular sequence of 52 cards has the exact same probability as any other sequence of 52 cards. But of course you would never attribute that sequence to chance. You would naturally conclude that your friend has performed a card trick where the cards only appeared to be randomized when they were shuffled. In other words, you would make a perfectly reasonable design inference.

What is the difference between Miller’s example and my example? In Miller’s example the sequence of cards was only highly improbable. In my example the sequence of cards is not only highly improbable, but also it conforms to a specification. ID proponents do not argue that mere improbability denotes design. They argue that design is the best explanation where there is a highly improbable event AND that event conforms to an independently designated specification.
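
To make the contrast concrete, here is a minimal Python sketch. It is an illustration added here, not something from the original post: every 52-card ordering is equally improbable, but only a deal that also matches an independently stated specification, such as the suit-ordered deal described above, would prompt the design inference.

```python
from math import factorial, log2

# Probability of dealing any one exact 52-card sequence from a fair shuffle.
p_any_sequence = 1 / factorial(52)
print(f"P(specific 52-card order) = {p_any_sequence:.3e}")       # ~1.24e-68
print(f"improbability in bits     = {log2(factorial(52)):.1f}")  # ~225.6 bits

# An independently stated specification: suits in a fixed order, each suit 2..Ace.
ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
suits = ['hearts', 'spades', 'diamonds', 'clubs']
specified_deal = [(r, s) for s in suits for r in ranks]

def matches_specification(dealt):
    """True only if the dealt order equals the pre-stated target sequence."""
    return dealt == specified_deal

# Any random deal is equally improbable; only a deal that also hits the
# specification (exactly one ordering here) would warrant the inference.
print(matches_specification(specified_deal))  # True by construction
```

The sketch only makes the point that "improbable" and "improbable and specified" are different tests; the probability of any single ordering is the same roughly 1-in-8x10^67 either way.
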

Here’s the interesting part. Ken Miller has been debating design proponents all over the country for many years. He knows ID theory very well. Yet instead of choosing to take ID’s arguments head-on, he constructs a strawman of ID theory and knocks it down.

I am not a scientist or a mathematician. I am a lawyer, but perhaps my legal training has given me an invaluable tool in the Darwin-ID debate, the tool Phil Johnson calls a “baloney detector.” And my baloney detector tells me that Ken Miller is full of baloney. Miller knows that no reputable ID proponent equates mere “improbability” with “design.” Yet there he is declaring to all the world that it is a “general argument” of “many ID supporters.”

I have to wonder. If, as the Darwinists say, ID theory is so weak, why don’t they take it on squarely? Why do they feel compelled to attack a strawman caricature instead of the real deal? Indeed, Darwinists’ apparent fear of taking on ID on its own terms is one of the things that gives me great confidence in the theory, and that confidence will be shaken only if Darwinists ever begin to knock down the real ID instead of their ridiculous caricatures of the theory.

Comments
"But if A and B are individually functional and naturally selectable, the scenario is different. Each of the two shorter components must be evaluated for its functional complexity, and we can no more just add the bits of one to those of the other if they recombine to make a new, different functional protein. The system becomes different, because the expansion of A and B because of the reproductive advantage that each of them confers redefines the probabilistic resources, and we should consider separately: - The probability of getting A in a random system - The probability of getting B in a random system - The probability of a functional recombination of A and B, if both A and B are selected functionally and expanded in the population." Excellent. You're starting to get it. ID can't simply go from big protein to big numbers. The calculation of fsci must be for a protein that cannot be deconstructed into simpler, functional components. Could you name me one? Note the databases are FULL of domain fusions, where proteins that work together are fused to a larger protein in another. http://www.biomedcentral.com/1471-2105/5/161 Why do you think repeat proteins abound in nature? Why are proteins built of simpler domains, which in turn are built of simpler motifs? So to make the design inference, you need a protein that has been rigorously demonstrated to be UNABLE to have been evolved from simpler components. Considering the traceability of examples to the contrary, and even de novo genes, I'd love an example and some calculations.DrREC
December 16, 2011, 02:05 PM PDT
DrREC:

"This doesn't make you hesitate?"

No. Again, you seem not to see the difference between computing a probability in a purely random system and computing what happens if NS can be shown to intervene. That's why my argument is always about "basic protein domains", those for which no explicit intervention of NS in precursors is known.

Now, let's say that a protein AB contains two different functional domains: A and B. If no single domain (A or B) can be shown to be individually functional and naturally selectable, we can still treat the whole protein as one functional object and compute its total functional complexity, which will be the sum of the bits of functional complexity in A and B, because the protein is an irreducibly complex object.

But if A and B are individually functional and naturally selectable, the scenario is different. Each of the two shorter components must be evaluated for its functional complexity, and we can no longer simply add the bits of one to those of the other if they recombine to make a new, different functional protein. The system becomes different, because the expansion of A and B, owing to the reproductive advantage that each of them confers, redefines the probabilistic resources, and we should consider separately: the probability of getting A in a random system, the probability of getting B in a random system, and the probability of a functional recombination of A and B, if both A and B are selected functionally and expanded in the population.

There is no doubt that the functional natural selection of A and B would represent a valid path to AB, not necessarily a credible path, but one with higher probabilities of success.

Moreover, you are not right in equating a protein with less than, say, 150 bits of FI with a "natural" protein, and one with more than that with "the appearance of design". It would be more correct to say that a protein with less than 150 bits is a case where we cannot explicitly infer design, while the second case is one where design can be inferred. And again, the correct model for inferring design is that of basic protein domains. At present, the vast majority of them exhibit much more than 150 bits of FI, and for none of them has a gradual, naturally selectable path been shown. Multi-domain proteins can certainly be studied too, but as said, the analysis becomes more complex.

gpuccio
December 16, 2011, 08:26 AM PDT
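
As a rough illustration of the bookkeeping gpuccio describes in the comment above, here is a short Python sketch. The bit values are invented for the example, not measured quantities from the thread.

```python
# Hypothetical functional-bit values for two domains; purely illustrative.
bits_A = 120.0       # assumed functional complexity of domain A
bits_B = 110.0       # assumed functional complexity of domain B
bits_fusion = 20.0   # assumed complexity of the A+B recombination step itself

# Case 1: neither A nor B is independently functional/selectable.
# The whole AB protein is treated as one object, so the bits simply add.
bits_AB_irreducible = bits_A + bits_B          # 230 bits

# Case 2: A and B are each functional and naturally selectable.
# The three probabilities are then assessed separately rather than being
# folded into a single random-origin figure.
separate_assessments = {
    "A arising in a random system": bits_A,
    "B arising in a random system": bits_B,
    "functional recombination of expanded A and B": bits_fusion,
}

print(f"irreducible AB: {bits_AB_irreducible} bits")
for step, bits in separate_assessments.items():
    print(f"{step}: {bits} bits")
```
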
"What’s hilarious in that? It is perfectly natural that some proteins, especially the shorter ones, have functional complexity below the UPB. And so? Why are you sa amused by something that is perfectly expectable?" Sorry, I stopped short. What is hilarious is that if two domains, say of the 20% below even a 150 bit limit recombine, they are suddenly above it. So two 'natural' proteins undergoing a highly probable natural process, yields a product with the appearance of design. This doesn't make you hesitate?DrREC
December 15, 2011, 08:58 PM PDT
from Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):
[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.
Here is a formal way of measuring functional information: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, "Functional information and the emergence of biocomplexity," Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007). See also: Jack W. Szostak, “Molecular messages,” Nature, Vol. 423:689 (June 12, 2003).

Joe
December 15, 2011, 06:43 PM PDT
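
For reference, the functional-information measure cited above is usually written I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible configurations achieving at least the specified degree of function Ex. A minimal sketch with toy numbers (assumptions for illustration, not data from the cited papers):

```python
from math import log2

def functional_information(n_functional, n_total):
    """
    Functional information in the sense of Hazen et al. (2007):
    I(Ex) = -log2( F(Ex) ), where F(Ex) is the fraction of all possible
    configurations achieving at least the specified degree of function Ex.
    """
    fraction = n_functional / n_total
    return -log2(fraction)

# Toy numbers only: suppose 10,000 sequences out of a space of 20**20
# meet the functional threshold.
print(f"{functional_information(10_000, 20**20):.1f} bits")  # ~73.2 bits
```
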
Is there a theory of archaeology? How about a theory of forensic science? Also, ID is NOT anti-evolution and is perfectly OK with organisms evolving by design.

Joe
December 15, 2011, 06:40 PM PDT
DrREC (et al.): I apologize for the lack of order in my posting: for a better understanding, post 21.2 should be read before post 21.1.2.

gpuccio
December 15, 2011, 04:17 PM PDT
DrREC:

Let's go to the papers.

"The first paper demonstrates finding functional intermediates, which you dispute."

No. Read what I have written. The first paper is about finding evolutionary continuity in protein families, exactly the point on which the Durston method is based. I do believe in neutral evolution of protein families, and maybe in limited functional micro-evolution at the level of a few AAs, especially at the active site level. That can be discussed, as Axe has done in a recent paper. What I do dispute is that any functional, naturally selectable path has been presented for the origin of basic protein domains (those considered by Durston). I am sure you can appreciate the difference.

"The second is a proof of principle, that small peptides can assemble into domains."

And so? It is certainly not a path that shows that those small peptides were functional precursors, naturally selectable and naturally selected in natural history. As you say, it is a "proof of principle" that intelligent engineering can build more complex structures from simpler ones. Thank you for the news.

"What do you expect science to do?"

Well, let me think a moment... Maybe find in the proteome the precursors that are believed to exist, define their function, explain why they are naturally selectable and in the past have given a reproductive advantage to some population, show how they could have expanded, and compute the probability that they could anyway assemble into a bigger structure by RV... Am I really expecting too much? Can science do something to prove its theories, or must they remain forever fairy tales, accepted only in the name of academic authority?

"Hilariously, the number of fits for some whole domains and enzymes is well below the universal probability bound. Oops."

What's hilarious in that? It is perfectly natural that some proteins, especially the shorter ones, have functional complexity below the UPB. And so? Why are you so amused by something that is perfectly to be expected?

Moreover, the UPB is not an appropriate threshold for a realistic biological system. I have discussed that here: https://uncommondescent.com/intelligent-design/id-foundations-11-borels-infinite-monkeys-analysis-and-the-significance-of-the-log-reduced-chi-metric-chi_500-is-500/comment-page-1/#comment-410355 proposing a biological threshold of 150 bits for biological systems. Even so, it is perfectly to be expected that some smaller proteins are under that threshold too.

But, just to sum it up, of the 35 protein families analyzed by Durston:
- 6 (17%) are above the 1000-bit limit
- 11 (31%) are above the 500-bit limit (Dembski's UPB)
- 28 (80%) are above my proposed 150-bit limit

I find all that extremely interesting and significant, and not hilarious at all. Different sensibilities, maybe.

gpuccio
December 15, 2011, 04:12 PM PDT
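
The percentages gpuccio quotes can be checked directly from the stated counts. The snippet below only recomputes his figures; it does not reproduce Durston's table itself.

```python
# Counts of protein families above each bit threshold, as stated above.
n_families = 35
counts = {1000: 6, 500: 11, 150: 28}

for threshold, count in counts.items():
    print(f"> {threshold} bits: {count}/{n_families} = {count / n_families:.0%}")
# > 1000 bits: 6/35 = 17%
# > 500 bits: 11/35 = 31%
# > 150 bits: 28/35 = 80%
```
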
DrREC: I have been away one day, and you are now famous! :) Anyway, I have no time now to read "your" threads, so for the moment I will just answer a couple of points here.

fsc = functionally specified complexity
fcsi = functionally complex specified information
dFSCI (the term I usually use) = digital functionally specified complex information
CSI = complex specified information

The concept is the same. The letters may vary. The only meaningful differences are, IMO:

a) CSI is the widest concept: any information that is complex and specified. That is, usually, Dembski's concept. Various kinds of specifications can apply.

b) FSCI is a subset, where the specification is exclusively functional: the recognition of a function implemented by the information. That is the subset most appropriate for biological information.

c) dFSCI (my term) is still a subset, where the information is digital. It applies well to biological information in the genome and proteome, and it can be treated more simply.

I hope that clarifies. The unit of complexity is always the same: bits, expressed as -log2 of the functional complexity. As they express the bits connected to the function, Durston calls them Fits (functional bits). No difference here.

You say: "Seems like the calculations are pretty different."

No. There are two ways to approximate the functional space of proteins. One is to study the structure-function relationship in specific cases and to reason on the available data (that has been pursued mainly by Axe). The other, the Durston method, is to use the existing proteome and compute the reduction in Shannon uncertainty at each AA site.

"By the way, the method estimates the functional portion of sequence space by known sequences."

Correct. Sequences that are the result of billions of years of evolution.

"Since there are many sequences that have no function"

And so? Would that be an objection?

"and thus may share the same function"

It's not clear what you mean.

"and since evolution has likely not explored all of sequence space"

That is probably true, at least in part. That's why the Durston method is an approximation of the measure, not an exact measure. The best approximation available. There are some assumptions, all of them very reasonable. One of them is that the functional sequence space for the analyzed functions has been reasonably explored. And anyway, the application of Shannon's method gives a very good approximation, if we assume (reasonably) that existing proteins with that function are a representative sample of all possible proteins with that function.

"and since most sequences are from evolutionarily related organisms"

And so? That's exactly what we are looking for: the exploration of a functional space through neutral evolution, which preserves the function. And many of those families are very old. Many of them are LUCA families. At that level, all organisms are evolutionarily related, if we accept common descent (as I do).

"this is a pretty weak technique"

I completely disagree, for all the reasons I have given. It is a brilliant technique, and it really measures what it tries to measure: the functional complexity of protein families. The great differences in mean complexity per site are extremely interesting, pointing to the important fact that not all proteins have the same level of functional complexity in relation to their raw sequence length. The Durston method is brilliant, simple and powerful. It is rejected by Darwinists for mere ideological reasons.

You started with a simple question: "First, you keep acting as though fcsi is actually calculable, in a meaningful manner. I've never seen you do it. Perhaps take human aldolase reductase, and walk me through the process. It might clarify the process. What is its specificity? What is its complexity?"

I have given a simple answer, but it seems that it is not comfortable for you. At least, please, admit that fcsi is actually calculable, and has been calculated, even if in your opinion the method is weak. That would be a more correct position.

gpuccio
December 15, 2011, 03:53 PM PDT
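
A rough sketch of the per-site calculation gpuccio attributes to Durston above: the reduction in Shannon uncertainty at each aligned position, summed over positions. This is a simplification that ignores gaps, sample-size corrections, and the null-state details of the actual paper, and it uses a made-up toy alignment.

```python
from math import log2
from collections import Counter

def fits_from_alignment(columns):
    """
    At each aligned position, the reduction in Shannon uncertainty is
    H_null - H_observed, with H_null = log2(20) for 20 amino acids.
    Summing over positions gives a functional-bits (Fits) style estimate.
    """
    h_null = log2(20)
    total = 0.0
    for column in columns:                     # column = residues at one site
        counts = Counter(column)
        n = len(column)
        h_site = -sum((c / n) * log2(c / n) for c in counts.values())
        total += h_null - h_site
    return total

# Tiny invented alignment of 4 sequences, 3 sites (illustration only).
alignment = ["MKV", "MKI", "MRV", "MKV"]
columns = list(zip(*alignment))                # transpose to per-site columns
print(f"{fits_from_alignment(columns):.2f} Fits")   # ~11.35 Fits
```
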
Consider pulsars - stellar objects that flash light and radio waves into space with impressive regularity. Pulsars were briefly tagged with the moniker LGM (Little Green Men) upon their discovery in 1967. Of course, these little men didn't have much to say. Regular pulses don't convey any information--no more than the ticking of a clock. But the real kicker is something else: inefficiency. Pulsars flash over the entire spectrum. No matter where you tune your radio telescope, the pulsar can be heard. That's bad design, because if the pulses were intended to convey some sort of message, it would be enormously more efficient (in terms of energy costs) to confine the signal to a very narrow band. Even the most efficient natural radio emitters, interstellar clouds of gas known as masers, are profligate. Their steady signals splash over hundreds of times more radio band than the type of transmissions sought by SETI.- Seth Shostak
Joe
December 15, 2011, 12:59 PM PDT
OK, we've got an event, say a pulsar transmission. Let's assume it is complex.

"The next step is to calculate both the probability of that preliminarily identified target against all other possible patterns (calculate specificity) and then compare against the UPB."

All other patterns of what? Just all other patterns?

DrREC
December 15, 2011, 10:44 AM PDT
That is incorrect. The fact that a "target" (specified event) may exist does not on its own indicate design. Up to this point in design detection there is no assumption of design. It may be designed or it may not be designed. The next step is to calculate both the probability of that preliminarily identified target against all other possible patterns (calculate specificity) and then compare against the UPB. At this point, depending on the calculation, and if the pattern is not defined by the physical properties of the medium in which it exists, intelligent design can be determined to be the most likely explanation.

CJYman
December 15, 2011, 10:35 AM PDT
Dude, if you are playing poker then the specification is set by the rules of the game. That said, there isn't a design inference if someone gets dealt one royal flush. But if someone gets dealt ten pat hands in a row, only a moron wouldn't suspect something is wrong.

Joe
December 15, 2011, 04:24 AM PDT
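
Joe's point can be put in the thread's own units: a single royal flush is improbable but modest in bits, while ten such hands in a row would clear even the 150-bit threshold discussed elsewhere in the thread. The sketch below reads "pat hand" as "a hand as rare as a royal flush" purely for illustration.

```python
from math import comb, log2

total_hands = comb(52, 5)          # 2,598,960 five-card hands
p_royal = 4 / total_hands          # 4 royal flushes, ~1.54e-6

bits_one = -log2(p_royal)          # ~19.3 bits: improbable, but no inference
bits_ten_in_a_row = 10 * bits_one  # ~193 bits, assuming independent deals

print(f"one royal flush: {bits_one:.1f} bits")
print(f"ten in a row:    {bits_ten_in_a_row:.1f} bits")
```
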
Or is it FSCIO? I forget... And why not use your own units instead of Fits, if this is established and easily calculable?

DrREC
December 14, 2011, 03:54 PM PDT
No, they are describing a role in a process. Determining that role is "specified", i.e., a target, is inserting a design assumption into your design detector. Can you provide me a metric of specification that doesn't make this assumption? Try it with the poker analogy.

DrREC
December 14, 2011, 02:56 PM PDT
Sorry, I guess that is dFSCI. Somehow the calculation in the paper and your description above seem quite at odds...

DrREC
December 14, 2011, 02:53 PM PDT
What function does tRNA serve in protein synthesis? When biologists described it as an "adapter molecule", were they choosing its design?

Upright BiPed
December 14, 2011, 02:49 PM PDT
Is fsc equal to fcsi? Seems like the calculations are pretty different.

By the way, the method estimates the functional portion of sequence space by known sequences. Since there are many sequences that have no function (and thus may share the same function), and since evolution has likely not explored all of sequence space, and since most sequences are from evolutionarily related organisms, this is a pretty weak technique. Hilariously, the number of fits for some whole domains and enzymes is well below the universal probability bound. Oops.

The first paper demonstrates finding functional intermediates, which you dispute. The second is a proof of principle, that small peptides can assemble into domains. Yes, it uses engineering and design. What do you expect science to do? Wait around and observe for millions of years? This could be the dumbest of all ID arguments: that experiments are designed!

DrREC
December 14, 2011, 02:42 PM PDT
Go with this one:

"A straight flush is an interesting example: out of 2.6 million poker hands, there are 40 straight flushes. Which is the specification: getting one of them, or any of them? Or any hand better than your opponent's? Choosing the specification inserts a design assumption: that 1 of the flushes, or all of them, are what was 'specified.'"

And answer it.

DrREC
December 14, 2011, 02:35 PM PDT
"“to detect design, I recognize a function, define it" You choose the design.DrREC
December 14, 2011, 02:34 PM PDT
DrREC:

"I find this reply a bit unusual."

I take that as a compliment.

"First, you keep acting as though fcsi is actually calculable, in a meaningful manner. I've never seen you do it. Perhaps take human aldolase reductase, and walk me through the process. It might clarify the process. What is its specificity? What is its complexity?"

Durston has done it for me (and for you). Look here: http://www.tbiomed.com/content/pdf/1742-4682-4-47.pdf

In Table 1, you find the computation of functional complexity in Fits (functional bits) for 35 different protein families. Let's take one as an example: Ribosomal S7. Length: 149 AAs; number of sequences examined for the computation: 535; null state (search space): 644 bits; functional complexity: 359 Fits.

As you can see, it's not me who "keep[s] acting as though fcsi is actually calculable, in a meaningful manner". It is.

"Wouldn't it be necessary to rule them out to make a design inference?"

Absolutely not. Darwinists have a theory that depends on the existence of those functional intermediates. It's not my theory. Darwinists have the obligation to show that those intermediates exist. We have no obligation to "rule them out", any more than we have an obligation to rule out the existence of unicorns. Simple epistemology.

"So until evolution is demonstrated for something, to your satisfaction, you assume design?"

Yes, I infer (not "assume"; epistemology, again!) design as the best explanation, because it explains well, and because there is no other explanation.

"Why not do the actual work, as design scientists, and determine the specificity and complexity?"

As shown, it has been done.

The first paper you quote is completely non-pertinent (and even extremely speculative). The reason is simple, and you find it in the methodology part: "We start with a set of homologous proteins connected by a known evolutionary tree T, and the amino acids found at a given location in the previously aligned sequences of the homologous proteins." Well, either you have not read the paper, or you have not understood my argument. Here, they are debating the possible ancestor protein of a specific protein family, as can be seen from the words "homologous proteins connected by a known evolutionary tree". My argument, instead, is clearly (if you have read my words) about the generation of basic protein domains, which have no homology with one another. Is that clearer this time?

The second paper is even more speculative, and in no way shows an evolutionary path with selectable intermediates, if not at complete fairy-tale level. If that is the best evidence you can gather, I am really happy I am in the opposite field. However, the paper is an interesting piece of top-down protein engineering, which could be interesting for Petrushka :) "To address this question, a unique 'top-down symmetric deconstruction' strategy was utilized to successfully identify a simple peptide motif capable of recapitulating, via gene duplication and fusion processes, a symmetric protein architecture." Top-down protein engineering, Petrushka! Can you hear me?

Finally, you say: "In detecting design, you assign the design. It is really simply that stupid."

There is definitely something stupid in that remark, but out of courtesy I will not say what. The simple truth is that "to detect design, I recognize a function, define it, and compute the complexity necessary for the implementation of that function. If the complexity is high enough, I infer design as the best explanation." Does that sound like the same thing?

If to you it does, then there is no hope...

gpuccio
December 14, 2011, 02:16 PM PDT
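
The Ribosomal S7 numbers gpuccio cites can be sanity-checked: the "null state" is simply the length times log2(20). A short sketch follows; the 359-Fits figure is taken from the table as quoted, not recomputed here.

```python
from math import log2

length_aa = 149                        # Ribosomal S7 length, as quoted
null_state_bits = length_aa * log2(20)
print(f"null state: {null_state_bits:.0f} bits")   # ~644 bits, matching the table

fits = 359                             # functional complexity reported for S7
# A value of 359 Fits implies that roughly 2**-359 of the 20**149 sequence
# space is estimated (by Durston's method) to carry the S7 function.
print(f"implied functional fraction ~= 2**-{fits}")
```
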
Let's go back to this post of mine, a simple example:

"Because in determining if something is designed, you start from the assumption that it is designed. A straight flush is an interesting example: out of 2.6 million poker hands, there are 40 straight flushes. Which is the specification: getting one of them, or any of them? Or any hand better than your opponent's? Choosing the specification inserts a design assumption: that 1 of the flushes, or all of them, are what was 'specified.'"

In nature, this is even clearer. A single protein 100 amino acids long is one member of a space of 20^100 possible sequences. But what is the specification? Having that exact sequence (1 in 20^100)? Having the same function? Having any function useful to the organism? So choosing the specification is making assumptions about what you think the "design" must be.

DrREC
December 14, 2011, 12:49 PM PDT
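
DrREC's candidate specifications give very different numbers, which is his point. A small sketch of the two that can be enumerated directly; the "any hand better than your opponent's" case is broader still and is not computed here.

```python
from math import comb, log2

total_hands = comb(52, 5)                 # 2,598,960 five-card hands
straight_flushes = 40                     # includes the 4 royal flushes

p_one_exact = 1 / total_hands             # one particular straight flush
p_any_flush = straight_flushes / total_hands

print(f"one exact straight flush: {-log2(p_one_exact):.1f} bits")   # ~21.3 bits
print(f"any straight flush:       {-log2(p_any_flush):.1f} bits")   # ~16.0 bits
# The improbability you report depends entirely on which target you pick.
```
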
"Why do I get the feeling that you, like MathGrrl before you, are not interested in learning about ID, but only in trying to find fault with it?"

Both sides could gain by making the assumption that the other side is arguing in good faith. From my side, I wonder why ID continues to avoid proposing a theory of design. It would seem to me that before arguing that living things are designed rather than evolved, one should be able to demonstrate that this is even possible.

Petrushka
December 14, 2011, 12:47 PM PDT
PaV, this seems a common tactic of yours. If you're so well read, why don't you dismiss my stupid questions with a line or two? I'm quite aware of Dembski's argument. I'm also aware of the many, many permutations it seems to have spawned on this site: CSI, dFSCI, FIASCO, or whatever KF's pet version is called.

Do you disagree with the statement from gpuccio (someone else who claims to be well read on the matter) that "Functional specification...is defined by a conscious designer"? Since functional specification is what is used to determine design, you are determining design with a determined design. In light of the counterhypothesis that evolution can produce results that appear designed, this is most unsatisfying.

DrREC
December 14, 2011, 12:46 PM PDT
DrREC:

"So in the determination of design, you have 'Specified X is improbable; therefore X was designed.' And specified = 'defined by a conscious designer.' So you choose design in determining design. Specification requires a specifier, and that is you."

Why do I get the feeling that you, like MathGrrl before you, are not interested in learning about ID, but only in trying to find fault with it? Your statement, "Specification requires a specifier," completely misunderstands the technical meaning of a "specification". Why don't you read a book about ID? Why don't you read NFL, for example? I've read R.A. Fisher's book and Origins. Why not spend the time learning about this stuff before you come over to the website?

PaV
December 14, 2011, 12:38 PM PDT
I find this reply a bit unusual.

First, you keep acting as though fcsi is actually calculable, in a meaningful manner. I've never seen you do it. Perhaps take human aldolase reductase, and walk me through the process. It might clarify the process. What is its specificity? What is its complexity?

Secondly, there seems to be an awful lot of knowledge that is dispensable to you:

"Possible functional intermediates can certainly be taken into consideration, but they must be shown to exist, and to be naturally selectable in a specific context."

Wouldn't it be necessary to rule them out to make a design inference?

And again:

"If a selectable intermediate is known, dFSCI will be computed for that intermediate, and then for the transition to the final result. IOWs, if B is the final protein, and no selectable intermediate is known, I will compute dFSCI for B. If A is shown to be a functional selectable intermediate for B, I will compute dFSCI for A, and dFSCI for the transition from A to B. IOWs, I compute dFSCI only for the parts of the algorithm that are attributed to RV. NS is a necessity mechanism, and it is treated separately. But it must be explicit, demonstrated NS."

So until evolution is demonstrated for something, to your satisfaction, you assume design? Why not do the actual work, as design scientists, and determine the specificity and complexity?

"Now, please, don't come and say that we cannot exclude that functional intermediates will be found some day."

That seems inherently reasonable, given the work of those reconstituting ancestral proteins and determining those intermediates. http://scholar.google.com/scholar?q=reconstruction+ancestral+protein&hl=en&as_sdt=0&as_vis=1&oi=scholart

"For basic protein domains, no path based on selectable intermediates is known."

Is actually false. Small peptides can symmetrically fold to make functional domains. This may be a reason so many protein domains have internal symmetry or are built of repeats. http://www.pnas.org/content/108/1/126.full

Last, we're straying from my original point. You expressed it best yourself:

"Functional specification, for a protein, is the specific definition of the biochemical function of that protein, and a threshold of minimal functionality for it. It is defined by a conscious designer, on the basis of what he observes."

In detecting design, you assign the design. It is really simply that stupid.

DrREC
December 14, 2011, 12:20 PM PDT
DrREC: Wrong. The specification is a possible function recognized and defined explicitly by a conscious observer. Let's take the example of an enzyme. It accelerates a specific biochemical reaction. That is a function that can be defined objectively, and measured in the lab. I am not making it up. My only role is to recognize it as a function, because functions have a meaning only for conscious and purposeful agents.

"You assume it needs to be of that form and function."

What do you mean? I am assuming nothing. My specification, at this point, is: "any molecule that can accelerate reaction X by at least Y, in the lab". There is no assumption at all.

"You assume no ancestral promiscuous function covered for it."

That is completely gratuitous. If you have followed at least some of my posts here, you should know that I have stated many times that dFSCI gives us the probability of getting to a certain type of functional sequence in a purely random system, either through a random search or a random walk from an unrelated state. That I have said clearly many times. Possible functional intermediates can certainly be taken into consideration, but they must be shown to exist, and to be naturally selectable in a specific context. They cannot only be "imagined" or declared "possible". That is not science, but fairy tales.

If a selectable intermediate is known, dFSCI will be computed for that intermediate, and then for the transition to the final result. IOWs, if B is the final protein, and no selectable intermediate is known, I will compute dFSCI for B. If A is shown to be a functional selectable intermediate for B, I will compute dFSCI for A, and dFSCI for the transition from A to B. IOWs, I compute dFSCI only for the parts of the algorithm that are attributed to RV. NS is a necessity mechanism, and it is treated separately. But it must be explicit, demonstrated NS.

For basic protein domains, no path based on selectable intermediates is known. Therefore, with our present knowledge, their dFSCI corresponds to the whole functional information of the molecule. Now, please, don't come and say that we cannot exclude that functional intermediates will be found some day. Have some respect for my intelligence (and patience).

"You assume the design of the system in a way that ignores the evolutionary hypothesis."

Not true. I simply ignore generic hypotheses of possible, never proposed, and never shown paths that, as any sensible observer can understand, are in no way a scientific alternative, being based on mere imagination and faith.

"You specify the specificity."

That's really tough! Is it meant as an offense? :) Anyway, whatever it means, it's simply not true. What I do is: I define the specification.

"Then you handwave about the complexity, never actually calculating the number of forms that could cover the same function."

Completely false. If you read what I have written, you will see that the computation of the functional space is a fundamental step in the computation of dFSCI. Where is your problem?

gpuccio
December 14, 2011, 10:51 AM PDT
8.1.1.1.2 gpuccio:

"Functional specification, for a protein, is the specific definition of the biochemical function of that protein, and a threshold of minimal functionality for it. It is defined by a conscious designer, on the basis of what he observes."

So in the determination of design, you have "Specified X is improbable; therefore X was designed." And specified = "defined by a conscious designer." So you choose design in determining design. Specification requires a specifier, and that is you.

Let's think about what this actually means. In determining specificity, you assume design: that, say, an enzyme is necessary. You assume it needs to be of that form and function. You assume no ancestral promiscuous function covered for it. You assume the design of the system in a way that ignores the evolutionary hypothesis. You specify the specificity. Then you handwave about the complexity, never actually calculating the number of forms that could cover the same function.

DrREC
December 14, 2011, 10:16 AM PDT
lastyearon: You have understood well, except for the last step. The design inference is, as the word says, an inference. The reasoning goes (briefly) as follows:

a) I define a formal property, objectively verifiable in an object.

b) I check that property in objects that have certainly been designed (human artifacts, where we can directly ascertain the design process), and find that it is often present in that category.

c) I check that property for objects that are not designed (any natural object where we can exclude, empirically and reasonably, any intervention of a conscious designer in the determination of the specific form we observe), and find that it is never observed.

d) Biological objects, being the controversial category, are obviously excluded from this phase.

e) I can also check my property in a blind way against human-designed objects and non-designed objects: I find that, if used as an empirical marker of design, it gives no false positives and many false negatives (all objects with dFSCI are designed, but not all designed objects exhibit dFSCI).

f) From the previous empirical passages, I derive the reasonable expectation that dFSCI is a good marker of designed objects, very specific (no false positives), but not sensitive (many false negatives).

g) Analyzing the controversial set of objects, biological objects, I find that many of them exhibit very high levels of dFSCI.

h) On that basis, I infer a design origin for those objects as the best scientific explanation available.

i) That implies that other proposed explanations must be shown to be wrong, or flawed, and that is part of ID theory too.

gpuccio
December 14, 2011, 10:10 AM PDT
"a) I observe that a protein has a specific function. I define it and provide a way to measure it and a minimal threshold for it."

As I understand what you're saying, you observe a particular function, and define what the minimal qualifications are to accomplish that function.

"By some approximation, I measure the search space for that kind of protein and the target space (the number of functional sequences that ensure that function as I have defined it)."

I think you're saying that you try to identify how improbable that protein's function is by identifying the range of possible functions (and non-functions) for it.

"c) The ratio of the target space to the search space is the dFSCI of that functional protein (expressed as -log2, in functional bits). d) If the dFSCI is higher than a conventional threshold (I usually propose 150 bits for a realistic biological system, and I believe I am still too generous), I infer design as the best explanation, on the basis that such levels of dFSCI have been observed only in designed things."

So a function that is extremely improbable, based on a large search space and a small target space, has high dFSCI. Missing in your process is some way of independently assessing whether the function has hit a specific target that a designer intended. Without that, all you're doing is observing that protein X is very complicated, and it's very unlikely that anything else could accomplish the things it does. In no way does that imply that anyone or anything intended it to do that.

lastyearon
December 14, 2011, 09:44 AM PDT
lastyearon: I really don't follow your reasoning. I don't need to "justify" that a protein was designed for a specific function. What I do is:

a) I observe that a protein has a specific function. I define it and provide a way to measure it and a minimal threshold for it.

b) By some approximation, I measure the search space for that kind of protein and the target space (the number of functional sequences that ensure that function as I have defined it).

c) The ratio of the target space to the search space is the dFSCI of that functional protein (expressed as -log2, in functional bits).

d) If the dFSCI is higher than a conventional threshold (I usually propose 150 bits for a realistic biological system, and I believe I am still too generous), I infer design as the best explanation, on the basis that such levels of dFSCI have been observed only in designed things.

I am afraid you don't really understand the process. And yes, "anything that provides people a good investment in a well defined context" is certainly a possible specification for something. But what has that to do with biology?

gpuccio
December 14, 2011, 09:13 AM PDT
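
Putting gpuccio's steps (a) through (d) into a minimal sketch: the protein size and the target-space estimate below are placeholders for illustration, not measurements from the thread.

```python
from math import log2

def dFSCI_bits(target_space, search_space):
    """-log2 of the ratio of functional (target) sequences to all sequences."""
    return -log2(target_space / search_space)

def infer_design(target_space, search_space, threshold_bits=150):
    """Design is inferred only when functional complexity exceeds the threshold."""
    return dFSCI_bits(target_space, search_space) > threshold_bits

# Illustrative numbers only: a 120-residue protein (search space 20**120)
# with an assumed 10**40 functional sequences.
search = 20 ** 120
target = 10 ** 40
print(f"dFSCI = {dFSCI_bits(target, search):.1f} bits")     # ~385.7 bits
print("design inferred?", infer_design(target, search))     # True under these assumptions
```
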
