
Durston Cont’d

Kirk Durston‘s Thoughts on Intelligent Design

 

In this thread, I would like to lay out my own thinking regarding a method to detect or identify examples of intelligent design. I then would like to unpack my thinking in a slow, meticulous (pedantic perhaps?) way and, if we can get that far, apply it to a few examples, including a protein and the minimal genome.

 

Defining ‘Intelligent Design’:

 

I commonly see the term ‘intelligent design’ used in two ways. An example of the first way is in a magazine headline I saw this morning:

 

‘Evolution by Intelligent Design’

 

The above example is similar to the way ‘planning’ is used in, ‘Success through good planning.’

 

In this sense, we can define Intelligent Design as the ability of a mind to produce an effect that both satisfies a desired function or objective and might not otherwise likely occur. This ability emerges out of what we understand to be intelligence, defined in <a href="http://en.wikipedia.org/wiki/Intelligence">Wikipedia</a> as the capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.

 

The second way I see the term intelligent design used is:

 

‘That traffic control system is a beautiful example of intelligent design.’

 

The usage of ‘intelligent design’ in the above sentence is similar to the usage of planning in, ‘That rescue operation was an excellent piece of planning.’

 

In this second type of usage, we can define intelligent design as an effect that satisfies a function or objective and requires a mind to produce. Other examples of intelligent design are the Sphinx and the Microsoft Vista operating system.

 

In the first sense, ‘intelligent design’ is an ability and in the second sense, ‘intelligent design’ is an effect, or result of that ability.

 

With this in mind, the definition of intelligent design that I will be using in this discussion is as follows:

 

Intelligent Design:  1. the ability of a mind to produce an effect that both satisfies a desired function and might not otherwise occur.  2. an effect that performs a function and that requires a mind to produce.

 

I realize that there are other definitions out there, some of which I do not at all agree with (e.g., Wiki’s). In general, most of the definitions of intelligent design that I see are actually specific examples, applications or results of intelligent design, rather than the defining essence of intelligent design. Ultimately, what I want to argue is that examples of intelligent design all required a mind to produce. I then want to argue that intelligent design is the most rational explanation for the protein families and the minimal genome. I will pause here in case anyone wishes to raise a question about what I’ve covered thus far. Then I will proceed to the next step.


99 Responses to Durston Cont’d

  1. Kirk,
    All the examples you cited involve not only a mind but also a body, i.e. a way to bring the design into physical reality. I’m not interested in mind/body semantics (I’m willing to accept that a thought is non-material for the sake of argument), but I think it’s safe to say that no one has ever observed a mind willing something into reality without the help of, for example, hands. So you can’t really say that anything designed is just the product of a mind.

  2. Kirk,
    I will follow your thinking to the end and not interrupt. If we open the debate already, you’ll never get through!

    I clearly follow your premise, as I am sure most others can, but would like to point out that the antagonistic position held by the Prof., and most other hard-core Darwinists on this site, is one of unbending resolve that blind chance produced an effect/function, no matter how much the unlikelihood of its “natural” occurrence is pointed out to them.

    I would like to point out Newton’s conclusion in Principia, where he concluded that humans know God only by examining the evidences of His creations:

    “This most beautiful system of the sun, planets, and comets could only proceed from the counsel and dominion of an intelligent and powerful Being. He is eternal and infinite, omnipotent and omniscient; that is his duration reaches from eternity to eternity; his presence from infinity to infinity; he governs all things, and knows all things that are or can be done. We know him only by his most wise and excellent contrivances of things, and final causes; we admire him for his perfection; but we reverence and adore him on account of his dominion; for we adore him as his servants.”

    i.e., Newton found the fact that such order, rather than chaos, should be found in reality to be overwhelming evidence of God’s ultimate dominion over reality, and thus that the atheist has no rational basis to appeal to chaos/chance in the first place.

    With that said, given my utter contempt for the atheistic/materialistic premise when compared with the actual state of scientific evidence, I will eagerly watch from the sidelines and intrude no more in what I think will be a very interesting exchange that may lead to “checkmate”.

  4. Dear all,
    Can we all let Kirk finish his presentation before we comment?

  5. I agree, Prof_P.Olofsson. Please continue, Kirk; we do not have much to go on so far.

    Like the name Prof_P.Olofsson, btw. I have a textbook written by a Professor Olofsson; it is a good one too.

  6. How long will it be before we get someone demanding that Kirk add the motivation of the designer, or who created the designer, to the discussion?

    Kirk, good start. You will get a lot of irrelevant questions so be careful with your time on this. You will be amazed at the inanity that will come up.

    Peter Olofsson has written a couple of articles attacking ID that are mainly to do with the use of probability and statistics in supporting ID.

    http://ramanujan.math.trinity......Chance.pdf

    http://ramanujan.math.trinity......dPhilo.pdf

    You might want to read them to know what some critics have said. Professor Per is a frequent commenter here at UD.

  7. Other examples of intelligent design are . . . the Microsoft Vista operating system.

    Well, um . . .

    Sorry, Professor. Kirk, please continue.

  8. Kirk,

    very clear and well said. In my usual language, I would just call your number 1 “the ability to design” (a function of conscious intelligence), and your number 2 “the designed object”, or “the product of design”. Obviously, design detection is made on #2.

    I would suggest that we could also define a third entity, the “process” of design, which in many instances can be observed, but is distinct from the designed object itself.

  9. trib (#5):

    You are a long time IDist, so remember one of our fundamental mottoes: “bad design does not mean no design” :-)

  10. GP, I propose a caveat: Microsoft design does not mean intelligent design :-)

  11. trib:

    Microsoft design would probably need a scientific theory of its own, to be explained! :-)

  12. gpuccio:

    Kirk,

    very clear and well said.

    Where? When?

    As best I can tell, Kirk Durston hasn’t made a peep on this thread.

  13. Microsoft design would probably need a scientific theory of its own, to be explained! :-)

    LOL

    Adel, I’m confident to almost 10^150th that Barry’s letting him use his name.

  14. Color me clueless.

    (Don’t get old.)

  15. Prof. Olaf…I just read your chance.pdf paper.

    I’m happy to be corrected, but it seems you sell Prof. Behe short. The reason he can claim that NO mutations of probability 10^-20 will occur in a population of 10^12 individuals is because he and other biochemists are acquainted with known mutation rates per generation. Nowhere do I see the mention of mutation rates (which, in your example, might be akin to knowledge of how frequently individuals actually play the lottery) in your paper. It would seem that some knowledge of the underlying biochemical processes would be useful before running through these probability calculations.
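    The kind of back-of-envelope check being discussed can be sketched in a few lines. The figures below are just the round numbers from the comment above (a per-individual probability of 10^-20 and a population of 10^12), not values taken from Behe’s or Olofsson’s papers:

```python
import math

# Hedged sketch: both figures are the round numbers from the comment,
# not values from either paper under discussion.
p_mutation = 1e-20   # assumed probability of the specific mutation in one individual
population = 1e12    # assumed number of individuals

# Expected number of occurrences across the whole population
expected = p_mutation * population          # ~1e-8

# Probability of at least one occurrence (Poisson approximation)
p_at_least_one = 1 - math.exp(-expected)    # ~1e-8, i.e. effectively never

print(expected, p_at_least_one)
```

    With an expected count around 10^-8, the Poisson approximation says the event essentially never occurs in such a population; the point in the comment is that this conclusion rests on knowing the underlying mutation rates.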

  16. “Kirk,
    I will follow your thinking to the end and not interrupt. If we open the debate already, you’ll never get through!”

    Prof O, actually I think Kirk would want to know whether anyone agrees or disagrees with his opening statement before going any further.

    One step at a time

    Vivid

  17. Second Installment

    It would be helpful to construct a method that we could use to detect effects that have been produced by intelligent design (definition 1), so let us think about that for a minute. We cannot know what sort of art other intelligent agents might like, or what they would consider to be a good design. There is one type of circumstance, however, where all intelligent agents must exercise their ability for intelligent design. It occurs when the agent desires some function or objective and the physical system is not likely to cooperate by producing it. The agent then exercises that ability and what results is not only an example of intelligent design (definition 2), but an anomaly within the physical system. It is anomalous in virtue of the fact that the physical system was not likely to produce it.

    For example, an observer gazing at the skies between AD 1000 and the present would have seen an empty sky save for the usual clouds, birds, etc. However, early in the 20th century, the observer would have seen something anomalous: an aircraft. People desired a function or held an objective to fly. Nature did not seem very helpful in satisfying this objective, so humans exercised their ability for intelligent design (def. 1) and produced a piece of intelligent design (def. 2) that would fulfill that function. The resulting aircraft in the sky was an anomaly within the physical system.

    The less likely nature is to satisfy some particular desire or objective of an intelligent agent, the greater the requirement for intelligent design and the greater the resulting anomaly. Conversely, the more likely nature is to supply the desired function or meet the desired objective, the less the need for intelligent design (def 1). The intelligent agent can still exercise her or his ability for intelligent design, but the resulting effect will only be a small anomaly within the physical system. In general, we can think of examples of intelligent design as:

    1. having a function of some sort even if we are unsure as to what it is
    2. anomalous, to varying degrees, within the physical system

    Intelligent design can mimic nature, in which case the physical system can satisfy the objective and the resulting effect produces no anomaly at all. For this reason, among others, I will not concern myself with trying to detect all effects that are products of intelligent design. Rather, I will focus only on detecting those examples of intelligent design where intelligent design (def 1) was required. To do this, I will need to do two things:

    1. provide a method to measure some essential property of these functional anomalies and,
    2. establish some sort of threshold beyond which intelligent design is required.

    By ‘required’ I do not mean that it would be nomologically or logically impossible for nature to produce the effect. I merely mean that the probability that nature could produce the effect becomes so low within the boundary conditions of the problem that the intelligent agent must exercise intelligent design (def 1) and it would be irrational to believe otherwise.

    I’ll pause here in case anyone wants clarification on anything. I’ll try to post at least once every couple days (working full time in addition to being a full time Ph.D. candidate, plus having a family makes it difficult for me to post more often). If we can get past this post, then I will focus on (1) above in the next step.

  18. Kirk:

    again, I agree with your points. Just let me point out that your concepts seem to be approximately the same, with different names, as the classical points in ID theory. In that sense, your concept of function seems the same as that of functional specification, but it is interesting how you link it to a “desire” of the intelligent agent, providing a teleological facet which is often overlooked.

    And your concept of “anomaly” is evidently linked in some way to the concept of complexity (in the sense of improbability) of the observed pattern. But calling it an “anomaly” is interesting in itself. I will have to think about that.

  19. Kirk

    Just delete this comment after reading it. Use of the term intelligent agency to specify your first definition of “intelligent design” will disambiguate it.

    agent = the actor
    agency = the ability to act
    design = the act itself

  20. Dave, that is a helpful point re. ‘intelligent agency’

    A general comment: I welcome collegial criticisms/worries as we proceed. Folks like Prof. Olofsson, Mark and others are useful contributors in either exposing weak links or lack of clarity. My attitude is that the truth can withstand anything you can throw at it, although my grasp of the truth and ability to communicate it may be in dire need of improvement.

  21. Kirk

    I like the concept of function design. Its meaning is a little clearer to me than specified complexity. My opinion has nothing to do with the fact that I am a Canadian.

    Also, “working full time in addition to being a full time Ph.D. candidate, plus having a family makes it difficult for me to post more often”

    I am working on a Masters ‘full time’ while working full time, and have a family too. I almost understand what you are going through. My courses are at the PhD level. I thought universities did not allow PhD students to work full time. Should we keep this on the qt? :)

  22. To detect artifacts/effects that were likely to have required intelligence to produce, I have proposed that we must look for a special type of anomaly …. an anomaly that has some sort of function …. a functional anomaly. I have proposed that we will need two things:

    1. a method to measure some essential property of these functional anomalies and,
    2. a threshold within that essential property above which intelligent design is required.

    To do this, we need a link between the essential property of functional anomalies that we will measure, and intelligence. If you will look at the definition of intelligence, it has several properties. There is one property that I will focus on. It is an empirical fact that intelligence can produce functional information. I realize there may be some reservations about this concept of functional information, but please hold off on those questions until I have defined functional information (probably in my next post). At this point, I will propose an hypothesis as follows:

    H: a unique property of intelligence is the ability to produce significant levels of functional information.

    Note that I have claimed that this property is ‘unique’ to intelligent agents. The hypothesis does not suggest that mindless processes cannot produce any functional information at all but, rather, only insignificant levels within the ‘noise’ levels of those mindless systems.

    Some of you may notice that there is a beautiful symmetry about this. Before the effect existed, there was only the intelligence that desired and produced it. The intelligent agent had the ability to produce the functional information required to produce the effect that nature could not, with the result that a functional anomaly was formed. Later (whether we are looking at a signal from deep space, an archeological artifact, or a suspicious death in forensics) we see only the anomaly. We can then work backward to derive the signs of intelligence behind that anomaly by measuring the functional information required to produce it. Think about this for a bit; I believe it is quite powerful.

    Falsification of H: Out of the hypothesis H arises a prediction that can be falsified; mindless natural processes cannot produce significant levels of functional information. It follows, therefore, that the method that I will propose to detect examples of ID can be falsified. If anyone does not like the results of the application of my method, they have merely to falsify hypothesis H. At this point, some may be thinking that it will be easy to falsify H, but restrain yourself a bit longer until I have provided a definition of functional information.

    So to summarize this post, regarding (1) above, the essential property of functional anomalies that we will measure is the functional information to produce the anomaly. The threshold mentioned in (2) will be some level of functional information above which a mindless process will not be able to exceed with any reasonable probability. That level/threshold will need to be established in a future post. I’ll pause here for queries before I proceed with a definition of functional information.
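    The two-part scheme summarized above amounts to a simple decision rule. As a hedged sketch only: the threshold value below is a placeholder assumption, since establishing the actual level is deferred to a later post:

```python
# Hedged sketch of the proposed detection scheme.
# THRESHOLD_BITS is a placeholder assumption, not a value given so far.
THRESHOLD_BITS = 500.0

def infer_design(functional_information_bits: float) -> bool:
    """Return True when measured functional information exceeds the
    threshold beyond which mindless processes are argued to be too
    improbable under hypothesis H."""
    return functional_information_bits > THRESHOLD_BITS

print(infer_design(50.0))    # below threshold: within the 'noise' of mindless processes
print(infer_design(700.0))   # above threshold: design inferred under hypothesis H
```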

  23. KD

    Like what I am seeing from you.

    Function-specifying complex information [FSCI] — you and colleagues seem to have been thinking in terms of functional [as opposed to both orderly and random] sequence complexity — is emerging in your discussion, in a very useful, empirically anchored way.

    I, too, like the “anomaly” issue: one does not expect to see it — it draws attention, it sticks out. It cries out for explanation per plausibly adequate causal forces, and that leads us to issues of inference to best explanation over the adequacy of mechanical vs intelligent causes on relevant aspects; the latter reflecting art as a possible force.

    [My own rather more modest remarks (esp. by comparison with that work in progress diss . . . I assume you are "writing up from Day 1") are in my always linked, esp. App 3 on FSCI and CSI and their roots in the 70's - 80's. If it will help you in getting a better expression than my rough notes, I also made some initial notes on functionality here.]

    Keep up the good work!

    GEM of TKI

    PS: I too have had to work full-time, study and try to run a family. (Sleep is the first thing that gets lost in that equation. You have my sympathy.)

  24. I guess classic design detection is something along the lines of the Explanatory Filter — if physics can’t explain it, and it couldn’t happen by chance, then it was designed. The problem, of course, is that for many things it was thought physical laws couldn’t explain, it turned out that they could.

    So a faith has somehow developed that the physical sciences can answer everything.

    OTOH, design indisputably exists.

    Suppose we dump the EF, as I think was discussed a few weeks back?

    As per Dembski, we start with a pattern first and, if it is found to have a certain measured complexity, we can be certain it is designed, after which it can be presumed that physics and chance could not have done it.

    This is not eliminative.

    It looks like Kirk is going to start with function first, which is interesting.

  25. Dave, it’s a shame there is not a private means of contacting each other as per FreeRepublic.

  26. KD @22

    a unique property of intelligence is the ability to produce significant levels of functional information.

    Note that I have claimed that this property is ‘unique’ to intelligent agents.

    . . .

    Falsification of H: Out of the hypothesis H arises a prediction that can be falsified; mindless natural processes cannot produce significant levels of functional information. It follows, therefore, that the method that I will propose to detect examples of ID can be falsified. If anyone does not like the results of the application of my method, they have merely to falsify hypothesis H. At this point, some may be thinking that it will be easy to falsify H, but restrain yourself a bit longer until I have provided a definition of functional information.

    If my understanding is correct, you are looking to mathematically quantify the limits of evolutionary mechanisms. I’ve long thought that this is a rich vein for ID research to mine, so I am looking forward enthusiastically to your definition.

    You’re probably already planning on this, but your post doesn’t make it clear whether or not you’ll be providing positive evidence for your uniqueness claim or only predictions that could be falsified in principle. Either would be great, of course, but both would be extremely compelling.

    JJ

  27. KD, Peter, kf,
    Luxury! I have to work full time, study, run four families in different states, and fly home to Sweden to care for my grandmother every night.

  28. KD,

    A similar discussion, though not as theoretically organized as yours, has been going on here for some time. It always revolves around the concept of FCSI or, as kairosfocus calls it, FSCI. FSCI is easy to understand and the examples are powerful.

    The argument is that FSCI does not appear anywhere in nature except life. Now life is the issue under analysis, so the argument goes: if FSCI has never been generated by nature at any level of complexity, how can one expect so much FSCI to develop in life? The answer is that life is the place where nature developed FSCI. They beg the question, but to them that is the answer. They do point to the many ongoing research efforts to show how life could have arisen, with a cocky confidence that it is only a matter of time.

    They then go on to say that life, which has FSCI, creates new FSCI, and that all the complexity and function we see is the result of these processes working out over deep time. They provide no real data to support this, but the answer is always the same: time will do it; the multitude of variation-creation processes that change a genome is the basic mechanism, and then natural selection leads to new FSCI. There is a certain logic to their arguments, and it is supported by the fossil record, which shows the gradual increase in complexity and increasing diversity of life over deep time.

    That is the argument of the naturalistic thinkers, simply stated. No real data, but some circumstantial evidence, since micro-evolutionary processes exist and deep time cures all. So I do not know if your approach handles this. It sounds like it may handle OOL ok, but then the refrain will be the mantra that “deep time, deep time” cures all once FSCI exists in life.

    I am sure you are aware of all this and it will be interesting to see how your work and ideas handle these objections. So far we like what we see.

  29. Prof P,

    “Luxury! I have to work full time, study, run four families in different states, and fly home to Sweden to care for my grandmother every night.”

    Exactly what would you expect from a descendant of the mighty Sven.

    But I am not mocking. No, that would make dear Sven turn in his grave. He had no need for that.

  30. and fly home to Sweden to care for my grandmother every night.

    I call BS!!! If your grandmother lives in Sweden you wouldn’t have to care for her!!!!

  31. KD,

    Thank you for your efforts; I am following your posts with much interest.

    I don’t want to dirty the water, but I can’t help but notice how much more valuable to the process this conversation is, compared to recent material. It has been argued recently (even by people on this very thread) that design detection theory is of a lesser value if it doesn’t address issues beyond the empirical evidence (such as the presence of evil in the world).

    I hope it becomes apparent (if it is not already) that such claims are unnecessary to design detection.

  32. tribune[30],
    Touché!

    The exchange reminded me of this oldie:

    http://www.youtube.com/watch?v=Xe1a1wHxTyo

    The “luxury” comment appears at 2:10.

    Sorry, I broke my promise to stay away, but the thread is already so cluttered. I’ll compile Kirk’s comments at the end and read them through.

  33. Prof PO:

    You are hereby officially outed as Superman II! (Should have thought a bit more before admitting to flying home to Sweden every night!)

    _______

    Jerry:

    One more little point: FSCI does not come up for serious consideration till you are looking at 500 – 1,000 bits of info storage capacity.

    The latter is room for enough configs to be ten times the SQUARE of the number of quantum states of the cosmos’ atoms across its lifetime. Chance based search sees a needle in a haystack problem on steroids.

    Reasonable minimal storage for life — DNA — ~ 300 k 4-state elements, 600 times the limit.

    So, the islands of function are credibly very sparse in the config space.

    Directly observed cases of FSCI all trace to intelligent design, e.g. longish posts in this thread.
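    The arithmetic behind those figures can be checked directly. The 10^150 quantum-state count is simply taken as given from the comment above:

```python
# Sketch checking the arithmetic above; 10**150 is taken as given
# from the comment, not derived here.
configs_1000_bits = 2 ** 1000          # number of states in 1,000 bits of storage
quantum_states = 10 ** 150             # cited state count, assumed here

# ~1.07e301 vs 1e301: just over ten times the square of the state count
print(configs_1000_bits > 10 * quantum_states ** 2)   # True

# DNA: ~300,000 4-state elements = 600,000 bits
dna_bits = 300_000 * 2
print(dna_bits // 1000)                # 600 -> "600 times the limit"
```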

    ________

    Have fun all.

    GEM of TKI

  34. kairosfocus,

    You have outed Prof P as Superman, and as such, 1000 bits of probabilistic power is nothing for the son of Sven to toss aside with p = 1.0.

    We need to find some kryptonite for such a powerful force.

  35. LOLOL, good one Professor O.

  36. gpuccio:

    I am with you completely. No problems up to now. And again, I like your references to “desire”, which underline the role of a conscious intelligent “and” motivated agent behind the process of design.

  37. kirk:

    the previous post was obviously directed to you. It’s the second time that I direct a post to myself by mistake. I am starting to be worried.

  38. gpuccio,
    At least you are with yourself completely.

  39. In the previous post, I presented Hypothesis H: a unique property of intelligence is the ability to produce significant levels of functional information. I then suggested that an essential property of functional anomalies is the functional information required to produce them. Thus, we have a link between intelligence and functional anomalies. We now need a method to quantify these functional anomalies in terms of functional information.

    In a recent paper in PNAS, Hazen et al. propose a method to measure the functional information encoded within biopolymers (Hazen, R.M., Griffin, P.L., Carothers, J.M. & Szostak, J.W. (2007) ‘Functional information and the emergence of biocomplexity’, PNAS 104, 8574-8581). This paper was an outcome of an earlier article in Nature in 2003 by one of the coauthors of the Hazen paper (J.W. Szostak (2003) ‘Molecular messages’, Nature Vol. 423, p. 689). Hazen’s equation was almost identical to an earlier equation published by Leon Brillouin in 1951.

    In Hazen’s equation, Functional information I(Ex) is defined as

    I(Ex) = – log2[M(Ex)/N] (1)

    where Ex is the degree of function x, M(Ex) is the number of different configurations that achieve or exceed the specified degree of function x, ≥ Ex, and N is the total number of possible configurations, both functional and otherwise. For proteins, N is simple to compute:

    N = 20^L

    where L is the length of the sequence. The problem, however, is computing M(Ex).
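    Equation (1) is straightforward to compute once M(Ex) is in hand. The protein numbers below are purely illustrative assumptions of mine, since estimating M(Ex) is the hard part:

```python
import math

def functional_information(m_ex: float, n: float) -> float:
    """Hazen et al. (2007): I(Ex) = -log2(M(Ex)/N)."""
    return -math.log2(m_ex / n)

# Illustrative only: a hypothetical 100-residue protein.
L = 100
N = 20 ** L          # total sequence space, N = 20^L
M_Ex = 10 ** 20      # assumed count of sequences meeting the function threshold

print(functional_information(M_Ex, N))   # ~365.75 bits for these assumed numbers
```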

    In 2005 a paper was published that defined three subsets of sequence complexity. The three types were defined as ordered sequence complexity (OSC), random sequence complexity (RSC) and functional sequence complexity (FSC). At that time the authors were uncertain as to how to measure FSC. I contacted them with a method, and we went on to publish a paper proposing a measure of FSC. For both Shannon information and Kolmogorov information, an equivalent term is ‘complexity’ rather than ‘information’. It is the same here: functional complexity is equivalent to functional information. To check this, the more sophisticated equation for functional complexity presented in the Durston et al. paper can be simplified, with certain assumptions, to the Hazen et al. equation. The beauty of the Durston et al. equation is that it can actually be evaluated using real data. I have found that with at least 500 aligned sequences, the sample size starts to become large enough to adequately estimate M(Ex), although I prefer to work with at least 1,000 sequences for any protein family.

    For those interested in working with functional complexity, it is important to read the Durston et al. paper and get a firm grasp of the null state, the ground state, and the functional state. The functional complexity of a system is the change in functional uncertainty (defined in the paper) between the ground state and the functional state. The null state can be a special case of the ground state. Also, for basic properties determined by physics, the basic functional state is identical to the ground state, in which case zero functional information is required to produce the effect. This also holds true if the ground state is the null state. All objections I have seen result from a lack of understanding of these three states. The most common is a failure to note that if the functional state is the null state, then a vast number of possibilities are functional, but the functional information required is zero. As Abel and Trevors point out in their paper on three subsets of sequence complexity, there are only three major areas we have ever observed FSC. One is human languages, the other is human-designed software, and the third is in biopolymers such as DNA and proteins. Something to think about.
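    One simplified reading of the per-site estimate described above can be sketched as follows. The toy alignment and the choice of a uniform null state over the 20 amino acids are my assumptions for illustration; the ground-state machinery in the Durston et al. paper is richer than this:

```python
import math
from collections import Counter

def site_entropy(column: str) -> float:
    """Shannon entropy (bits) of one alignment column."""
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def fsc_estimate(alignment: list[str]) -> float:
    """Simplified functional-complexity estimate (in 'fits'):
    sum over sites of (null-state entropy - observed entropy),
    taking the null state as uniform over 20 amino acids."""
    h_null = math.log2(20)
    columns = zip(*alignment)  # site-wise columns of the alignment
    return sum(h_null - site_entropy("".join(col)) for col in columns)

# Toy alignment (hypothetical): a fully conserved site contributes
# ~4.32 fits; a highly variable site contributes much less.
toy = ["MKV", "MRV", "MKV", "MLV"]
print(round(fsc_estimate(toy), 2))   # -> 11.47
```

    In practice, of course, the estimate is only meaningful with the hundreds of aligned sequences mentioned above.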

    We now have a method to measure functional information and can apply it not only to sequences but to many other artifacts, effects, and configurations as well, including uses in archeology, forensics, SETI, genetics and even suspected cases of fraud in lotteries and casinos. Once you have an estimate of M(Ex), you then have an estimate of the target size M(Ex)/N for the search. For functional folded proteins, that target size is minuscule, to the point of approaching zero for all normal scientific problems. Keep in mind, it is physics that determines which amino acid sequences have stable folds, not biology. So a biological search engine does not make proteins; it must find them, and physics is the ‘keeper of the combinations’ that work. The next step in my method to detect examples of ID is to determine what the threshold is for nature, regarding how much functional information/functional complexity we might reasonably expect to observe within the ‘noise’ of the natural system. I’ll pause here, however, to give people a chance to clear up any questions they may have.

  40. Kirk:

    thank you for the wonderful post. First of all I have to say that I have always admired your paper, and have quoted it many times, both here at UD and on another blog, as the only example of easy and immediate computation of functional complexity in proteins. So I am very happy that we are able to discuss it with you directly here. I think that Abel and Trevors have given a very clear theoretical foundation to the concept of functional information in biology, but I didn’t know that the practical application of Shannon’s H to that computation was your personal idea. My most heartfelt compliments for that!

    I have many things that I would like to say about the computation of the target space in proteins. It is a complex and fascinating issue (and a very fundamental one). And it is one issue often used by darwinists to raise obscure objections. I can easily predict that we will see some of them here very soon.

    So, I would rather wait for the discussion to develop, and then offer some personal thought if it is necessary.

  41. Prof_P.Olofsson:

    Just a question to you, to start the discussion. I see that Hazen’s equation:

    I(Ex) = – log2[M(Ex)/N]

    is assuming, if I am not wrong, a uniform probability distribution. As you know, I completely agree with that position, but I would like your comments about that.

    And this is not Dembski, or any other ID source. This is a paper in PNAS. And it is exactly the same kind of computation that I have suggested many times, in discussions with you and others, both here and on Mark’s blog.

    So, can we discuss here this problem of distributions in a deeper and more objective way?
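    To make the question concrete, here is a toy comparison of I(Ex) under the uniform assumption versus a skewed background distribution. The two-symbol alphabet, the ‘functional’ set, and the skew are all arbitrary choices for illustration:

```python
import math
from itertools import product

# Toy setting: binary-alphabet sequences of length L over {'a', 'b'};
# the "functional" set is (arbitrarily) all sequences starting with 'aa'.
L = 10
seqs = list(product("ab", repeat=L))
functional = [s for s in seqs if s[:2] == ("a", "a")]

# Uniform assumption: I(Ex) = -log2(M(Ex)/N)
i_uniform = -math.log2(len(functional) / len(seqs))

# Skewed background: P(a) = 0.9, P(b) = 0.1 per site, sites independent.
def seq_prob(s, pa=0.9):
    p = 1.0
    for ch in s:
        p *= pa if ch == "a" else 1 - pa
    return p

p_functional = sum(seq_prob(s) for s in functional)
i_skewed = -math.log2(p_functional)

print(i_uniform)  # 2.0 bits: the target is 1/4 of the space
print(i_skewed)   # ~0.30 bits: 'aa' prefixes are common under the skewed prior
```

    The point is just that the same target set carries a different number of bits depending on the assumed background distribution, which is exactly the issue raised above.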

  42. KD [17,22,39]:
    Just some feedback directed only to the comments you’ve made in this thread:

    There is one type of circumstance, however, where all intelligent agents must exercise their ability for intelligent design. It occurs when the agent desires some function or objective and the physical system is not likely to cooperate by producing it. The agent then exercises that ability and what results is not only an example of intelligent design (definition 2), but an anomaly within the physical system. It is anomalous in virtue of the fact that the physical system was not likely to produce it.

    Why can’t (or shouldn’t) you consider the agent as part of the physical system?

    For example, an observer gazing at the skies between AD 1000 and the present, would have seen an empty sky save for the usual clouds, birds, etc. However, early in the 20th century, the observer would have seen something anomalous, an aircraft. People desired a function or held an objective to fly. Nature did not seem very helpful in satisfying this objective, so humans exercised their ability for intelligent design (def. 1) and produced a piece of intelligent design (def 2) that would fulfill that function. The resulting aircraft in the sky was an anomaly within the physical system.

    I notice you mentioned birds as part of the preexisting default background noise. When an intelligent agent would have come up with his own flying machine it would have been as a result of hours and hours and hours of observation of birds, and then there would be an attempt at duplication of something that already existed in nature.

    By ‘required’ I do not mean that it would be nomologically or logically impossible for nature to produce the effect. I merely mean that the probability that nature could produce the effect becomes so low within the boundary conditions of the problem that the intelligent agent must exercise intelligent design (def 1) and it would be irrational to believe otherwise.

    On another thread today I brought up compressibility in the original concept of CSI, wherein if some sequence of sufficient length indicates any kind of pattern whatsoever this rules out chance (as the percentage of compressible strings is exceedingly small according to Dembski). I was admonished that functionally specified information is the only relative concept within the context of biology. It is this concept that I believe you have attempted to formalize and are alluding to in the above paragraph, as something that exceeds the probability for nature to produce.

    Well my point would be that compressibility is already an extremely exclusive set. Any sequence of sufficient length exhibiting it, I do believe, we could rule out as happening by metaphysical chance. So what point does it serve to focus on a narrower subset of compressibility (functionally specified information) when, for the purposes of the probability argument, compressibility will suffice? I presume it’s because you would have to say the entire universe is designed, since we see patterns everywhere. But focusing on functionally complex specified information I don’t believe solves your problem. You couldn’t say mechanism (“laws”) didn’t produce it, only that randomness did not.

    For the record I do personally think that there is of necessity a direct correlation between the complexity of nature and the complexity of man. Furthermore I think that any set of laws and preexisting conditions that resulted in us would effectively equate to us, i.e. “Man” in a different form, in a different, preexisting phase.

    At this point, I will propose an hypothesis as follows:
    H: a unique property of intelligence is the ability to produce significant levels of functional information.
    Note that I have claimed that this property is ‘unique’ to intelligent agents.

    What if, as it turns out, humans do what they do solely by virtue of the configuration of their physical attributes? Then intelligence would be merely a property of a physical system.

    Would it be sufficient for you that any system producing life be labelled “intelligent”? If evolutionists as a concession one day started describing the mechanism they propose (such as it is) as “intelligent”, would that satisfy I.D.? Is there anything in your arguments that establishes that intelligence or mind is something other than natural or physical?

    As Abel and Trevors point out in their paper on three subsets of sequence complexity, there are only three major areas where we have ever observed FSC. One is human languages, the other is human-designed software, and the third is in biopolymers such as DNA and proteins. Something to think about.

    Wouldn’t you suppose that the reason that human-designed software exhibits FSC is because the “biopolymers such as DNA and proteins” that resulted in human beings exhibit FSC as well?

  43. KD:

    The functional complexity of a system is the change in functional uncertainty (defined in the paper) between the ground state and the functional state.

    That’s where it seems that things might get tricky. Sorry to bring up the diamond example again, but in regards to carbon configurations, you proposed three different ground states. These ground states had N=2, N=1, and N=large number, respectively.

    Presumably, our identification of the ground state is based on the conditions under which the observed configuration was formed. This means that the accuracy of our FSC calculation depends on our knowledge of the configuration’s causal history. Is this correct? If so, I’m sure you can guess what my next point will be.
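    If I may illustrate the dependence numerically, a minimal sketch (the N values echo the earlier diamond discussion but are otherwise arbitrary):

```python
import math

def fsc(n_ground: int, n_functional: int) -> float:
    """Functional complexity as the drop in uncertainty (bits) from a
    ground state with n_ground equiprobable configurations to a
    functional state with n_functional equiprobable configurations."""
    return math.log2(n_ground) - math.log2(n_functional)

# Same functional state (say, 1 configuration), three proposed ground states:
for n in (1, 2, 10**6):
    print(n, fsc(n, 1))
# N=1 gives 0 bits, N=2 gives 1 bit, N=10**6 gives ~19.93 bits:
# the measurement hinges on which ground state the causal history justifies.
```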

  44. [42] correction: relative = relevant, Para. 4.

  45. JT:

    On that other thread I already made the point that functional biological information is not particularly compressible, if at all. What is there compressible in the sequence of a functional protein? Therefore, functional information is perhaps a subset of the more general concept of CSI, but not a subset of “compressible” information.

    Regarding your other points, you make the usual standard objections of materialists, who try to deny the empirical nature of consciousness. I would remark that “intelligence” is a property of a conscious agent, and that nobody has ever demonstrated (notwithstanding all the arrogant statements of strong AI theory) that consciousness and intelligence can be explained as a consequence of the objective and mechanical laws of physics. Consciousness remains an empirical observation, unexplained by current materialistic theories. That is what is called “the hard problem of consciousness”. Therefore, you cannot out of dogmatic authority reduce intelligent agency to a product of mechanical laws.

    Until differently proven, intelligent agents remain an empirically observable reality, and they behave in a characteristic way, and have properties which cannot be found elsewhere. Intelligence, and the ability to output functional information easily and with very high complexity, are among them.

  46. JT:

    “Wouldn’t you suppose that the reason that human-designed software exhibits FSC is because the “biopolymers such as DNA and proteins” that resulted in human beings exhibit FSC as well?”

    I would never suppose that. Plants and lower animals are very rich in FSC in their DNA and proteins, but I am not aware that they design software. Am I missing something in your assumption?

  47.

    gpuccio[46],
    I think you might be turning JT’s logic around. I think he is saying essentially that you suppose “only FSC can produce FSC” not that “any FSC can produce FSC.” In logical terms, possessing FSC in the DNA etc is a necessary condition but not a sufficient condition to produce FSC. Or something like it. Sorry JT if I misinterpret you.

  48. Prof_P.Olofsson:

    But we have no evidence that possessing FSC in the DNA is a necessary, even if not sufficient, condition. As far as we know, the only condition necessary to design is to be conscious intelligent agents, as is proved by the observed (both subjectively and objectively) connection between the process of design and specific conscious representations, intentions, desires, and so on. And, unless we have solved the hard problem of consciousness (and we haven’t) nobody can affirm that possessing FSC in the DNA is a necessary condition to produce consciousness.

    So, let’s stay with what we know for certain: being a conscious intelligent agent is the only “necessary” empirical condition connected to the process of design, and to the production of designed objects. That’s an observable fact. As we don’t know what makes humans conscious intelligent agents, all the rest is assumptions.

  49.

    gpuccio[48],
    I don’t disagree with you. I don’t know if JT does either. I merely tried to explain what I think JT said when he asked “wouldn’t you suppose….”, in which case your objection regarding plants and lower animals would not be valid.

    I was going to make up a metaphor about Italy and soccer, but you get my point!

  50. gpuccio wrote [45,46]:

    On that other thread I already made the point that functional biological information is not particularly compressible, if at all. What is there compressible in the sequence of a functional protein? Therefore, functional information is perhaps a subset of the more general concept of CSI, but not a subset of “compressible” information.

    We could go back and dissect Dembski’s writings to settle this I suppose, but my characterization of them is based on a conscientious and thorough study of them undertaken within the limits of my abilities only within the last several weeks (You may recall an extended discussion on them here not too long ago.)

    But everything in his arguments starts with the observation regarding the small percentage of strings that are compressible. Then later he says at one point, (and paraphrasing now, but an accurate characterization nonetheless):

    “The key to ruling out chance is to keep the pattern simple.”

    In fact, the measure of CSI in the actual formula is inversely proportional to the complexity of the pattern detected.

    There may be something regarding the “detachability” of a pattern that eludes me and is perhaps not strictly tied to algorithmic compressibility, I don’t know. (Or maybe I’m not missing anything.)

    But the bottom line is, all that can be ruled out by this method is randomness. There is nothing in the whole procedure that can tell you, “This was caused by ‘design’ as conceived in I.D. circles and not by laws or mechanism.”

    I don’t think ruling out randomness is worthless though, as I do believe that evolution does largely equate to randomness unless most of the information came from the natural laws as opposed to the mutations.

    Regarding your other points, you make the usual standard objections of materialists, who try to deny the empirical nature of consciousness. I would remark that “intelligence” is a property of a conscious agent, and that nobody has ever demonstrated (notwithstanding all the arrogant statements of strong AI theory) that consciousness and intelligence can be explained as a consequence of the objective and mechanical laws of physics. Consciousness remains an empirical observation, unexplained by current materialistic theories. That is what is called “the hard problem of consciousness”. Therefore, you cannot out of dogmatic authority reduce intelligent agency to a product of mechanical laws.
    Until differently proven, intelligent agents remain an empirically observable reality, and they behave in a characteristic way, and have properties which cannot be found elsewhere. Intelligence, and the ability to output functional information easily and with very high complexity, are among them.

    I’m merely repeating some pretty commonsensical observations here, but to me, animals seem “conscious” but I for one am not going to assign mystical transcendent attributes to animals. An animal’s behavior, its being, its internal life, is attributable to its physical-chemical makeup. It seems to me a sort of hubris stemming from misconceived religious dogma, to just assume that humans operate according to some entirely different set of mystical principles.

    I think your argument above is obviously an argument from ignorance, as you say, ‘Until proven otherwise, I will assume such and such…’ That’s the definition of an argument from ignorance. My philosophical stance OTOH would be one of practicality: if human behavior is the result of something akin to I.D.’s conception of intelligence, then it is not potentially decipherable in the way that the rest of nature is. I would say that whatever we could potentially understand about intelligence is of necessity quantifiable, measurable, observable, physical. And to the extent intelligence is not these things, then it is meaningless.

    [JT:] “Wouldn’t you suppose that the reason that human-designed software exhibits FSC is because the “biopolymers such as DNA and proteins” that resulted in human beings exhibit FSC as well?”
    [gpuccio:] I would never suppose that. Plants and lower animals are very rich in FSC in their DNA and proteins, but I am not aware that they design software. Am I missing something in your assumption?

    The things that a human’s internal organs do exhibit a lot of genius – the function of the liver, the heart, and so on – and presumably all the FSC in them is directly attributable to FSC in human DNA. Of course, higher animals share all these attributes with us. But animals can perform remarkable feats of dexterity as well, that if broken down and analyzed are incredibly complex. I’m thinking of the dexterity of a cat, for example. Certainly many animals manifest various types of genius that we could never hope to duplicate. Mankind has his own type of unique genius as well, undoubtedly.

  51.

    Everybody,
    Could we perhaps abstain from very long comments? We want Kirk to be able to finish before this thread also gets long and starts loading slowly. Remember why it was started in the first place.

  52. JT wrote:

    But everything in his arguments starts with the observation regarding the small percentage of strings that are compressible. Then later he says at one point,…

    Both FSC and CSI deal with small subsets: FSC (as expounded by Durston et al.) deals with the small subset of functional states among total possible states; CSI (as expounded by Dembski) deals with the small subset of compressible strings among all possible strings.

    Both deal with subsets and sets, just different kinds. Durston’s is easier to deal with (IMHO) and is more focused and concrete, which is why we usually use FSCI around here, rather than the more generic CSI.

    Just my two cents.

    Atom

  53. Response to JT:

    JT asked, ‘Why can’t (or shouldn’t) you consider the agent as part of the physical system?’

    By physical system, I mean a system that is described/prescribed by the laws of physics. The laws of physics, described by simple equations, most of which do not exceed even one line of text, do not have the horsepower to explain intelligence. The onus would be on the person who wants to suggest that physics can explain, say, thinking, to show it or model it. Belief in leprechauns and belief that physics can explain thinking have something in common …. the complete absence of any evidence for either. Of course, if someone believes that human thought can be fully described by the laws of physics, then they would need to test that theory. Mere conjecture does not constitute science. I think the paper on three subsets of complexity, mentioned in my last post, is apropos here. The laws of nature tend to produce effects that are repeatable and, thus, have very little capability for producing functional information.

    re. Birds as a model for aircraft:

    Birds themselves are an anomaly in the physical system. In the old days, God or the gods were credited with creating them. Now we have the tools to test at least the hypothesis that they were created by intelligent agency, through measuring their functional complexity. Of course, if an intelligent agent is successful in building an artificial bird, that is an exercise in intelligent design. In the same way, any successes we have in designing a new protein, or building an artificial life form, are also examples of intelligent design.

    Re. compressibility and patterns:

    Patterns and compressibility do not distinguish between functional and non-functional information (meaningful information and gibberish). The laws of physics can produce repeating patterns (ordered complexity), such as in a crystal lattice, as well as sequences that so far as we can see cannot be compressed (random complexity), such as atactic polystyrene. For biological life, it matters a great deal whether a sequence is functional (functional complexity), not whether the sequence is compressible or contains patterns.
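    The distinction can be illustrated with a general-purpose compressor standing in for algorithmic compressibility. The three strings below are toy examples, not real sequence data:

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Compressed size over original size; well below 1 means compressible."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)
ordered = b"AB" * 500                                      # crystal-like repetition
noise = bytes(random.randrange(256) for _ in range(1000))  # random complexity
functional = (b"Patterns and compressibility do not distinguish between "
              b"functional and non-functional information. The laws of "
              b"physics can produce repeating patterns as well as "
              b"sequences that cannot be compressed.")

print(ratio(ordered))     # tiny: ordered complexity compresses away
print(ratio(noise))       # about 1: incompressible
print(ratio(functional))  # in between: compression alone cannot tell
                          # meaningful text from the other two cases
```

    Compression separates order from randomness, but nothing in the ratio says whether a string is functional, which is the point being made above.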

    Response to Rob:

    Only a knowledge of the laws of physics is required to determine what constitutes the ground state.

  54. JT:

    I am not stating that animals are not conscious. I believe they are. And as such, they cannot, IMO, be completely explained by current materialistic theories.

    But what I was stating is that animals cannot generate functional complex information: language, software, machines. I am not denying that animals can have some form of language or some form of intelligence, I am just saying that our definition of FSCI is not matched by what they do.

    And finally, my statement:

    “Until differently proven, intelligent agents remain an empirically observable reality, and they behave in a characteristic way, and have properties which cannot be found elsewhere.”

    is in no way an argument from ignorance. I am stating observable things. And I am saying that the “theory” that those observable things (consciousness and intelligence) can be explained on the basis of other observed things (material objects and the inherent material laws) which have different properties and behaviour is just a theory, and needs specific support to be taken into consideration. Where is this an argument from ignorance?

  55. Atom (#52):

    I agree with you, only I would consider CSI as the general set (any kind of specified complex information), and compressible or functionally specified information as two distinct subsets of CSI. But, obviously, it’s just a matter of agreeing on terms and definitions. The substance remains the same.

  56. Folks:

    Most interesting thread!

    Atom, how’s the luminous one?

    PO-Superman II, how’s the sub-orbital flying into Sweden these days? [Did you run into St Nick last Dec 24/5? How does he do that round-the-world trick in one evening?]

    1] of ornithopters and ID:

    First, a point of information: ornithopters have been designed, built and flown. (Cf the video!)

    So, as with Venter on production of bio-information by intelligent design, we know that intelligent designers can make a bird-like flying machine. Intelligent designers are the only OBSERVED originators of bird-like flying machines.

    Now, the project to make a LIVING birdlike flying machine, that’s a hard one for ya. (Maybe, someone can try converting a lizard into a flying animal by genetic manipulation?)

    2] Of Specifications and complex sequences

    Plainly, Wm A D’s point (and he used this example) was to specify a narrow target in a large config/state space, but not by painting the rings around the arrow after it hits.

    Functionality is one way to do that: as KD points out for biopolymers, function is kept by physics, not by bio. So bio can only search for the prize; it cannot directly generate it.

    Compressibility and the like, are other ways to get to narrow targets.

    A third, more general one is: independent, “simply” statable specification of the target zone.

    (a) FSCI hits as “it works,” which is macroscopically recognisable.

    (b) Compressibility works as the specifying program is shorter than the original sequence and specifies it. [I note that this does not get away from complexity, as the program has to be stated in a language, encoded as a signal, and expressed by executing machinery, putting you right back up on complexity.]

    (c) Aesthetically appealing art objects work, as the object is digitally definable and can be simply described: ultra-large, representational, 3-D portraits of four US presidents of note, in a group.

    (d) Convenient card hands work [within the constraint that the deck of cards gives a much smaller space], e.g. 13 spades in a standard deck of 52.

    3] FSC metrics, islands of function and probability

    Here KD et al have put up a metric that allows us to recognise that functionality comes in islands and the islands are subsets of a much larger config space.

    In the case of proteins of a family, the H2N-CHR-COOH building blocks chain on the H2N- and -COOH backbone, hosting the functional groups, R, as branches. The basic chaining chemistry and the functionality are attributes of different features of the blocks.

    Then too, the sequencing is coded in DNA, which is itself chained independently of the active parts of the GCAT monomers.

    So, we have good reason to infer to more or less uniform odds for each possible slot. That only some islands exhibit function is then reasonably modelled by taking the Hazen ratio [M(Ex)/N] as a probability metric.

    But we are not locked into such, as KD discusses on ground states: constraints may shift things from uniform distributions [null state as ground state], and metrics can be made to address that [e.g. weighted sums].

    But as he observes [2007, p.4]

    Physical constraints increase order and change the ground state away from the null state, restricting freedom of selection and reducing functional sequencing possibilities . . . The genetic code, for example, makes the synthesis and use of certain amino acids more probable than others, which could influence the ground state for proteins. However, for proteins, the data indicates that, although amino acids may naturally form a nonrandom sequence when polymerized in a dilute solution of amino acids [30], actual dipeptide frequencies and single nucleotide frequencies in proteins are closer to random than ordered [31]. For this reason, the ground state for biosequences can be approximated by the null state.

    This is of course a very important observation relative to the many (sometimes heated) discussions previously held in and around this blog.

    4] the Durston et al FSC metric:

    Maybe, I can help us a bit on deciphering, per your remarks on marking clear distinctions?

    [KD, kindly correct if I miss a key point.]

    The measure of Functional Sequence Complexity, denoted as Z, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or

    Z = [delta]H(Xg(ti), Xf(tj)) . . . Eqn 6 [Using Z for zeta] [P. 4]

    –> here we imagine a ground [macro-]state which specifies in effect the set of possible sequences, then we jump to an island of function [functional macrostate].

    –> We can see protein chains of given length, and we can see those that are of that length and WORK in the required role

    –> there is a jump in information, here on a per-symbol average basis [which is what H measures in info theory generally]: from the generic sequence-of-length-X state to the in-an-island-of-function state, in effect

    –> Z is naturally in bits [given its components], and since they are functional, we have a measure in Fits.

    5] On a per aa basis:

    PP 4 – 5:

    Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: log2(20) = 4.32]. We use the formula log(20) – H(Xf) to calculate the functional information at a site specified by the variable Xf, such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f [put the chains in parallel with aa codes laid out in cols by corresponding sites]. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function . . . .

    A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure. Since the functional uncertainty, as defined by Eqn. (1), is proportional to the –log of the probability, we can see that the cost of a linear increase in FSC is an exponential decrease in probability.

    –> How to roll yer own.

    6] Functional state probabilities:

    For the current approach, both equi-probability of monomer availability/reactivity and independence of selection at each site within the strand can be assumed as a starting point, using the null state as our ground state. For the functional state, however, an a posteriori probability estimate based on the given aligned sequence ensemble must be made . . . [.]

    [A] set of aligned sequences with the same presumed function, is produced by methods such as CLUSTAL, downloaded from Pfam. Since real sequence data is used, the effect of the genetic code on amino acid frequency is already incorporated into the outcome. Let the total number of sequences with the specified function in the set be denoted by M. The data set can be represented by the N-tuple X = (X1, … XN) where N denotes the aligned sequence length as mentioned earlier. The total number of occurrences, denoted by d, of a specific amino acid “aa” in a given site is computed. An estimate for the probability that the given amino acid will occur in that site Xi, denoted by P(Xi = “aa”), is then made by dividing the number of occurrences d by M, or,

    P(Xi = “aa”) = d/M . . . Eqn (7)

    More specifically, continuing:

    For example, if in a set of 2,134 aligned sequences, we observe that proline occurs 351 times at the third site, then P(“proline”) = 351/2,134. Note that P(“proline”) is a conditional probability for that site variable, on condition of the presumed function f. This is calculated for each amino acid for all sites. The functional uncertainty of the amino acids in a given site is then computed using Eqn. (1)

    [I.e. H(Xf(t)) = –[SUM] P(Xf(t)) log P(Xf(t)) . . . Eqn (1),

    where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f, which is an outcome of the variable (F) . . . , p. 2]

    . . . using the estimated probabilities for each amino acid observed. The Fit value for that site is then obtained by subtracting the functional uncertainty of that site from the null state, in this case using Eqn. (4), log 20. The individual Fit values for each site can be tabulated and analyzed . . . [.]

    The summed total of the fitness values for each site can be used as an estimate for the overall FSC value for the entire protein and compared with other proteins.

    –> he goes on to discuss changes in Z on mutations etc . . .
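    The quoted recipe is short enough to sketch in code. The four-sequence ‘alignment’ below is a made-up toy, not Pfam data:

```python
import math
from collections import Counter

def fsc_fits(aligned: list[str], alphabet_size: int = 20) -> float:
    """Sum of per-site Fit values: fit_i = log2(|alphabet|) - H(site_i),
    with site probabilities estimated as d/M per Eqn (7)."""
    m = len(aligned)
    total = 0.0
    for site in zip(*aligned):            # columns of the alignment
        h = 0.0
        for d in Counter(site).values():
            p = d / m                     # P(Xi = "aa") = d/M
            h -= p * math.log2(p)         # functional uncertainty, Eqn (1)
        total += math.log2(alphabet_size) - h
    return total

# Toy alignment, M = 4 sequences, N = 5 aligned sites:
seqs = ["MKVLA", "MKVLG", "MKILA", "MKVMA"]
print(fsc_fits(seqs))   # ~19.18 Fits: fully conserved sites contribute
                        # the full 4.32 Fits; variable sites contribute less
```

    Note that, as in the paper, fully conserved columns hit the 4.32 Fits/site ceiling, while columns with observed variation contribute less.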

    7] Evolving proteins and changing Z:

    Pp 5 – 6:

    In principle, some proteins may change from a non-functional state to a functional state gradually as their sequences change . . . .

    Intuitively, the greater the reduction in FSC a mutation produces, the more likely the mutation is deleterious to the given function. This can be evaluated using known mutations introduced individually into a set of aligned, wild-type sequences to measure the change in FSC. The results could then be ranked. Operating under the hypothesis that mutations producing the greatest decrease in FSC are most likely to be deleterious, experimental investigations into certain genes with certain mutations could be prioritized according to how negatively they affect FSC . . .

    Of course, the now-famous table of 35 values follows.

    WELL DONE, KD!

    GEM of TKI

  57. kairosfocus:

    thank you for the clear and exhaustive contribution.

    Since we are finally discussing the whole “package” of specified information and its computation, I would like to add a couple of thoughts which I have already stressed in previous discussions, and which remain very important for me:

    1) Of the two properties of CSI (I am using the term here in the more general, inclusive sense, until we agree on nomenclature), it is “specification” which is the real mark of design. So, in FSCI, specification is directly willed and conceived by the conscious intelligent designer, as a form of specific meaning and function to be expressed. Complexity, on the other hand, may or may not be present at significant levels. So, if the specification (function) can be achieved with low complexity, then the designer will be happy to do so. The designed object, in that case, remains designed, but it is a designed (functional) object of low complexity.
    But, when high complexity is necessary to ensure function, human designers are surprisingly good at generating it: see for instance the example of language.
    The high complexity, therefore, is not essential to designed things, but it is essential for objectively recognizing designed things, distinguishing them from possible pseudo-designed things (objects which appear specified but, being of low complexity, could arise by chance in a random system).

    2) To exemplify as simply as possible what I have said in point 1), I propose again a very simple example: a string of digital information (let’s say binary, for simplicity), which corresponds to the digits of pi. Now, let’s suppose that we don’t know for certain the origin of that string: let’s say that it can be read in some physical series of events or objects, and that such a series could have arisen by chance in a random system, or have been designed by someone.

    Now, we look at the string. And let’s suppose we have 3 strings, one of 8 bits, another of 200 bits, and another one of 500 bits.

    Now, here we have the same specification in all three cases: the bits are apparently random and, as far as I know, not compressible, and yet, if we recognize that they correspond to the first n bits of pi, they specify a very useful and important mathematical object. So, in the right context, they are absolutely functional.

    What is the search space of the 3 strings? That’s easy: 2^8, 2^200, 2^500. And the target space, here, is easy to calculate: it is 1 in all 3 cases, because only one string of that length is correct, given the specification. So, the probability of each string (assuming a uniform distribution in the supposed random system) is 2^-8, 2^-200 and 2^-500. And the complexity is 8, 200, and 500 bits.
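    The search-space arithmetic is trivially checked (assuming, as stated, a uniform distribution and a target space of exactly one string):

```python
import math

def complexity_bits(length_bits: int, target_count: int = 1) -> float:
    """-log2(target/search) for binary strings of the given length."""
    return -math.log2(target_count / 2 ** length_bits)

for n in (8, 200, 500):
    print(n, complexity_bits(n))   # 8.0, 200.0, 500.0 bits
```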

    Now, are those three strings specified? Yes. But can we say that they are designed by an intelligent agent, or is the specification in them a pseudo-specification, not connected to design?

    Here the complexity helps. It is easy to see that string number one can easily be a random result, given its very low complexity. And I think that everybody would agree that string number 3 must have been designed.

    What can we say of string number two? Even if, formally, it is very distant from our usual UPB suggested by Dembski, I would definitely consider it designed. And so, probably, would most reasonable people. Indeed, I have often stated that the UPB is really an excessive threshold.

    But what if the complexity is, say, 30 bits? Here many would have doubts. So, that’s why we have to agree on some threshold, and the threshold must be appropriate for the physical system which is supposed to have generated the string randomly. So, different systems will need different thresholds.
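The threshold idea in the paragraph above can be sketched as a toy decision rule (the function name and the trial counts are illustrative assumptions of mine; as noted above, a real threshold must be argued for each physical system):

```python
import math

def design_inference(complexity_bits, trials_available):
    """Infer design only when a string's complexity exceeds the
    probabilistic resources of the generating system, expressed in bits.
    'trials_available' is the number of random attempts the system could
    plausibly have made -- a hypothetical input, chosen per system."""
    threshold = math.log2(trials_available)
    return complexity_bits > threshold

# A system capable of ~2^120 random trials (an arbitrary illustrative scale):
print(design_inference(500, 2 ** 120))  # 500-bit string: True
print(design_inference(30, 2 ** 120))   # 30-bit string: False
```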

    So, I hope this example helps in clarifying the relative roles of specification and complexity, which are often misunderstood.

  58. GP

    Excellent thoughts.

    My only caveat: when we state a probability like 1 in 2^500, in English we call that the “odds” of the event.

    G

  59. ..we went on to publish a paper proposing a measure of FSC…

    In Hazen’s equation, Functional information I(Ex) is defined as
    I(Ex) = – log2[M(Ex)/N] (1)

    …there are only three major areas we have ever observed FSC. One is human languages, the other is human-designed software, and the third is in biopolymers such as DNA and proteins. Something to think about.
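Hazen’s equation (1), quoted above, can be computed directly; here is a minimal sketch in Python, with purely illustrative toy numbers (not measured values for any real protein):

```python
import math

def functional_information(M_Ex, N):
    """Hazen's functional information: I(Ex) = -log2(M(Ex)/N),
    where M(Ex) is the number of sequences achieving function Ex
    and N is the size of the whole sequence space."""
    return -math.log2(M_Ex / N)

# Toy numbers: if 1 sequence in a space of 2^40 performs the function,
print(functional_information(1, 2 ** 40))  # -> 40.0 bits
```

Note that when every sequence is functional (M(Ex) = N), the functional information is 0 bits, as the definition requires.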

    Just some very informal musings-

    It seems ironic to say that a simple law can distinguish life from nonlife but a simple law could not generate life.

    Keep in mind, it is physics that determines which amino acid sequences have stable folds, not biology.

    But I thought that physics could only produce simple repetitive patterns.

    So imagine a retarded child and all he can do is make simple repetitive patterns all day. And yet he can also determine which amino acid sequences have stable folds. Sounds like he and nature are idiot savants.

    Speaking of simple repetitive patterns, I’m thinking of quasars and black holes and galaxies and weather and tornadoes and earthquakes and volcanoes and the rings of Saturn, and rainbows and million upon millions of galaxies and a universe millions of light years in extent. Seems quite an accomplishment for the laws of physics.

  60.

    kairosfocus[28],
    Technical point, since you brought it up: “odds” and “probability” are different. Odds give the relative size of the probability of an event to that of its complement. For example, the probability of rolling 6 with a die is 1/6, so the odds against rolling 6 are (5/6)/(1/6), usually expressed as 5 to 1. The odds in favor of rolling 6 are the reciprocal, 1 to 5. The preposition is usually key:

     1 in 6 probability of rolling 6 = 1 to 5 odds in favor of rolling 6.
    Now I read gpuccio’s post, and the point became moot as he actually writes probability 10^-250, which is the preferred way, mathematically speaking.
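The probability/odds distinction above can be made concrete with a small sketch (illustrative code of mine, using exact fractions to avoid rounding):

```python
from fractions import Fraction

def prob_to_odds(p):
    """Odds in favor of an event: the ratio p/(1-p)."""
    return p / (1 - p)

# Rolling a 6 with a fair die: probability 1/6, odds 1 to 5 in favor.
print(prob_to_odds(Fraction(1, 6)))      # 1/5
print(1 / prob_to_odds(Fraction(1, 6)))  # 5 -- i.e. 5 to 1 against
```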

  61.

    Sorry about a forgotten semicolon that made the previous post look ugly.

    1 in 6 probability of rolling 6
    1 to 5 odds in favor of rolling 6
    5 to 1 odds against rolling 6

  62.

    ok, i give up, html is not cooperating today…

  63. JT:

    KD is talking about Van der Waals interatomic forces, bond rigidity [Proline is the classic on that], etc.

    In the second instance he is speaking of periodic, ordered crystals, not aperiodic info bearing molecules created algorithmically and rolling up the energy hill (ATP is another case in point, due to the rotating turret molecule ATP synthase.)

    PO/Sup II:

    Point taken.

    1 in X odds is a looser way but it is used too.

    GEM of TKI

  64. gpuccio,

    I agree, CSI deals with “specification” in general (which is just a subset of something else), and Dembski uses compressible strings as his particular subset, which is why I said Dembski’s CSI deals with compressible strings.

    But your general point is correct, CSI is the general theory, FSCI is an extremely useful particular application of it, using “function” as the well-defined and measurable subset.

    KF,

    She is good and close to my side always. Today is our one year five month anniversary (a month away from 1.5 years) and we’re still seeking G-d’s guidance to help our marriage grow and still madly in love. So things are blessed.

    Now back to Durston…

  65. Atom

    On the way out the door to a techie for a sick PC.

    Great!

    Greet her from us all at UD.

    G

  66.

    kairosfocus[63],
    Yup. And it doesn’t really make a difference for the tiny probabilities we talk about here. Good to be in agreement with my ludlumesque friend!   SuperPO

  67. KF,

    Sick PC? All you need to know are two things:

    1) Avira AntiVir (Free)

    and

    2) Malwarebytes Anti-Malware (also free)

    I had a dead computer a couple weekends back and norton anti-virus stood by and watched while I got infected. The two above completely removed the infections without having to re-install anything or reformat my system.

  68. Kirk,

    I have a simple question about proteins in general, in an attempt to improve on what may be a little learning on my part. From what I know, there are introns interspersed into a gene which are removed by splicing. The remaining exons are then translated into a protein.

    Somewhere I have heard that multiple proteins may be made from the same gene by splicing together only some of the exons. Thus, one gene can make several different proteins depending upon the exons used. Is this correct?

    And if it is correct, does each exon have its own folding properties, and does the whole protein have different folding properties from its parts? I understand it is larger and the physics may be somewhat different, but does the fact that each exon may fold mean that combinations of the exons will also fold, and can they be predicted from how the individual exons fold?

    This may be naive or I may be talking nonsense. The thought hit me that if folding proteins are very rare in general then the fact that the whole may fold may mean that various combinations of the parts may also fold and be useful.

    If this is not simple and does not fit into what you plan to present then maybe some time in the future you could answer it.

  69. Jerry:

    Eukaryotic protein coding genes are split in multiple exons (sometimes really many of them) separated by (usually longer) introns. The whole protein sequence is reconstituted by splicing out the introns at the level of mRNA (before translation).

    That has nothing to do with folding. Folding is a property of the whole protein molecule, not of a single exon. For instance, if you take human myoglobin, a very simple protein of 153 aminoacids, the coding sequence is split into 3 exons. When they are joined, after splicing out the introns, the resulting mature mRNA is translated, and the protein folds into one single compact fold, the globin fold. So, single exons do not fold, and are not functional elements: the whole protein sequence is the functional element.

    It is true that alternative splicing can give birth to variant proteins. That process is certainly important, and it has brought an end to the classical “one gene – one protein” model. But I don’t think we really understand how alternative splicing is controlled or regulated.

    You must consider that introns are probably extremely important for regulation, although we scarcely understand their role. While protein coding genes represent only 1.5% of the human genome, introns represent more than 30%. The extreme fragmentation of protein coding genes in eukaryotes remains, as far as I know, a fascinating mystery.

  70. Ah Atom:

    I suspect a video RAM hardware headache . . .

    G

    PS: been using AVG for ~ 5 y, with reasonable success.

  71. SuperPO:

    Somehow, it seems from my background I have heard “odds of 1 IN 6” (fractional = probability) and “odds of 1 TO 5” (a weird sort of improper, correspondence “ratio”) both used.

    G

  72. Jerry and GP:

    the fact that with introns and exons we can have multiple proteins coded for with the same DNA strand implies multiple layering of codes.

    That reminds me of the microcontroller programmer’s trick from the bad old days of scant memory — imagine, a friend just reminded me of his early-to-mid 70′s 1 MB RAM for video coding research that cost US$1 million . . . — by which the same storage was interpreted by one framing as code to be executed and by another as data to execute upon!

    I never even TRIED that trick.

    The levels of sophistication of the design of the cell are getting deeper and deeper. (I assume by now we have all seen the NHK video on the self-assembly of the flagellum, where the length of the elbow is set by the uncoiled string length of the protein being sent up the pipe, based on catalytic effects . . .)

    Dah is be SERIOUS engineering, mon!

    G

  73.

    kf[71],
    I suppose you can say odds and mean probability, but the mathematical definition of odds is the ratio p/(1-p).

  74. gpuccio,

    Thank you for your explanation. One of the things that would be nice to have on the site are FAQs about the science of microbiology itself. That may be too much, as it may prove endless. But we frequently talk about technical things, and sometimes with little knowledge.

    What I was trying to understand was: if the whole folds, and I realize how it folds is due to the physics of the attraction and repulsion of the individual amino acids, does that mean that the parts will also fold and potentially be useful? I realize the folding of a sub-protein may be quite different from the folding of the whole protein, since the forces will be different.

    From what I understand, Kirk said it is extremely rare for a protein to fold. And in protein sequence space there are great differences between one foldable island and another. Each island is a set of possibly millions/billions of foldable proteins, each just a little bit different from its neighbor, but eventually you run into neighbors on all sides that do not fold and thus cannot be useful. These islands also consist of sub-proteins of a larger protein that also fold. And the proteins in these islands may also fold completely differently from their neighbors, because the differences between the two could represent a different force set due to just one change.

    So what I am trying to understand is just what these isolated islands consist of. It is one thing to say they are rare, but I was trying to understand this process better, and at a level that is understandable for a non-expert in the field. We have a new tool in our basket and we know so little about it. It is here that Prof O’s expertise could be of use, since we are dealing with instances of a phenomenon, and the question of how one can form the proper probability distributions to analyze them.

    Maybe Prof O should look into this since he could become the expert in this upcoming field of the probability distributions of protein functionality.

  75. Hello all. Due to my involvement in this discussion, I have placed myself in an awkward position, the details of which I would like to keep confidential. To avoid further complications, I have decided to withdraw from contributing further. I do apologize for this. Again, I think it best to keep the details confidential.

  76. I think most of us understand, KD.

  77. Jerry:

    You could take a look at the SCOP classification of proteins. Superfamilies could more or less correspond to your concept of islands.

    There are probably not millions of them, but there are certainly a lot. Individual folds are probably in the range of a few thousand, with about one thousand representing 60-70% of known proteins.

    And yes, proteins which fold are rare, even if nobody knows exactly how rare. And each fold is certainly an island. And there are no “sub-proteins” which fold. Folding is a very complex process, which we still do not understand completely. Indeed, given a primary sequence, it is still very difficult to know if it will fold.

    Some proteins are multi-domain: they are very big, and include more than one fold in their structure. But most proteins have a single domain.

    Folding is not enough for functionality, but it is necessary. Beyond folding, a protein has to have an active site, which is responsible for function, and which the correct folding positions in the correct way. Moreover, many proteins have to undergo conformational changes after binding their ligand, and those changes are essential to function.

    Moreover, the more complex proteins cannot fold spontaneously: they need other very complex proteins, called chaperones, to help their folding. The way the most complex chaperones work is still a mystery.

    And it is still more complex than that. The relationship between primary sequence and folding is very unpredictable. Just to give an example, bacterial hemoglobin, one of the first examples of a myoglobin, has a fold which is almost identical to human myoglobin, but its primary sequence is completely different, sharing only about 20% homology with the human protein. And the function is the same. Other times, just a single aminoacid change will prevent both folding and function.

    And I agree with you, we should talk more about biological matters. ID is much more powerful and self-evident when biological realities are correctly understood.

78. Durston Discont’d?

79. The DNEA (Darwinian Narrative Enforcement Agency) appears to have gotten through to Kirk. It was only a matter of time…

  80. Sounds like the work of one of Canada’s biggest academic assholes -Larry Moran

    http://www.evolutionnews.org/2....._desi.html

  81. Kirk:

    we do understand, but it’s a real pity! I hope we can hear from you as soon as it is convenient for you.

  82. Kirk,

    Thanks for your informative comments. Best of luck.

    Would it be possible for another of our very erudite bloggers to pick up where Kirk left off? I was really hoping to learn more about the absolute limit of nature to generate functional information.

  83.

    Kirk,
    Too bad indeed. I originally requested this new thread so you could address my comments and questions and I’m sorry it got you into an awkward position.

  84. Peter:

    I am not happy that the discussion remains truncated. It is a discussion which, however, will have to be taken up again, and soon enough.

    I don’t know how Kirk was going to guide the discussion, but I would like to suggest a few points which could certainly stimulate contributions.

    1) If we accept the definitions given up to now, functional information can be defined according to Hazen’s equation:

    I(Ex) = – log2[M(Ex)/N]

    Please notice that Hazen is assuming a uniform distribution of the protein sequences, which is the only reasonable position for a biochemical system where all four nucleotides have similar probabilities in each position of the sequence. Anyway, if anybody has objections to that, he should state those objections clearly and explicitly, so that we can discuss them now. I have already invited Prof. Olofsson to comment on that, but he has not yet obliged.

    That said, the problem remains of how to compute the functional state, M(Ex). There are two ways of doing that.

    First of all, I would suggest that for the moment we define the functional state with reference to the specific function defined for the protein we are considering, and, as stated by Kirk, according to a definite threshold of function: Ex is therefore one specific function, measured with reference to a specific minimum level. In other words, Ex is binary (it is either absent or present), and M(Ex) is the number of sequences in the search space for which Ex is present.

    The two ways to compute M(Ex) are: direct and indirect.

    The direct way consists in knowing with some approximation the real number of sequences which will express Ex. While that is at present impossible with absolute precision for any known protein, because we know too little of protein function and of the structure of the protein space, still many considerations, both qualitative and quantitative, can already be made to try to assess at least the range of orders of magnitude which we can expect for M(Ex). Moreover, our knowledge is rapidly increasing due to the data coming from the field of protein engineering. But this subject is very vast, and I would stop here for the moment (but we can deepen any aspect of that issue), to pass to the second way.

    The second way to compute M(Ex) is probably the one Kirk was trying to describe in detail, because it is given in his paper “Measuring the functional sequence complexity of proteins” (with Chiu, Abel, and Trevors), which I invite all those interested to read with great attention.

    In brief, the method consists in measuring the functional information as the reduction of uncertainty in a protein family with respect to the random state. I quote from the paper:

    “The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or

    ζ = ΔH(Xg(ti), Xf(tj)). (6)

    The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information”

    So, just to make an example, if you look at table 1 in the paper, you will see that for ribosomal protein S12, a protein of 121 aminoacids, the analysis performed on 603 different sequences of the same protein in different species shows that, while the ground state has an uncertainty (H) of 523 bits, the reduction of that uncertainty in the functional set is 359 bits: that, therefore, is the value in Fits for that protein according to this method of measurement.
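A rough, simplified sketch of this kind of measurement: sum, over the columns of an aligned protein family, the drop from the ground-state uncertainty (log2 20 bits per site, all twenty amino acids equiprobable) to the observed Shannon entropy of that column. This is only an illustration of the idea; it omits refinements of the published method, and the tiny alignment below is made up, not real data:

```python
import math
from collections import Counter

def fits(alignment):
    """Simplified Durston-style FSC: per aligned column, the reduction in
    uncertainty from the ground state (log2 20 bits) to the observed
    Shannon entropy; summed over columns, the result is in Fits.
    (The published method includes refinements not reproduced here.)"""
    n_seqs = len(alignment)
    length = len(alignment[0])
    total = 0.0
    for i in range(length):
        counts = Counter(seq[i] for seq in alignment)
        h_obs = -sum((c / n_seqs) * math.log2(c / n_seqs) for c in counts.values())
        total += math.log2(20) - h_obs
    return total

# Toy "family": a fully conserved 3-residue alignment of 4 sequences
toy = ["MKV", "MKV", "MKV", "MKV"]
print(round(fits(toy), 2))  # 3 * log2(20), about 12.97 Fits
```

Fully conserved columns contribute the full log2 20 bits each; highly variable columns contribute little, which is why functional constraint shows up as a large Fits value.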

    I would like to invite further discussion on these points. If we accept them, the points still to be discussed are:

    a) defining a threshold for what a random process can achieve in some definite biological system

    b) discussing the possible role of necessity (NS, special configurations of the search space, like in the transition from an existing protein to another, and any other possible model of necessity which can reduce the role of random processes).

    c) discussing the role of function definition in the above procedure, in particular in reference to the often mentioned objection that evolution can attain “any possible function”.

  85. Prof P.

    Why don’t you converse with Kirk directly? Since you are not soiled with our sins, and are on record as part of the anti ID establishment you should not cause Kirk any dismay. Your two articles are credentials enough for you to pursue this further.

    Did you see the last paragraph of my comment #74? There is a bright future in this for you.

  86. I see that the equation from The Durston paper did not appear correctly in the above post. I try again:

    ?E = ?H (Xfa(ti), Xfb(tj))

  87. No, it shows correctly in the preview, but is changed in the process of posting. It shoud be:

    Zeta (greek character) E = Delta (greek character) H, and then the rest.

  88.

    jerry[74,85],
    Thanks for the tip, but I’ll be content with the dull future my current research promises. Conversing directly with Kirk is certainly an option that we might pursue. I understand if he prefers to focus on his research and dissertation, though.

       PO, unsoiled establishment type

  89. Testing Greek:

    &Delta; Δ
    &delta; δ
    &Zeta; Ζ
    &zeta; ζ

    GP, If the above shows up correctly, the issue is most likely with use of the trailing semicolon. The preview (unfortunately) allows its omission, however it is definitely required for proper formatting.

    HTML codes will not show up in the post without the trailing semicolon.

    Απολλως

  90. Apollos,

    thank you.

    So the equation should be:

    ζE = ΔH(Xfa(ti), Xfb(tj))

  91. “PO, unsoiled establishment type”

    Quite right, PO.

    Dr. Fuller, are you listening? Would you care to comment on this?

  92.

    Lutepisc[91],
    That’s jerry’s branding of me. I have to trust him; he is after all a real Swede who knows that it is “lutfisk” and not “lutefisk.”

93. Thanks for that, PO. Yes, cognoscenti spell the fish without the “e” (and, of course, they don’t spell it with a “p” or “c”).

    For those unacquainted with the delicacy, may I point you to a web site which describes the proper way to eat it?

    http://www.shirky.com/writings/lutefisk.html

  94.

    Lutepisc[93],
    Indeed, the secret is in the aquavit. The “e” is a norwegianism.

  95. jerry #68, gpuccio #69

    There are actually examples where the boundaries of exons correlate with the boundaries of protein domains in larger proteins and protein domains are normally capable of folding independently from the remainder of the protein. See: Mingyi Liu and Andrei Grigoriev, Trends in Genetics, Vol. 20, 2004, pages 399-403.
    Title: “Protein domains correlate strongly with exons in multiple eukaryotic genomes – evidence of exon shuffling?”
    Abstract: “We conducted a multi-genome analysis correlating protein domain organization with the exon–intron structure of genes in nine eukaryotic genomes. We observed a significant correlation between the borders of exons and domains on a genomic scale for both invertebrates and vertebrates. In addition, we found that the more complex organisms displayed consistently stronger exon-domain correlation, with substantially more significant correlations detected in vertebrates compared with invertebrates. Our observations concur with the principles of exon shuffling theory, including the prediction of predominantly symmetric phase of introns flanking the borders of correlating exons. These results suggest that extensive exon shuffling events during evolution significantly contributed to the shaping of eukaryotic proteomes.”

  96. rna,

    Thank you. Though I am not sure how much of this I understand. Are you suggesting that the exons are modules, and that they are picked and chosen to form a larger protein by combining the sub-proteins? And that each exon folds on its own?

    If you have some time to provide some layman English to it, it would be appreciated.

  97. jerry:

    domains are functional subunits of proteins, and the concept of domain is similar, but not identical, to the concept of fold. Simpler proteins contain only one domain, while more complex proteins contain many domains. Each domain has its own folding.

    I have not read the whole article quoted by rna, but I think that the general idea is that sometimes, but not always, domain boundaries correspond to exon boundaries (though that does not necessarily mean, I think, that one domain is made of one single exon). That would make the “shuffling” of one domain from one protein to another easier.

    The fact is, domains are functional units of proteins. Domains and folds number in the thousands. They form many thousands of different, and differently functional, proteins. And different proteins form thousands of different multi-protein molecular machines.

    In other words, biological information is layered in levels of ever-increasing complexity, and each of them has functional organization. Darwinian theory would like to interpret all that as a series of lucky “shufflings”: domains come out of lucky shufflings of aminoacids (or codons, if you want); more complex proteins come out of lucky shufflings of domains; molecular machines come out of lucky shufflings of proteins. I am not aware of an explicit shuffling theory for transcriptomes and transcription regulation, but who knows?

    Obviously, all that is easily explained in a design context: domains are functional units, much like objects in object oriented programming. Programming is modular. Advanced programming is even more modular. That’s the way design works.

    But I suppose our darwinist friends would easily explain the whole Windows Vista as the product of “lucky” shuffling… (I know, I know, many of you will agree; well, maybe not with the “lucky” part). :-)
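The object-oriented analogy above can be made concrete with a deliberately toy sketch (pure illustration, no biological claim): the same “domain” modules are reused, unchanged, in different “proteins”:

```python
# Purely illustrative analogy (not biology): reusable "domain" classes
# composed into different "proteins", mirroring the modular-design point above.

class BindingDomain:
    def act(self):
        return "bind ligand"

class CatalyticDomain:
    def act(self):
        return "catalyze reaction"

class Protein:
    """A 'protein' assembled from reusable domain modules."""
    def __init__(self, *domains):
        self.domains = domains

    def functions(self):
        return [d.act() for d in self.domains]

# The same domain modules "shuffled" into two different proteins:
enzyme = Protein(BindingDomain(), CatalyticDomain())
receptor = Protein(BindingDomain())
print(enzyme.functions())    # ['bind ligand', 'catalyze reaction']
print(receptor.functions())  # ['bind ligand']
```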

  98. jerry #96

    you got the idea exactly right: some exons code for a functional protein domain capable of folding and functioning independently of its context, and thereby ‘exon shuffling’ could be a mechanism for creating novel proteins by combining these sub-proteins into novel larger proteins. It is definitely not true for all exons — some of them, for instance, are much too small to code for an independent protein domain — but it holds for a significant fraction. Shuffling around of genetic information is an experimentally observable fact.

    gpuccio #97

    “Obviously, all that is easily explained in a design context: domains are functional units, much like objects in object oriented programming. Programming is modular. Advanced programming is even more modular. That’s the way design works.”

    But it is not the only way design can work. If for instance the medieval churches in europe were designed in that way they would consist of an aggregation of many small houses. But the dome in florence is designed as a single entity. Somehow this grand unified kind of design is not that obvious in nature.

  99. rna:

    Well, maybe the biological designer is more an object oriented software programmer than a medieval engineer. Like many darwinists, you are going beyond the current purposes of ID: you are trying to design an outline of the designer. We in ID usually think we have not yet enough data to do that, but your argument about design modalities is interesting.

    There is another aspect which could be interesting, anyway. Our understanding of biological design is rather limited, because we know something about the effectors (the proteins), but almost nothing about the procedures (the regulation, and the general plan). So, before trying to understand the general style of the designer, we should perhaps be able to understand more of the real design. At present, with just the protein coding part partially understood, and 98.5% of our genome still mysterious, not to speak of any other possible epigenetic source of information, I would say that our understanding is necessarily very partial.

    And moreover, it is not completely true that medieval cathedrals are not modular at all. Have you never seen more than one altar in them? More than one door, or arch, or paintings with a similar religious subject, or similar tombs, and so on? All those are designed modules, which are often “shuffled” in a cathedral, or between cathedrals, to achieve similar functions in different contexts.

    So, maybe our designer can still have something of a medieval engineer, too.

Leave a Reply