
Low Probability is Only Half of Specified Complexity

In a prior post the order of a deck of cards was used as an example of specified complexity.  If a deck is shuffled and it results in all of the cards being ordered by rank and suit, one can infer design.  One commenter objected to this reasoning on the grounds that the specified order is no more improbable than any other order of cards (about 1 in 10^68).  In other words, the probability of every deck order is about 1 in 10^68, so why should we infer something special about this deck order simply because it has a low probability?

Well, last night at my friendly poker game I decided to test this theory.  We were playing five card poker with no draws after the deal.  On the first hand I dealt myself a royal flush in spades.  Eyebrows were raised, but no one objected.  On the second hand I dealt myself a royal flush in spades again, and so on every hand all the way through the 13th.

When my friends objected I said, “Lookit, your intuition has led you astray.  You are inferring design — that is to say that I’m cheating — simply on the basis of the low probability of this sequence of events.  But don’t you understand that the odds of me receiving 13 royal flushes in spades in a row are exactly the same as me receiving any other 13 hands.”  In a rather didactic tone of voice I continued, “Let me explain.  In the game we are playing there are 2,598,960 possible hands.  The odds of receiving the royal flush in spades are therefore 1 in 2,598,960.  But don’t you see, the odds of receiving ANY particular hand are exactly the same, 1 in 2,598,960.  And the odds of a series of events is simply the product of the odds of all of the events.  Therefore the odds of receiving 13 royal flushes in spades in a row are about 1 in 2.5 × 10^83.  But, and here’s the clincher, the odds of receiving ANY particular series of 13 hands are exactly the same, 1 in 2.5 × 10^83.  So there, pay up and kwicher whinin’.”
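The arithmetic in the speech can be checked in a few lines of Python. This is only a sketch: it treats each deal as an independent draw of five cards from a full deck, ignoring the cards dealt to the other players.

```python
from math import comb

# Number of distinct 5-card hands from a 52-card deck
total_hands = comb(52, 5)
print(total_hands)  # 2598960

# Probability of being dealt one specific hand,
# e.g. the royal flush in spades
p_hand = 1 / total_hands  # ≈ 3.85e-07

# The probability of a specific series of 13 hands is the
# product of the individual probabilities
p_series = p_hand ** 13
print(p_series)  # ≈ 4.0e-84, i.e. about 1 in 2.5 × 10^83
```

Note that the narrator's point survives the check: any other fully specified series of 13 hands has exactly the same probability.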

Unfortunately for me, one of my friends actually understands the theory of specified complexity, and right about this time this buttinski speaks up and says, “Nice analysis, but you are forgetting one thing.  Low probability is only half of what you need for a design inference.  You have completely skipped an analysis of the other half – i.e. [don't you just hate it when people use "i.e." in spoken language] A SPECIFICATION.”

“Waddaya mean, Mr. Smarty Pants,” I replied.  “My logic is unassailable.”  “Not so fast,” he said.  “Let me explain.  There are two types of complex patterns: those that warrant a design inference (we call this a ‘specification’) and those that do not (which we call a ‘fabrication’).  The difference between a specification and a fabrication is the descriptive complexity of the underlying patterns [see Professor Sewell's paper linked to his post below for a more detailed explanation of this].  A specification has a very simple description, in our case ‘13 royal flushes in spades in a row.’  A fabrication has a very complex description.  For example, another 13 hand sequence could be described as ‘1 pair; 3 of a kind; no pair; no pair; 2 pair; straight; no pair; full house; no pair; 2 pair; 1 pair; 1 pair; flush.’  In summary, BarryA, our fellow players’ intuition has not led them astray.  Not only is the series of hands you dealt yourself massively improbable, it is also clearly a specification.  A design inference is not only warranted, it is compelled.  I infer you are a no good, four flushin’, egg sucking mule of a cheater.”  He then turned to one of the other players and said, “Get a rope.”  Then I woke up.


72 Responses to Low Probability is Only Half of Specified Complexity

  1. This analogy will only convince people who want to believe it. It poorly matches the biological realities, in which there would only be four kinds of cards in the deck, and advantageous hands (combinations of three cards) would be preserved into the next shuffle.

    If you try a story with those limitations, I think you’ll find it much harder to make your case.

  2. BarryA,

    I still say that the deck of cards analogy doesn’t work because it requires pattern recognition that is not available to all observers. Understand, I’m not arguing against the concept of CSI itself. For someone who’s never seen a deck of cards, and who has no knowledge of the rules (or even the existence) of poker, the concept of a straight flush is meaningless and for that observer, the strict rules of probability–that any five-card hand is no more improbable than any other–will rule. In this case, the precise specification is arbitrary and a matter of foreknowledge, so the construction is necessarily tautological.

  3. Mickey, please please please come to my house for poker tonight.

  4. Barry,

    I think that there’s a good possibility that if I were to accept your kind offer, one of us would be poorer at the end of the evening, but that has nothing to do with the subject at hand. I understand both the rules of poker and the rules of probability. It’s an ignorant observer you’re looking for, which supports my point, no?

  5. Mickey, sorry. You are mistaken. Dembski explains the concept you are getting at as follows:

    The pattern doesn’t need to be given prior to an event to imply design. Consider the following cipher text:

    nfuijolt ju jt mjlf b xfbtfm

    Initially this looks like a random sequence of letters and spaces—initially you lack any pattern for rejecting chance and inferring design.

    But suppose next that someone comes along and tells you to treat this sequence as a Caesar cipher, moving each letter one notch down the alphabet. Behold, the sequence now reads,

    methinks it is like a weasel

    Even though the pattern is now given after the fact, it still is the right sort of pattern for eliminating chance and inferring design. In contrast to statistics, which always tries to identify its patterns before an experiment is performed, cryptanalysis must discover its patterns after the fact. In both instances, however, the patterns are suitable for inferring design.

    Go here for the whole article: http://www.leaderu.com/ftissue.....mbski.html

  6. My opinion is that the term “specified complexity” can be translated to “meaningful.” When someone objects to a piece of specified complexity that most people notice, that person is claiming that the pattern has no meaning. Of course it has meaning; otherwise most people wouldn’t notice it against the other background noise. If it were just another meaningless pattern then most people wouldn’t bother to say anything about it.

  7. If I were to encounter the text string you give as an example, I would first consider context. For example, if someone handed me a piece of paper with it written on it, I would infer that some sort of meaning was involved and proceed to try to discover it. There is no practical situation I can think of where I wouldn’t wonder about the meaning, in fact. In similar fashion, if I were ignorant of the existence of playing cards and their use, I would probably infer some purpose, but have no way of knowing what it might be. Thus if I were to turn over the first five cards and they formed a royal flush, I would have no way of knowing whether or not the pattern was significant, or just the result of random ordering. This is fundamentally different from your cryptographic example, and provides further support that the playing card analogy doesn’t work.

    I understand the basics of the concept of CSI, and I’m not arguing against it. I’m only saying that the card deck analogy is not a good example.

  8. but mickey…

    if someone was playing a game with you and passed out cards….suddenly you realized that whatever the rules of the game were, they continued to deal themselves a hand that beat all of the other hands…you would have more than enough information to recognize CSI.

    i think that’s the point and i think the analogy works well enough for it to stand as a strong one when it comes to what Evolutionists believe happens with DNA.

  9. Mickey, the 2,598,960 number did not come out of thin air. If one examines a deck of cards and does the math on the possible five-card combinations, that number will always be the result. It is a simple step to understand that the royal flush in spades is only 1 of those combinations. Then the math is easy.

  10. interested,

    I’m almost sorry I brought the subject up in another thread. If I am totally ignorant of the existence of playing cards, the reasons for their existence, and the games played with them, under what circumstances might I find myself playing a card game with someone who does know all of those things?

  11. ReligionProf @ #1:
    Kudos to you for your interest in biological realities. I am too, but I’m not a scientist so I’m having a hard time following your analogy. Help me out by specifying an actual, real-world example in which only four kinds of cards exist, from which an advantageous combination of three cards is preserved into the next shuffle.

    Thanks in advance,
    -sb

  12. Interesting post, seems to be getting a bit heated in here – but I am learning a lot.

    So, for my own clarity, is the post saying that low probability is the same as complexity?

    From here, is specified complexity defined as an event with a low probability that is necessary/desired, and does indeed occur?

    So, to keep the analogies rolling let’s use a dart board. Any particular point on a dartboard has a probability near zero; however, the bullseye is desired. The probability of the bullseye is greater than the probability of a single point, but still low compared to the rest of the dartboard.

    Now, if someone hits the bullseye some might blow it off as “luck”, but if it occurs again people often begin to wonder. Either the player is skilled, or “very lucky”. But even these two appear to be probabilities: P(skilled) and P(~skilled)… However when the player consistently hits the bullseye- say 13 times in a row, people understand that P(~skilled) goes to 0… thus a person can infer skill.

    So, one aspect of ID seems to note not only the low probability of a single event occurring, but the likelihood of it recurring?

    If my analogy is correct, what happens if someone desires a different point but consistently hits the bullseye? We can infer some skill in precision, but not accuracy.

    Thanks.

  13. Bork, in the article I link in [5] Dembski uses arrows instead of darts, but the concept is the same. Here it is:

    What is a suitable pattern for inferring design? Not just any pattern will do. Some patterns can legitimately be employed to infer design whereas others cannot. It is easy to see the basic intuition here. Suppose an archer stands fifty meters from a large wall with bow and arrow in hand. The wall, let’s say, is sufficiently large that the archer can’t help but hit it. Now suppose each time the archer shoots an arrow at the wall, the archer paints a target around the arrow so that the arrow sits squarely in the bull’s-eye. What can be concluded from this scenario? Absolutely nothing about the archer’s ability as an archer. Yes, a pattern is being matched; but it is a pattern fixed only after the arrow has been shot. The pattern is thus purely ad hoc.

    But suppose instead the archer paints a fixed target on the wall and then shoots at it. Suppose the archer shoots a hundred arrows, and each time hits a perfect bull’s-eye. What can be concluded from this second scenario? Confronted with this second scenario we are obligated to infer that here is a world-class archer, one whose shots cannot legitimately be explained by luck, but rather must be explained by the archer’s skill and mastery. Skill and mastery are of course instances of design.

    Like the archer who fixes the target first and then shoots at it, statisticians set what is known as a rejection region prior to an experiment. If the outcome of an experiment falls within a rejection region, the statistician rejects the hypothesis that the outcome is due to chance.
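    The pre-set rejection region Dembski mentions can be sketched numerically. The numbers below are hypothetical illustrations, not from the article: 100 shots, a 1-in-100 chance of a lucky bull's-eye per shot, and a 0.001 significance level.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or
    more lucky bull's-eyes under the pure-chance hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 100 shots, 1% lucky-hit chance, alpha = 0.001
n, p, alpha = 100, 0.01, 0.001

# Fix the rejection region BEFORE the shooting: the smallest k
# whose tail probability under chance falls below alpha.
k_reject = next(k for k in range(n + 1) if binom_tail(n, p, k) < alpha)
print(k_reject)  # 6 with these numbers

# 100 bull's-eyes out of 100 lies far inside the rejection
# region, so the chance hypothesis is rejected.
assert 100 >= k_reject
```

    The key point, matching the archer analogy, is that `k_reject` is computed from `n`, `p`, and `alpha` alone, before any arrow is shot.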

  14. Religion Prof, your comment [1] is not germane. I am not attempting to model DNA. My sole purpose is to illustrate the concept of specified complexity.

    Everyone understands on a deep intuitive basis that 13 royal flushes in spades in a row absolutely must be the product of design and not mere chance. What I have done is show that this intuition can be demonstrated rigorously.

    Now, if I were to apply this to biology, I would start at the same place. Everyone who has studied the matter the least little bit understands at an intuitive level that living things appear to be designed. Even arch-atheist Richard Dawkins understands this, as demonstrated by his famous concession that biology is the study of complicated things that appear to have been designed for a purpose.

    Dembski’s project is to demonstrate that this intuition can also be affirmed in a rigorous way.

  15. Mickey, I think I finally understand your problem. You say: “the deck of cards analogy doesn’t work because it requires pattern recognition that is not available to all observers.”

    This statement is simply wrong. The existence of design in no way depends upon your subjective response to it. In other words, the fact that a given observer may not understand that specified complexity exists has nothing to do with whether it in fact exists.

    In my example, assume a non-card player is sitting at the table. The fact that he does not understand that I cheated, and that my cheating was obvious to someone who understands the game, in no way undermines the fact that the pattern is in fact both complex and specified. In other words, both the complexity and the specification exist independently of any observer’s ability to see them.

    If you don’t see this, I can’t help you and we’ll just have to agree to disagree.

  16. Barry,

    Obviously I agree completely that the CSI exists independently but that still would not prevent the non-card player from making a false negative using formalized design detection, would it?

    On another note, there is the falsification of design detection. Let’s say we found a 2001-style monolith on the moon and all the planets. Design would likely be inferred. But suppose later on we discover an unknown process (a Law) that is observed to create these monoliths in space. Similarly, formalized design detection in regards to biology is open to falsification based upon new observations. But just because some minor sub-systems are capable of being produced by Darwinian mechanisms under limited scenarios does not automatically mean that the entire ID scientific program is kaput.

    I like this quote by Behe:

    “I think a lot of folks get confused because they think that all events have to be assigned en masse to either the category of chance or to that of design. I disagree. We live in a universe containing both real chance and real design. Chance events do happen (and can be useful historical markers of common descent), but they don’t explain the background elegance and functional complexity of nature. That required design.”

    Personally I think some ID proponents set the bar too low, which sets up ID for a possible easy embarrassment. I think we should expect to find some valid examples of Darwinian evolution that go beyond our expectations.

  17. Patrick writes: “Obviously I agree completely that the CSI exists independently but that still would not prevent the non-card player from making a false negative using formalized design detection, would it?”

    Agreed, but I understood Mickey to be making a different point than that a false negative is possible if there is insufficient information. I understood him to be saying that unless, in his words, “all observers” understand the pattern, no design inference can be made. This is, as I stated, simply not the case.

    I also agree with your second point, and I think it is very important. In science a design conclusion, like all scientific conclusions, must always be contingent. As Popper said, science never makes absolute statements.

  18. Barry, while a particular order of the cards is highly unlikely, it is not impossible. I, personally, have never liked this angle for that reason.

    What is needed to persuade die-hard ideologues is not merely the improbable, but the impossible. A random shuffle of cards has an equal chance at any arrangement of the cards, but no matter how many times a deck is shuffled, the A on the Ace of Spades will never turn upside down on the paper!

    The blind watchmaker crowd notes that certain kinds of empirically verified genetic changes occur and then illogically extrapolates that to any and all form or function we see in the biological world, despite the absence of empirical validation. We, in the information and engineering sciences, know that you cannot always get from point A to point B by wishful thinking. That something is “plausible” in one’s imagination doesn’t mean a thing unless it is demonstrated to have a “path of actuation.” Until a concept is proved, it is merely an idea. The blind watchmaker crowd seems to be oblivious to this. They assume that there are blind natural chemical pathways from earlier species to later ones that explain the generation of all the cell types, tissue types, organs and body plans.

    Lately I’ve been thinking about a challenge to the blind watchmaker crowd. I provide several output number tables and an algorithm. You tell me what input to the algorithm yields a particular result, or if no input can yield the result. Of course, for extra fun I can make the algorithm self-modifying (in a deterministic way).

    Any takers?

  19. mike1962 says: “Barry, while a particular order of the cards is highly unlikely, it is not impossible. What is needed to persuade die-hard ideologues is not merely the improbable, but the impossible.”

    I can’t agree with you here mike. If our purpose is to falsify materialist accounts of origins, we must do so on probability, not logical impossibility, grounds. The reason for this is that materialists posit time, chance and necessity as the raw materials of their program. If we take away chance in principle, they will say, “we are talking past each other, because you are not dealing with our theory on its own terms.”

    We eliminate chance not in principle, but in practice. That is why the subtitle to Dembski’s book is “Eliminating Chance through Small Probabilities.”

    Note that Dembski never eliminates chance in principle; he eliminates it in practice through the universal probability bound.

    In my example, if I were to deal a 13 hand series to every atom in our galaxy (i.e., to 10^68 atoms) it is a virtual certainty that none of them would get 13 royal flushes in spades in a row. If someone continues to insist that it is nevertheless possible for this to happen, then there is no hope for them and we can safely ignore them.
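    That claim can be checked directly, as a sketch under the stated assumptions: one independent 13-hand series per atom, and 10^68 atoms (the figure quoted in the comment).

```python
from math import comb, exp

# Probability of one specific 13-hand series (e.g. 13 royal
# flushes in spades in a row), ≈ 4.0e-84
p_series = (1 / comb(52, 5)) ** 13

atoms = 10 ** 68  # one series dealt per atom, per the comment

# Expected number of atoms that hit the series
expected_hits = atoms * p_series
print(expected_hits)  # ≈ 4.0e-16

# Poisson approximation: probability that NO atom hits it
p_none = exp(-expected_hits)
print(p_none)  # ≈ 1.0, indistinguishable from certainty
```

    So the chance that no atom receives the series is about 1 − 4 × 10^−16, far stronger than "even money."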

  20. Barry, “I can’t agree with you here mike. If our purpose is to falsify materialist accounts of origins, we must do so on probability, not logical impossibility, grounds.”

    I liked your post, but I think it should be falsified on both fronts. When dealing with the probability question, I see it as in “our court” to demonstrate this positively. When dealing with the possibility question, I see it in “their court” to demonstrate their positive conjecture. I believe it is fruitful to make them chase their tails on both fronts.


    Barry,

    The whole thing with the cards came up because of a question I had asked of you in another thread, where I was responding to DaveScot, who proposed the deck-of-cards analogy:

    I understand your analogy, but don’t see how a group of 52 unique, unchanging objects equates to what goes on in the genome. We can calculate the probability of any given order of a deck of cards precisely, but that’s not possible with mutation and subsequent changes in the genome. Thus my question to BarryA still remains: if you identify CSI strictly by its complexity, how do you escape the tautology?

    As far as I can tell, the question hasn’t been answered.

  22. Mickey, the question has not been answered because it poses a null set. You ask, “if you identify CSI strictly by its complexity, how do you escape the tautology?” The whole purpose of this post is to demonstrate that CSI is not identified “strictly by its complexity.” Look at the title of the post. Complexity (i.e., low probability) is only half the story. The other half is specification.

  23. …if you identify CSI strictly by its complexity, how do you escape the tautology?

    Mickey, you seem to be ignoring the fact that CSI is not identified strictly by its complexity.

    An hour-long presentation of static on your television screen is complex, but can’t for a moment be confused with an hour-long presentation of the history of the Roman Empire, which is complex but also specific. To say that each is just as improbable is a non sequitur.

    If you’re trying to engage this issue seriously and honestly, you need to consider the specification aspect of CSI, Complex Specified Information.

    A deck of cards ordered by suit and rank is complex and specified as opposed to a random arrangement which is complex but not specific.

    All arrangements are not equal simply because the probability is the same. Certain arrangements contain a specification which cannot easily be attributed to chance. There are properties of these arrangements which can objectively be identified.

    Chief among these is the suit/rank arrangement. Here is its specification: Four categories: hearts, diamonds, clubs, spades, each containing a set of indices from 1 to 13: ace, 2-10, jack, queen, king. (This can easily and succinctly be expressed in computer code).

    I’ve just used 22 words to describe the arrangement of every card in a 52 card deck. Try to specify a random arrangement and see what you come up with. Also note that it doesn’t really matter what “random” order you use, they will all be equally non-specific (roughly).
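    Apollos’s “semantic compression” can be given a crude operational test. The sketch below uses zlib’s compressed output size as a rough stand-in for description length; this is not Dembski’s formal measure, just an illustration, and the seed and text encoding are arbitrary choices.

```python
import random
import zlib

# Deck ordered by suit and rank: generated by a one-line rule
ordered = [(suit, rank) for suit in "SHDC" for rank in range(1, 14)]

# A "random" arrangement of the same 52 cards (fixed seed for repeatability)
rng = random.Random(0)
shuffled = rng.sample(ordered, len(ordered))

def description_cost(deck):
    """Rough proxy for descriptive complexity: bytes needed to
    represent the full listing after general-purpose compression."""
    return len(zlib.compress(repr(deck).encode(), 9))

# The regular arrangement compresses better than the random one
print(description_cost(ordered), description_cost(shuffled))
assert description_cost(ordered) < description_cost(shuffled)
```

    The direction of the inequality, not the exact byte counts, is the point: the rule-generated ordering admits a shorter description than a typical random ordering of the same 52 cards.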

    To any others: am I wrong to consider this type of semantic compression a property of specification?

  24. Apollos writes: “am I wrong to consider this type of semantic compression a property of specification?”

    Of course not. This is the very point I make in the last paragraph of the original post.


    Apollos said,

    All arrangements are not equal simply because the probability is the same. Certain arrangements contain a specification which cannot easily be attributed to chance. There are properties of these arrangements which can objectively be identified.

    I may regret this, but I have to ask: if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly? And one more question: suppose I am the head of a secret card-arranging society, and the deck that is in no discernible order to you, has in fact been put in a double-secret special order by me. Which deck displays CSI to you?

  26. Mickey, that’s it. If you have not been convinced so far, your ignorance has been proven to be invincible. Please move along.


    What, no lovely parting gifts? :>)

  28. Hi BarryA et al:

    You are right on the central importance of complex specified information, and the case of 13 Royal Flushes in a row in a poker session is indeed an excellent case in point that shows just how design becomes a superior [and even prudent] explanation relative to chance.

    I comment on some points of note above:

    1] But any shuffle outcome is just as likely as any other . . .

    One problem with this case is that the functionality that is a component of the specification is external to the actual outcome of the shuffling: WE, in playing poker, assign a certain hand a certain high value.

    So, it is “easy” to object that in effect the unique macro-state [Royal Flush] is arbitrary, and “any other microstate [particular ordering of cards] is just as likely.”

    Sure, it is foundational to statistical thermodynamics-type reasoning that any given microstate [here, shuffle outcome] is just as [un]likely as any other.

    But the trick is that once we have identifiable MACROstates that can be independently recognised [at "macroscopic" level], they will normally each include certain numbers of microstates.

    So, when a particular specified and useful macrostate [the Royal Flush here] has a very small statistical weight [number of compatible microstates] compared to the non-royal-flush macrostate, chance is a poor candidate explanation. By definition a royal flush is a poker hand with the ace, king, queen, jack, and 10 all in the same suit; there are only four such hands, and the rest of the 2,598,960 possible hands belong to the non-royal-flush macrostate.

    So, if one goes to a standard poker game and the shuffling is by chance, the probabilistic resources are such that the odds of seeing 13 such RFs in a row are negligibly different from zero. BUT, it is not that hard for a clever agent to conveniently give himself 13 such hands in a row. Therefore, on inference to best explanation, agent action is the best and prudent explanation of the outcome as you report in the OP.

    2] But, that’s not a PROOF

    This is of course not a proof beyond all dispute or objection, but it is the precise sort of thing we have to do every day and in science in order to function in the real world.

    3] But any other state could have been chosen and would work just as well . . .

    The other point of concern is that the functionality is in effect arbitrary [though the fact of compressibility of description is an excellent index that it is meaningful -- most such sets of hands could only be described card by card, for each of 13 hands].

    That is part of why I developed the microjets example discussed here, in Appendix A, point 6 to my always linked.

    For, if we select any arrangement of microjet parts in a vat at random, the overwhelming likelihood is that they will be in a scattered state. A clumped at random state will be much less probable and is best explained by intelligently directed clumping work.

    But even relative to such a randomly clumped state, a flyable configuration will be exceedingly rare, and is best explained in terms of intelligent assembly. So we see clumping work then configuring work to a plan.

    And, if you think such work is a reasonable outcome of chance, you are in effect denying the premise on which thermodynamics rests — i.e., you are being selectively hyperskeptical, as the evidence and otherwise acceptable reasoning are not pointing where you want to go.

    Let us note as well that in directly observed cases of CSI, EVERY case where we directly know the causal story is the product of agency. The stat mech argument simply shows why.

    So, the origin of life, and the origin of body-plan level biodiversity, both of which are well above the threshold of 500 – 1,000 bits of information, are best explained as the product of agency; even though as yet we do not know the way it was done.

    4] But functional states are not that rare . . .

    And, BTW, that is also why Religion Prof’s argument in another thread fails, as he is failing to see that we are dealing with chains of DNA of order 500,000 – 3 billion 4-state elements long [1 megabit to 6 gigabits of storage capacity!], which are tied to functionality that is plainly rare in the configuration space.

    GEM of TKI

  29. Oops: RP’s argument is in this thread!

  30. Mickey

    it requires pattern recognition that is not available to all observers.

    So?

    Recognizing a photomicrograph of an integrated circuit as a designed object requires pattern recognition not available to all observers. Ignorance on the part of any particular observer doesn’t change the truth of the matter.

  31. Dave,

    I understand your point, and have no issues with it. What I was trying to get at originally was that the playing card analogy wasn’t apt in light of what goes on in living cells, due to the restricted nature and size of the deck of cards. No matter. The questions I posed to Apollos in #25 above have yet to be answered, though.

  33. Mickey, re [32]: Because it is a silly question.

  33. I may regret this, but I have to ask: if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly? And one more question: suppose I am the head of a secret card-arranging society, and the deck that is in no discernible order to you, has in fact been put in a double-secret special order by me. Which deck displays CSI to you?

    I thought I answered you twice already.

    Try reading Dembski’s books. That’s called a false negative, which is a valid minor issue with formalized design detection. But we’re really only concerned if there is a false positive.

    While there are specifications that are context sensitive other specifications are independent of culture and such. The flagellum provides motility, for example.

    …..

    Obviously I agree completely that the CSI exists independently but that still would not prevent the non-card player from making a false negative using formalized design detection, would it?

    BTW, just one or two lucky draws aren’t enough to qualify as CSI. That’s why there is a UPB. It’s also why things like the Bible Code where people try to find secret messages in the Bible or other books like Moby Dick do not count. It also prevents conspiracy theorists who look for secret messages in newspapers (or other such examples) from using ID to support them.

  34. Hi Apollos,

    “Certain arrangements contain a specification which cannot easily be attributed to chance”

    As I see it, specificity is the *opposite* of chance. Specifications therefore “by definition” cannot be produced by chance. The card deck however has its endogenous/residual low level specificity that could, with very low probability, produce the configuration that we call “a royal flush.” The “royal flush” and other such winning configurations, however, are not legitimate members of the mere set of cards (and a mere uniform probability distribution over the set of possible card configurations). The “royal flush” is a part of a “superset” world beyond the mere “set” of cards. This “superset” world is the set of human gaming specifics and associated human monetary values and specific interests.

    The “specification,” as I see it, comes from this “superset.” I am thinking that a rigorous probabilistic analysis would involve combining the smaller (card) set and the larger superset into a single set in which the smaller card set provides only a very small level of uncertainty (due to its residual specificity).

  35. Mickey #25 — Ok, I’ll bite.

    ” . . .if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly? And one more question: suppose I am the head of a secret card-arranging society, and the deck that is in no discernible order to you, has in fact been put in a double-secret special order by me. Which deck displays CSI to you?”

    The deck with no discernible order was more likely to have occurred randomly.

    By the way, my view of your double-secret society is this: According to its own charter, it cannot exist. It cannot exist because its charter is to order cards whose order is not discernible.

    To order is to select, to select implies meaning, meaning implies understanding, understanding requires discernment. But, you are not allowed discernment. I am not talking about the poor sap who happens across the two decks. I am talking about YOU.

    You didn’t say that the deck’s order was not discernible by me UNTIL you changed the story later. You wrote, somewhat too cleverly, “one in no discernible order”. Either the deck has an order that is discernible (even if not by me right then), in which case it was not properly described, OR its order is completely indiscernible, in which case you could not have ordered it.

    Oh, and to answer your second question, the ordered deck displays “more” CSI because the first deck is not there.

  36. BarryA,
    Love the poker skit.

    The existence of design in no way depends upon your subjective response to it. In other words, the fact that a given observer may not understand that specified complexity exists has nothing to do with whether it in fact exists.

    I think Mickey’s question is not “doesn’t SC’s existence depend on an observer’s knowledge of independent patterns?” but rather “doesn’t the warrant to infer SC depend on the inferrer’s knowledge of independent patterns”? Or if that’s not what he meant, it’s a natural next question.

    And the answer, if I am following this correctly, is “Yes it does. And we humans (those who are willing to pay attention) fall into the category of observers who have knowledge of independent patterns that can be observed in nature (e.g. in the cosmos and in bio systems). Or to put it another way, systems in nature conform to independent patterns (such as IC?) that we humans have knowledge of. So it is warranted for us to infer SC and ID.” And a big part of the ID project is demonstrating that such patterns are found in nature and are independent (specified).

    Right?

    I believe I’m agreeing with BarryA and DaveScot, just perhaps putting things in a different way that may speak to the mental state I would have if I were asking Mickey’s question. As far as I can tell, he is asking in all honesty, and his question deserves to be treated seriously. (As you have been.)

    P.S. Check spelling of “dealt”! ;-)

  37. Brookfield, I think you are putting a needless layer of complexity on this. Yes, I put the cards in the context of a poker game to make the story interesting. But even if there had never been a card game called “poker” and what we call a “royal flush in spades” were nothing but an interesting subset of cards, the math would be the same.

  38. Great thread… thank you very much, kairosfocus. I think you answered all objections excellently and beyond any reasonable rebuttal.

  39. Mickey:

    If there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly?

    Just to clarify the final clause, do you mean “which would be more likely to occur if we randomly shuffled decks of cards” or “which could be more confidently imputed to mere chance, given that both had occurred?” I’ll assume the latter. Given that you’ve said the (unmentioned) observer cannot discern the order of the first deck, but presumably knows enough to recognize the specified order of the second, he cannot impute the order of the first deck to anything but chance with any confidence.

    And one more question: suppose I am the head of a secret card-arranging society, and the deck that is in no discernible order to you, has in fact been put in a double-secret special order by me. Which deck displays CSI to you?

    Dembski writes about this in The Design Inference in terms of “background knowledge” (see pp. 16-18, 45, 69-73; especially section 3.3 “Background Information”). “… briefly, probability is a relation between background information and events.” … “Change the background information and the probability changes as well.”
    To your hypothetical observer (me) who has no background knowledge of the “non-discernible” order, CSI cannot be inferred for the first deck. But of course it can be inferred by a society member who has the right background information.

  40. Mickey:

    If I were to encounter the text string you give as an example, I would first consider context. For example, if someone handed me a piece of paper with it written on it, I would infer that some sort of meaning was involved and proceed to try to discover it. There is no practical situation I can think of where I wouldn’t wonder about the meaning, in fact.

    What difference would any clues about the context make, so long as you were uncertain of any meaning? Why would you, after seeing the message revealed by the Caesar decoding scheme, then think it was designed by a person (or other intelligence)?

    Different scenario: What if these characters came printed on a paper from what you believed was a random character generator, and decoding afterwards revealed the same phrase? This scenario is hypothetically isolated from the first.

  41. BarryA,

    I think you might find that the following article about guided evolution matches up with your analogy; even though accounts of how evolution may have begun still roll off the tongue, they’re guiding it by ID.

    A few paragraphs…

    Mario Ruben at the Forschungszentrum Karlsruhe GmbH (FZK) explains that this observation of molecular organization at surfaces may lead to further insight into how simple, inanimate molecules can build up biological entities of increasing structural and functional complexity, such as membranes, cells, leaves, trees, etc.

    Dr. Mario Ruben’s research team at FZK is responsible for designing molecules with built-in instructions, which when read out activate the self-selection process. He comments: “Spontaneous ordering from random mixtures only occurs when built-in instructions are carefully designed and sufficiently strong to initiate successful self-selection.”

    Of course, this is also in a pristine lab environment as well. But I think it bodes well for ID type research. “Built-in instructions” that facilitate “selection” criteria.

    Evolution in the Nano World
    http://www.mpg.de/english/illu.....200710292/

    Max Planck Institute;
    HT: CreationSafaris.com

    Can you say FrontLoading :)

  42. Why does this work with the analogy?

    Because the cards are pre-fabricated with CSI, as Appollos stated above; though the selection process is external, it is open to many interpretations dependent upon rules-based systems as well.

    The project of Nano Self-Assembly has the intelligent selection process built-in. Undoubtedly, it can be expanded upon, feedback loops put in place for error checking, and agents of communication can be put in contact that act differently for external purposes independent of the prefab instructions.

    Too cool… Am I close? Or am I off base?

  43. My whole point here has been to point out that there are enough situations where design isn’t discernible without foreknowledge, which means (at least to me) that the concept of CSI involves inescapable circular reasoning. How do you know it’s very complex? It’s designed. How do you know it’s designed? It’s very complex.

    The very idea of “specification,” it seems to me, requires foreknowledge that there is a source of such orders, as in the deck of cards ordered by rank and suit. If we had no foreknowledge of such things, no order could be differentiated from random orders.

  44. Even though the first deck being ordered by rank and suit is impressive (52! = 8.06581752×10^67), that still does not exceed Dembski’s UPB of 10^150, or 500 informational bits. Now we could make a weak design inference (aka a police investigation) but not an ID-based design inference if this was a one-time shuffle to win a jackpot. In that scenario we would presumably be able to discover the mechanism for potential cheating, so we could use that design/designer-detection method instead of ID. It’s not as if ID methods are the only way to detect design.
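    As a quick sanity check on those figures, here is a minimal Python sketch (standard library only) that computes 52! and its size in bits, confirming that a deck ordering falls far below the 500-bit threshold mentioned above:

    ```python
    import math

    # Number of distinct orderings of a standard 52-card deck.
    orderings = math.factorial(52)          # 52! ~ 8.0658e67

    # Information-carrying capacity of a deck ordering, in bits.
    bits = math.log2(orderings)

    print(f"52! ~ {orderings:.4e}")         # ~8.0658e+67
    print(f"log2(52!) ~ {bits:.1f} bits")   # ~225.6 bits, well under 500
    ```

    So a single shuffled deck, taken alone, carries about 226 bits of contingency, consistent with the claim that it does not reach the 500-bit bound.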

    EDIT: For the jackpot scenario I’m presuming the prize winner would be required to shuffle an entire deck, and the prize would be awarded if a contestant managed to get some sort of combination that is close to 1 in 10^8 (around the odds of Powerball). By turning up this result the contestant is essentially providing a result that is overkill for the terms of the prize. So although the guy might have gotten really, really lucky, they will still investigate to see if it was rigged somehow.

    Saw this interesting article:

    http://creationevolutiondesign.....ne_30.html

    We can accept a certain amount of luck in our explanations, but not too much…. In our theory of how we came to exist, we are allowed to postulate a certain ration of luck. This ration has, as its upper limit, the number of eligible planets in the universe…. We [therefore] have at our disposal, if we want to use it, odds of 1 in 100 billion billion as an upper limit (or 1 in however many available planets we think there are) to spend in our theory of the origin of life. This is the maximum amount of luck we are allowed to postulate in our theory. Suppose we want to suggest, for instance, that life began when both DNA and its protein-based replication machinery spontaneously chanced to come into existence. We can allow ourselves the luxury of such an extravagant theory, provided that the odds against this coincidence occurring on a planet do not exceed 100 billion billion to one. [Dawkins, R., “The Blind Watchmaker,” Norton: New York, 1987, pp. 139, 145-46]

    I find that interesting considering Koonin’s comments regarding the unguided Origin Of Life (OOL) scenarios; as a conservative estimate he calculated a probability of 10^-1018 that such a system could have arisen.

  45. My whole point here has been to point out that there are enough situations where design isn’t discernible without foreknowledge, which means (at least to me) that the concept of CSI involves inescapable circular reasoning. How do you know it’s very complex? It’s designed.

    Err…no. Complexity can be calculated without knowing whether something is designed or not. In fact, with the explanatory filter the complexity is calculated without presuming design or no design. A non-designed object can also be very complex.

    How do you know it’s designed? It’s very complex.

    And, again, that ignores specification.

  46. No, it doesn’t ignore the specification. It assumes the specification. “Very complex” reaches some undefined (or very vaguely defined) point where design is assumed, without knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally.


  47. “Very complex” reaches some undefined (or very vaguely defined) point where design is assumed, without knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally.

    Specification does not equate to not “knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally.”

  48. Patrick (46)-

    What can you calculate the complexity of? I can’t figure out how to calculate the complexity of anything.

  49. For calculating the informational bits using 8-bit single-byte coded graphic characters, here is an example: “ME THINKS IT IS LIKE A WEASEL” is only 133 bits of information (when calculated as a whole sentence; the complexity of the individual items of the set is 16, 48, 16, 16, 32, 8, 48, plus 8 bits for each space). So aequeosalinocalcalinoceraceoaluminosocupreovitriolic would be 416 informational bits. The specification is that it is an English word with a specific function. That specific function does not have any intermediates that maintain the same function. Here we have a situation where indirect intermediates are well below 500 informational bits and thus there is nothing to select for that will help much in reaching the goal. Thus this canyon must be bridged in one giant leap of recombination of various factors, making it difficult for Darwinian mechanisms. Even though that is not 500, I would still be surprised if that showed up in a program such as Zachriel’s unless the fitness function was designed in a certain manner.

    For more on calculating such things refer to Dembski’s work.
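    For readers who want to reproduce the 8-bits-per-character tallies above, here is a small Python sketch (the helper name `bits_8bit` is mine, purely for illustration):

    ```python
    def bits_8bit(text: str) -> int:
        """Informational bits at 8 bits per single-byte coded character."""
        return 8 * len(text)

    # The 52-letter word from the comment: 52 * 8 = 416 bits.
    word = "aequeosalinocalcalinoceraceoaluminosocupreovitriolic"
    print(bits_8bit(word))                       # 416

    # Per-word tallies for the WEASEL phrase: 16, 48, 16, 16, 32, 8, 48.
    for w in "ME THINKS IT IS LIKE A WEASEL".split():
        print(w, bits_8bit(w))
    ```

    This only measures raw storage capacity; whether the string is *specified* is a separate question, as the surrounding comments discuss.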

  50. Mickey:

    you stated:

    “if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly”

    The answer is simple: the probability is the same for both decks, that is, a very low probability.

    Now, just imagine that the exact sequence of the first deck is communicated to you in advance, and then the deck of cards comes out exactly as it was said to you: would you still believe that it happened in a random way? No (at least I hope, for your sake). That’s an example of pre-specification.

    Or just suppose that the deck of cards comes out in perfect order. Would you still believe that it was correctly and randomly shuffled? No (at least, I hope, for your sake). That’s an example of specification by compressibility.

    Or just suppose that the cards are binary, 0 and 1, and are more numerous (a few hundred). Suppose that the deck of cards, in the exact order, can be written as the binary code of a small software program, and that such a program works as an ordering algorithm. Would you still believe that the deck of cards was really random? No (at least, I hope, again for your sake). That’s an example of specification by function.

    Genomes and proteins are all specified by function. They all exhibit CSI, of the highest grade. It is simple. Those who try to speculate on hypothetical contradictions of the concept of specification are completely missing the power, beauty and elegance of the concept itself. And its beautiful, perfect simplicity.
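    gpuccio’s pre-specification example can be simulated on a toy scale. The sketch below (my own illustration, using a hypothetical 5-card mini-deck so the probabilities are observable) announces one order in advance, then counts how often random shuffles hit it:

    ```python
    import random

    random.seed(0)                      # fixed seed for reproducibility
    deck = list(range(5))               # 5! = 120 possible orders
    target = [3, 1, 4, 0, 2]            # the pre-specified order (arbitrary)

    trials, hits = 100_000, 0
    for _ in range(trials):
        shuffled = deck[:]
        random.shuffle(shuffled)
        hits += (shuffled == target)

    print(hits / trials)                # ~1/120 ~ 0.0083
    ```

    Every order is equally improbable, yet the *announced* order turns up at just the expected 1/120 rate; with 52 cards that rate drops to 1/52!, which is why an immediate match with a pre-specification warrants a design inference.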

  51. gpuccio said,

    Now, just imagine that the exact sequence of the first deck is communicated to you in advance, and then the deck of cards comes out exactly as it was said to you: would you still believe that it happened in a random way?

    I would believe that randomness was possible, but highly unlikely.

    Now a question for you: Shuffle a deck of cards thoroughly, then note the order. Now keep shuffling and noting the order. How long do you believe it will be before the original order is encountered again?

  52. Hi BarryA

    “Brookfield, I think you are putting a needless layer of complexity on this. Yes, I put the cards in the context of a poker game to make the story interesting.”

    Sorry if I am needlessly complexifying things. That was not my intent. I am looking for a way of explaining SC that is less prone to confusion and possible detractor obfuscation. The poker story was great and many can relate to it, but I am thinking maybe we could be describing such situations in terms of the probabilistic resources from randomly shuffled cards — set #1 (hideously low) — versus the probabilistic resources from the “superset” #2 (significantly higher), and a subsequent inference to the best explanation… with superset #2 (intelligent design) as the winner…?

    While Granville’s notion of macroscopic improbability is quite good it doesn’t seem to work for nanotechnologies that are both SC and microscopic. Or perhaps I am missing something?

  53. I can see from gpuccio’s thoughtful response that I had not made myself clear concerning the two decks.

    “if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly”

    “The answer is simple: the probability is the same for both decks, that is a very low probability.” — gpuccio (51)

    What I should have conveyed:

    The suited and ordered deck is face up (that is how we know it is suited and ordered), but the deck with no discernible order is by definition face down. If it were face up, then an order would be discerned. I thought this clearly to myself, but failed to type it in, sorry. This definition should further sew up the reason why the secret society can not exist.

    Let’s nail down the specification thing. Mickey wrote, “Shuffle a deck of cards thoroughly, then note the order.” By “note the order” I think what you are doing is specifying a target. It really doesn’t matter how you formed the sequence. What matters is that it was FIRST specified THEN sought.

    Probability stories like these will eventually get me hoisted by my own petard, but I am quite sure that the action of selection (even if generated by chance, i.e. shuffling) generates a target that has been specified.

  54. Tim,

    What matters (to me at least) is the question I asked. Given a target arrangement of 52 cards, and continual reshuffling, how long (in terms of reshuffles) should it take before the target order is repeated, given that the odds are about 1 in 10^68?
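    Mickey’s question has a standard answer: the number of shuffles until the target order first reappears follows a geometric distribution with success probability 1/52!, so the expected wait is 52! shuffles. A rough Python sketch (the billion-shuffles-per-second rate is purely my assumption, chosen to give a sense of scale):

    ```python
    import math

    p = 1 / math.factorial(52)          # chance any one shuffle hits the target
    expected_shuffles = 1 / p           # geometric distribution: mean wait = 1/p

    # Hypothetical rate: one billion shuffles per second, nonstop.
    shuffles_per_year = 1e9 * 3600 * 24 * 365
    years = expected_shuffles / shuffles_per_year

    print(f"~{expected_shuffles:.2e} shuffles, ~{years:.1e} years")
    ```

    Even at that absurd rate the expected wait is on the order of 10^51 years, dwarfing the age of the universe (~1.4 × 10^10 years).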

  55. Note that I should clarify “shuffling” to mean a random rearrangement of the cards, with “random” meaning that every possible order is equally likely with each shuffle.

  56. Tim

    If there are two decks, one perfectly ordered by suit and rank and one with no discernible order, the one with no discernible order is the more likely to be generated by a random shuffle. The reason: there are gazillions of possible arrangements that display no discernible order and very few that are perfectly ordered.

    This is actually very analogous to coding genes. There are 52 different cards in a standard deck while there are 64 different codons (nucleic acid triplets). Genes are further complicated because they have no fixed length and may be thousands of codons long. There are gazillions of codon sequences that don’t fold into potentially biologically active molecules and few that do consistently fold into a biologically active molecule. Complicating it even further is that biologically active proteins don’t exist in a vacuum but must fold in such a way as to precisely fit (in at least five dimensions – 3 spatial dimensions plus hydrophobic and hydrophilic surfaces) the shapes of other proteins and other non-protein molecules they need to grasp and release. The folding process is so complex that being able to predict it is the Holy Grail of biochemistry.

    So, while any gene sequence is as likely as any other from a randomly generated string of codons the odds of getting a gene that codes for a biologically active protein from a randomly generated sequence are very remote because of the ratio of useful sequences to useless sequences. Any randomly generated gene is astronomically unlikely to be of any biological use just as any randomly ordered deck of cards is astronomically unlikely to exhibit any perfect orderings.
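    Dave’s ratio of useful to useless sequences can be put in card terms. The sketch below uses a hypothetical, deliberately generous count of “perfectly ordered” decks — 4! orderings of the suits, each suit running either ascending or descending — which is my own convention for illustration, not anyone’s official definition:

    ```python
    import math

    # Hypothetical count of "perfectly ordered" decks: suits in any of
    # 4! orders, each suit's 13 cards ascending or descending.
    perfect = math.factorial(4) * 2**4      # 384 arrangements
    total = math.factorial(52)              # ~8.07e67 arrangements in all

    print(f"ratio ~ {perfect / total:.1e}")
    ```

    The ratio comes out near 5 × 10^-66: however generously you define “ordered,” the ordered arrangements are a vanishingly small island in the configuration space, which is the “signal to noise ratio” idea in the next comment.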

  57. DaveScot said:

    “Any randomly generated gene is astronomically unlikely to be of any biological use just as any randomly ordered deck of cards is astronomically unlikely to exhibit any perfect orderings.”

    This is the heart of the matter as I understand it, relating to Mickey’s objection about determining CSI based on a perfectly ordered deck.

    Am I correct in assuming it possible to develop a reasonable “signal to noise ratio” for a deck of playing cards? This should perfectly illuminate the “equal probability” obfuscation.

  58. By the way I think this is essentially what William Brookfield was getting at in #53 above.

  59. Interested (and BarryA and Dave):

    First, Interested, thanks. (I should also again thank the former UD and regular ARN commenter Pixie for going through a long exchange with me on the subject at my own blog.)

    BarryA

    I think the problem here has been well addressed by GPuccio in other threads, when he pointed out that specification comes in different flavours, but of course must be conjoined with informational complexity and contingency [i.e., in effect, info storage capacity beyond 500 – 1,000 bits, the latter taking in effectively all practical cases of archipelagos of functionality, not just the unique functional state in 10^150 states that the first does].

    GP at no 51 in the Oct 27 Darwinist predictions thread:

    Specification can be of at least 3 different kinds:

    1) Pre-specification: we can recognize a pattern because we have seen it before. In this case, the pattern in itself is probably random, but its occurrence “after” a pre-specification is a sign of CSI (obviously provided that complexity is also present, but that stays true for each of the cases).

    2) Compressibility: some mathematical patterns of information are highly compressible, which means that they can be expressed in a much shorter sequence than their original form. That is the case, for instance, for numbers like 3333333333, which can be written as “10 times 3”. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand. In this case, specification is in some way intrinsic to the specific pattern of information; we could say that it is inherent in its mathematical properties.

    3) Finally there is perhaps the most important form of specification, at least for our purposes: specification by function. A few patterns of information are specified because they can “do” something very specific, in the right context. That’s the case of proteins, obviously, but also of computer programs, or in general of algorithms. In this case specification is not so much a characteristic of the mathematical properties of the sequence, but rather of what the sequence can realize in a particular physical context: for example, the sequence of an enzyme is meaningless in itself, but it becomes very powerful if it is used to guide the synthesis of the real protein, and if the real protein can exist in a context where it can fulfill its function. Function is a very objective evidence of specification, because it does not depend on pre-conceptions of the observer (at least, not more than any other concept in human knowledge).

    So, this is the theoretic frame of CSI: complexity “plus” specification. And, obviously, the absence of any known mechanical explanation of the specific specified pattern in terms of necessity (that is, we are observing apparently random phenomena). The summary is:

    a) If you have a very complex pattern (very unlikely) and

    b) If no explanation of that pattern is known in terms of necessity on the basis of physical laws (in other words, if that pattern is equally likely as all other possible patterns, in terms of physical laws, and is therefore potentially random) and

    c) If that pattern is recognizable as specified, in any of the ways I have previously described:

    then

    we are witnessing CSI, and the best empirical explanation for that is an intelligent agent.
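    GP’s second kind of specification, compressibility, can be roughly illustrated with an off-the-shelf compressor as a crude stand-in for algorithmic compressibility (a sketch only, not a rigorous Kolmogorov-complexity measure):

    ```python
    import random
    import zlib

    random.seed(0)                                         # reproducible
    repetitive = b"3" * 1000                               # "1000 times 3"
    noise = bytes(random.randrange(256) for _ in range(1000))

    # A highly patterned sequence compresses to a handful of bytes;
    # a random sequence of the same length barely compresses at all.
    print(len(zlib.compress(repetitive)))                  # tiny
    print(len(zlib.compress(noise)))                       # close to 1000
    ```

    The patterned string behaves like GP’s 3333333333 example: it has a short description, while the random string’s shortest description is essentially itself.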

    As Dave just pointed out, biofunctionality is a macroscopically recognisable criterion. Second, it isolates islands and archipelagos of microstates of various protein and DNA sequences that are exceedingly rare individually and collectively in the overall configuration space.

    Thus, we see a natural emergence of specification in the functional sense, and a context of complexity in the sense of high information storage capacity plus contingency. For instance, DNA strands for life systems start at about 500 000 base pairs [~ 1 Mbit], or a config space of 4^500k ~ 9.9 * 10^301,029. Similarly, a typical 300 element protein has 20 state elements with a config space of 20^300 ~ 2.04 * 10^390; there are hundreds of such in a living cell, all of which must function together according to a complex set of algorithmic, code-based life processes.
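    Those two config-space figures can be checked in a few lines; working with log10 avoids materializing the astronomically large integers:

    ```python
    import math

    def sci(base: int, exp: int):
        """Return (mantissa, decimal exponent) of base**exp without overflow."""
        log10 = exp * math.log10(base)
        e = math.floor(log10)
        return 10 ** (log10 - e), e

    print(sci(4, 500_000))   # ~(9.9, 301029): 4^500k ~ 9.9 * 10^301,029
    print(sci(20, 300))      # ~(2.04, 390):   20^300 ~ 2.04 * 10^390
    ```

    Both results match the figures quoted in the comment.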

    The odds of such bio-functionally specified, fine-tuned, tightly integrated complex information [including the codes and the algorithms] emerging or evolving all together in the same space of order ~ 1 micron, by undirected mechanical necessity and/or the random processes known to dominate the molecular-scale world ONLY, in the gamut of our observed cosmos, are negligibly different from zero. [And the resort to quasi-infinite-cosmos-as-a-whole proposals is after-the-fact, worldview-saving ad hoc speculation; not science. It is a backhanded acknowledgement of the force of the point.]

    And yet we KNOW that such FSCI is routinely produced by agents all the time.

    Now, what is happening with the card example of such FSCI is not that the basic principle is wrong or improperly put, but that the fact that the functionality is arbitrary [we humans made up the game] invites red herrings leading out to easily ignitable strawman objections that, when burned, cloud [and sometimes poison] the atmosphere.

    So, we need a “de-confuser.”

    That is why I took up the late great Sir Fred Hoyle’s classic tornado-in-a-junkyard example and turned it into a case of sci-fi nanotech that is within reach of molecule-scale processes and forces such as brownian motion, and that makes the same basic point; i.e. I have reduced the scale so we don’t need to think of tornadoes, just ordinary, well-known, well-understood statistical mechanical principles that underlie thermodynamics. (It helps to note that even Dawkins speaks of the tornado case with respect, too.)

    The result? The same one that the card shuffling case makes, and the same one that the Caputo election fraud case makes. Not surprising, actually, once you understand what a functional specification means in a really large config space. [This makes the proverbial finding-the-needle-in-a-haystack challenge seem an easy task! BTW, the tired-out lottery objection also fails: the vastness of the config spaces means that we just don’t have enough search opportunities to reasonably expect to get to the lucky number, no matter how many tickets we print. Indeed, even if every atom in the observed universe [~10^80] was a ticket and every 10^-43 seconds we did another draw across its lifespan [~13.7 BY to date by generally accepted estimates], that is not going to be nearly good enough. That is the point of Dembski’s UPB. Real lotteries are carefully designed to generate just enough winners to encourage enough gamblers to make unconscionable profits . . . and so are insurance policies and other well-designed option trades.]
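    The bracketed back-of-envelope search-resources argument can be made explicit. The sketch below uses only the figures quoted in the comment (10^80 atoms, one draw per 10^-43 seconds, ~13.7 billion years of cosmic history):

    ```python
    # Total draws available if every atom in the observed universe ran one
    # trial every 10^-43 seconds for ~13.7 billion years.
    atoms = 1e80
    seconds = 13.7e9 * 3.1536e7          # ~4.3e17 s of cosmic history
    draws = atoms * (seconds / 1e-43)    # total search opportunities

    print(f"~{draws:.1e} draws")         # on the order of 10^140
    ```

    Even this wildly generous count of search opportunities comes out around 10^140, still short of Dembski’s 10^150 bound — which is exactly the logic behind the UPB.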

    Onlookers . . .

    observe how, over many months now, there are no “biters” on the microjets case who have put forth a cogent rebuttal. [Cf my always linked app 1, section 6.]

    But by contrast, observe how readily the usual tired out talking points on playing cards and the like are trotted out. [Some months ago I was similarly astonished to see the sort of objections that were made to Dembski's citing of the classic NJ Caputo election scam!]

    I contend that the Cards case, the Caputo case and the microjets and nanobots case are all about the issue of getting to FSCI — a relevant subset of CSI — by chance processes plus mechanical necessity only.

    In each case, the complexity [info carrying capacity: contingency plus high enough number of bits of potential storage] is such that random searches run out of probabilistic resources and the idea of “natural selection” across intermediate steps is either irrelevant, or fails the criterion that each such alleged intermediate must also be functional.

    But of course, agents routinely produce FSCI as a subset of CSI. For instance, the posts in this thread are examples.

    Stronger than that, in EVERY actually observed case of FSCI, where we directly observe the causal process, the source is an agent.

    Thus by very strong induction anchored in direct observations of such causal chains, and the underlying statistical mechanics that underpins thermodynamics, as well as general probability considerations, the FSCI observed in cells is a product of agency.

    And, in that the increments in FSCI to get to the many diverse body plans with intermediate functional states run into the same barrier, body plan level biodiversity is similarly a product of agency.

    Even further, on the observations of physics and cosmology, the fine-tuned, tightly coordinated physics of our cosmos that is a requisite for life and even for science as we know it, indicates the cosmos is the product of an exceedingly intelligent and powerful agent who intended to produce life — including life capable of science such as we are.

    Of course, committed evolutionary materialists will find such inferences objectionable [and may even try to redefine science to dodge them], but that is plain out and out question-begging and is besides the point.

    For, such a set of inferences is well-warranted on good empirical and logical grounds. That the state of late C20 and early C21 science turns out to be more friendly to theism than say that of late C19 and early C20 science, is just a matter of how the evidence has turned out.

    At least, if you are committed to the principle that science should be the best, empirically anchored, reasonable, open-to-improvement explanation of the world as we experience it.

    GEM of TKI

  60. Interested (and BarryA and Dave):

    First, Interested, thanks. (I should also again thank the former UD and regular ARN commenter Pixie for going through a long exchange with me on the subject at my own blog.)

    BarryA

    I think the problem here has been well addressed by GPuccio in other threads, when he pointed out that specification comes in different flavours, but of course when conjoined with informational complexity and contingency [i.e. in effect info storage capacity beyond 500 - 1,000 bits, the latter taking in effectively all practical cases of archipelagos of Functionality, not just the unique functional state in 10^150 states that the first does].

    GP at no 51 in the Oct 27 Darwinist predictions thread:

    Specification can be of at least 3 different kinds:

    1) Pre-specification: we can recognize a pattern because we have seen it before. In this case, the pattern in itself is probably random, but its occurrence “after” a pre-specification is a sign of CSI (obviously provided that complexity is also present, but that stays true for each of the cases).

    2) Compressibility: some mathematical patterns of information are highly compressible, which means that they can be expressed in a much shorter sequence than their original form. That is the case, for instance, for numbers like 3333333333, which can be written as “10 times 3?. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand. In this case, specification is in some way intrinsic to the specific pattern of information, we could say that it is inherent in its mathematical properties.

    3) Finally there is perhaps the most important form of specification, at least for our purposes: specification by function. A few patterns of information are specified because thay can “do” something very specific, in the right context. That’s the case of proteins, obviously, but also of computer programs, or in general of algorithms. In this case specification is not so much a characteristic of the mathemathical properties of the sequence, but rather of what the sequence can realize in a particular physical context: for example, the sequence of an enzyme is meaningless in itself, but it becomes very powerful if it is used to guide the synthesis of the real protein, and if the real protein can exist in a context where it can fulfill its function. Function is a very objective evidence of specification, because it does not depend on pre-conceptions of the observer (at least, not more than any other concept in human knowledge).

    So, this is the theoretic frame of CSI: complexity “plus” specification. And, obviously, the absence of any known mechanical explanation of the specific specified pattern in terms of necessity (that is, we are observing apparently random phenomena). The summary is:

    a) If you have a very complex pattern (very unlikely) and

    b) If no explanation of that patterm is known in terms of necessity on the basis of physical laws (in other words, if that pattern is equally likely as all other possible patterns, in terms of physical laws, and is therefore potentially random) and

    c) If that pattern is recognizable as specified, in any of the ways I have previously described:

    then

    we are witnessing CSI, and the best empirical explanation for that is an intelligent agent.

    As Dave just pointed out, biofunctionality is, first, a macroscopically recognisable criterion. Second, it isolates islands and archipelagos of microstates of various protein and DNA sequences that are exceedingly rare individually and collectively in the overall configuration space.

    Thus, we see a natural emergence of specification in the functional sense, and a context of complexity in the sense of high information storage capacity plus contingency. For instance, DNA strands for life systems start at about 500 000 base pairs [~ 1 Mbit], or a config space of 4^500k ~ 9.9 * 10^301,029. Similarly, a typical 300 element protein has 20 state elements with a config space of 20^300 ~ 2.04 * 10^390; there are hundreds of such in a living cell, all of which must function together according to a complex set of algorithmic, code-based life processes.
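    For readers who want to check those figures, the orders of magnitude follow from log10(b^n) = n · log10(b). A short Python sketch (the function name is mine) reproduces both numbers without ever computing the huge integers:

```python
import math

# Orders of magnitude for the config spaces quoted above: a sequence of
# n positions with b possible states each has b^n configurations, and
# log10(b^n) = n * log10(b).

def sci(b: int, n: int) -> str:
    """Render b**n as 'm.mm * 10^e' using logarithms only."""
    log10 = n * math.log10(b)
    exp = math.floor(log10)
    mant = 10 ** (log10 - exp)
    return f"{mant:.2f} * 10^{exp}"

print(sci(4, 500_000))  # DNA, 500k base pairs: ~9.90 * 10^301029
print(sci(20, 300))     # 300-element protein:  ~2.04 * 10^390
```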

    The odds of such bio-functionally specified, fine-tuned, tightly integrated complex information [including the codes and the algorithms] emerging or evolving all together in the same space of order ~ 1 micron, by undirected mechanical necessity and/or the random processes known to dominate the molecular scale world ONLY, in the gamut of our observed cosmos, are negligibly different from zero. [And the resort to quasi-infinite-cosmos-as-a-whole proposals is after-the-fact, worldview-saving ad hoc speculation; not science. It is a backhanded acknowledgement of the force of the point.]

    And yet we KNOW that such FSCI is routinely produced by agents all the time.

    Now, what is happening with the card example of such FSCI is not that the basic principle is wrong or improperly put, but that the fact that the functionality is arbitrary [we humans made up the game] invites red herrings leading out to easily ignitable strawman objections that, when burned, cloud [and sometimes poison] the atmosphere.

    So, we need a “de-confuser.”

    That is why I took up the late great Sir Fred Hoyle’s classic tornado-in-a-junkyard example and turned it into a case of sci-fi nanotech that is within reach of molecule-scale processes and forces such as Brownian motion, and that makes the same basic point; i.e. I have reduced the scale so we don’t need to think of tornadoes, just ordinary, well-known, well-understood statistical mechanical principles that underlie thermodynamics. (It helps to note that even Dawkins speaks of the tornado case with respect, too.)

    The result? The same one that the card shuffling case makes, and the same one that the Caputo election fraud case makes. Not surprising, actually, once you understand what a functional specification means in a really large config space. [This makes the proverbial challenge of finding the needle in a haystack seem an easy task! BTW, the tired-out lottery objection also fails: the vastness of the config spaces means that we just don't have enough search opportunities to reasonably expect to get to the lucky number, no matter how many tickets we print. Indeed, even if every atom in the observed universe [~10^80] were a ticket and every 10^-43 seconds we did another draw across its lifespan [~13.7 BY to date by generally accepted estimates], that would not be nearly good enough. That is the point of Dembski’s UPB. Real lotteries are carefully designed to generate just enough winners to encourage enough gamblers to make unconscionable profits . . . and so are insurance policies and other well-designed option trades.]
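    The bracketed tally can be made explicit. Dembski's universal probability bound rests on estimates of the same kind; the sketch below (my arithmetic, using the figures quoted in the comment) counts the available "search opportunities" and compares them with the DNA config space mentioned earlier.

```python
import math

# Tallying the probabilistic resources quoted above.
atoms          = 1e80                            # atoms in the observed universe
events_per_sec = 1e43                            # one event per ~10^-43 s
seconds        = 13.7e9 * 365.25 * 24 * 3600     # ~4.3e17 s of cosmic history

total_events = atoms * events_per_sec * seconds
print(f"total events ~ 10^{math.floor(math.log10(total_events))}")    # 10^140

# A blind search with ~10^140 tries in a config space of ~10^301030
# configurations samples only ~10^-300890 of it.
print(f"fraction searched ~ 10^{math.floor(math.log10(total_events)) - 301030}")
```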

    Onlookers . . .

    observe how, over many months now, there are no “biters” on the microjets case who have put forth a cogent rebuttal. [Cf my always linked app 1, section 6.]

    But by contrast, observe how readily the usual tired out talking points on playing cards and the like are trotted out. [Some months ago I was similarly astonished to see the sort of objections that were made to Dembski's citing of the classic NJ Caputo election scam!]

    I contend that the Cards case, the Caputo case and the microjets and nanobots case are all about the issue of getting to FSCI — a relevant subset of CSI — by chance processes plus mechanical necessity only.

    In each case, the complexity [info carrying capacity: contingency plus high enough number of bits of potential storage] is such that random searches run out of probabilistic resources and the idea of “natural selection” across intermediate steps is either irrelevant, or fails the criterion that each such alleged intermediate must also be functional.

    But of course, agents routinely produce FSCI as a subset of CSI. For instance, the posts in this thread are examples.

    Stronger than that, in EVERY actually observed case of FSCI, where we directly observe the causal process, the source is an agent.

    Thus by very strong induction anchored in direct observations of such causal chains, and the underlying statistical mechanics that underpins thermodynamics, as well as general probability considerations, the FSCI observed in cells is a product of agency.

    And, in that the increments in FSCI to get to the many diverse body plans with intermediate functional states run into the same barrier, body plan level biodiversity is similarly a product of agency.

    Even further, on the observations of physics and cosmology, the fine-tuned, tightly coordinated physics of our cosmos that is a requisite for life and even for science as we know it, indicates the cosmos is the product of an exceedingly intelligent and powerful agent who intended to produce life — including life capable of science such as we are.

    Of course, committed evolutionary materialists will find such inferences objectionable [and may even try to redefine science to dodge them], but that is plain out-and-out question-begging and is beside the point.

    For, such a set of inferences is well-warranted on good empirical and logical grounds. That the state of late C20 and early C21 science turns out to be more friendly to theism than, say, that of late C19 and early C20 science, is just a matter of how the evidence has turned out.

    At least, if you are committed to the principle that science should be the best, empirically anchored, reasonable, open-to-improvement explanation of the world as we experience it.

    GEM of TKI

  61. Really weird error message, folks . . .

  62. Really weird error message, folks . . .

    Ok kairosfocus, I’m not sure how it helps, but here you go:

    “Your computer will become self-aware after Windows restarts. Please disconnect it from the network.”

    If you need a weirder one, let me know.

  63. DaveScot said:

    If there are two decks, one perfectly ordered by suit and rank and one with no discernible order, the one with no discernible order is the more likely to be generated by a random shuffle. The reason: there are gazillions of possible arrangements that display no discernible order and very few that are perfectly ordered.

    You have no way of knowing that the first deck is randomly ordered, or that second deck isn’t, although that’s certainly the way to bet. Therefore, in order to conclude design, you must rely on facts not in evidence.

    If a phenomenon is the result of random action, the fact that the odds against it are one in a gazillion doesn’t mean that it can’t happen in the first opportunity. In fact, randomness is defined by the idea that the phenomenon can occur on any opportunity, because in order for something to be random, each possible outcome must have an equal chance of occurring on each opportunity.

    All of these arguments from probability seem to hinge on the mistaken idea that if the odds against something happening are a gazillion to one, we’re going to have to wait through a gazillion opportunities for it to happen.

  64. Mickey:

    You asked:

    “Now a question for you: Shuffle a deck of cards thoroughly, then note the order. Now keep shuffling and noting the order. How long do you believe it will be before the original order is encountered again?”

    As I am neither a mathematician nor a good card player, I have looked for the answer on the internet. I attach here a very good treatment of the question, from the site of a guy whose name is matthew weathers (you can look at it directly at matthewweathers.com):

    “Shuffling Cards

    Every time you shuffle a deck of playing cards, it’s likely that you have come up with an ordering of cards that is unique in human history. For example, I shuffled a deck of cards this afternoon, and my friend Adam split the deck, and this is the order that the cards came out in.
    How many different orders are there?
    There are 52 cards in a deck of cards. Imagine an “ordering of cards” as 52 empty spots to be filled:
    How many different possibilities are there for what could go in the first spot? The answer is 52 – any of the 52 cards could go there. What about the second spot? Now that you’ve already chosen a card for the first spot, there are only 51 cards left, so there are only 51 different possibilities for the second spot. And for the third spot, we only have 50 choices.
    If we stop there, and just fill up the first three spots, that’s like asking how many different possibilities there are for dealing three cards in order.
    How many different possible combinations are there for three cards in order? We just multiply how many possibilities there were for the first position (52) with the possibilities for the second position (51) with the possibilities for the third position (50). So there are 52 • 51 • 50 = 132,600 different possibilities for three cards in order.
    What about a whole deck? We just multiply the possibilities for each of the 52 positions, which is 52 • 51 • 50 • 49 • 48 • 47 • 46 • 45 • 44 • 43 • 42 • 41 • 40 • 39 • 38 • 37 • 36 • 35 • 34 • 33 • 32 • 31 • 30 • 29 • 28 • 27 • 26 • 25 • 24 • 23 • 22 • 21 • 20 • 19 • 18 • 17 • 16 • 15 • 14 • 13 • 12 • 11 • 10 • 9 • 8 • 7 • 6 • 5 • 4 • 3 • 2 • 1. A mathematical way of representing all those numbers multiplied together is called the factorial (See description on MathWorld), so we could write this as 52!, which means the same thing. When you multiply all those numbers together, you get 80658175170943878571660636856403766975289505440883277824000000000000. That number is 68 digits long. We can round off and write it like this: 8.0658 × 10^67.
    How many times have cards been shuffled in human history?
    That’s an impossible number to know. So let’s overestimate. Currently, there are between 6 and 7 billion people in the world. Also, the modern deck of 52 playing cards has been around since 1300 A.D. probably. If we assume that 7 billion people have been shuffling cards once a second for the past 700 years, that will be way more than the actual number of times cards have been shuffled. 700 years is 255675 days (plus or minus a couple for leap year centuries), which is 22090320000 seconds. Now, if 7000000000 people had been shuffling cards once a second for 22090320000 seconds, they would have come up with 7000000000 • 22090320000 different combinations, or orderings of cards. When you multiply those numbers together you get 154632240000000000000, or rounding off, 1.546 × 10^20.
    So, it’s safe to say that in human history, playing cards have been shuffled in less than 1.546 × 10^20 different orders.
    Is this order unique in human history?
    Probably so. When I shuffled the cards this afternoon, and came up with the order you see in the picture, that is one of 8.0658 × 10^67 different possible orders that cards can be in. However, in the past 700 years since playing cards were invented, cards have been shuffled less than 1.546 × 10^20 times. So the chances that one of those times they got shuffled into the same exact order you see here are less than 1 in 10^47.
    At what point do you say something is impossible? If the chances are 1 in 1000? 1 in a million? 1 in ten trillion? 1 in 10^47? In the movie Dumb and Dumber, Lloyd asks Mary what the chances are of the two of them getting together. She replies “1 in a million.” He responds, “so you’re saying there’s a chance?!”
    So… if you think there’s a chance that maybe, just maybe somebody, somewhere, at some time may have shuffled a deck of cards just like this ordering you see here, then you’re like Lloyd Christmas in the movie.”
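    The quoted arithmetic is easy to reproduce. This short Python check (my sketch; the shuffle count is, as the quote says, a deliberate overestimate) confirms the 68-digit count for 52!, the rough shuffle total, and the order of magnitude of the repeat chance:

```python
import math

# Reproducing the quoted card-shuffling arithmetic.
orders = math.factorial(52)
print(len(str(orders)))                   # 68 -- a 68-digit number, ~8.07 * 10^67

people, years = 7_000_000_000, 700
shuffles = people * years * 365.25 * 24 * 3600    # one shuffle per second each
print(f"shuffles ~ 10^{math.log10(shuffles):.1f}")            # ~ 10^20.2

repeat_chance = shuffles / orders
print(f"chance of a repeat < 10^{math.ceil(math.log10(repeat_chance))}")   # < 10^-47
```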

    So, you see, the answer to your question: “How long do you believe it will be before the original order is encountered again?” is: practically forever; in realistic terms, it will never happen.

    The example of the deck of cards is not perfect for DNA and protein information, but gives a good idea of the order of magnitude of the improbability. The deck of cards is a factorial problem, because once you have assigned a card, the number of the remaining possibilities is reduced by one. For proteins, each place can be occupied by any of the 20 amino acids, with the possibility of repetition. For a 52-amino-acid protein, the odds are 1 in 20^52, that is about 1 in 4 × 10^67. We are in the same order of magnitude as the deck of cards. But a 52-amino-acid protein is definitely a very small protein.
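    The comparison between the two combinatorial structures can be checked directly: orderings of a 52-card deck are counted by 52! (no repetition), while 52-residue proteins are counted by 20^52 (20 choices per position, repetition allowed). A quick sketch in Python:

```python
import math

# Two combinatorial spaces of comparable magnitude:
# deck orderings (52!) vs. 52-residue proteins (20^52).
deck_orders   = math.factorial(52)
protein_log10 = 52 * math.log10(20)

print(f"52!   ~ 10^{math.log10(deck_orders):.1f}")   # ~ 10^67.9
print(f"20^52 ~ 10^{protein_log10:.1f}")             # ~ 10^67.7, i.e. ~4.5 * 10^67
```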

    Besides, the miracle of such an improbability for proteins should have happened not once, but billions of times, each time a new protein is formed. And don’t tell me that the single steps are selected, because that’s simply impossible: each single amino acid mutation can never bring new function and, as Behe has clearly shown, if more than 2 or 3 amino acid mutations are necessary, that event will practically never happen.

    So, I really can’t understand your last statement:

    “All of these arguments from probability seem to hinge on the mistaken idea that if the odds against something happening are a gazillion to one, we’re going to have to wait through a gazillion opportunities for it to happen”

    Why is the idea mistaken? If the odds are a gazillion to one, you will indeed have to wait through (approximately) a gazillion opportunities, and if I understand the meaning of “gazillion” correctly, you will have to be really patient!
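    The waiting-time intuition here is the geometric distribution: for an event with per-trial probability p, the expected number of independent trials until the first success is 1/p. A quick simulation (my sketch, with a modest p so it runs fast) illustrates the point:

```python
import random

# For an event with per-trial probability p, the expected number of
# independent trials until the first success is 1/p (geometric distribution).

def trials_until_success(p: float, rng: random.Random) -> int:
    """Count Bernoulli(p) trials until the first success."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
p = 1e-3                       # odds of 1000 to 1 per trial
runs = [trials_until_success(p, rng) for _ in range(2000)]
print(sum(runs) / len(runs))   # close to 1/p = 1000
```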

    And remember, that’s not happened once, it’s happened a gazillion times, for each new protein. So, you will have to wait a gazillion gazillion times. Good luck…

    Did anyone bother to look at the link I posted above regarding built-in instructions with nano-particles, and their thoughts on how OOL possibly started like that?

    I related it to the cards due to the design of suits and numbers.
    But, the selection process is external for cards. Whereas in the link I posted, they inserted instructions into the process. Essentially, these are teleological insertions.

    I believe what this shows is that DNA can no more align itself for meaningful expression through proteins than playing cards could by themselves. The Blueprint is both imprinted and teleological.

    It takes an intelligent selection process, not a blind one prior to any bet being made on the hand.

    I think they’re proving the case for ID more so in these experiments.

    I thought it significant for ID.

    Specification is targeted outcomes for selection processes. Nothing is random in the nano-experiments, neither is anything random in card choices. For the cards to do what Mickey, I think, would like them to do, they would in turn have to have pre-built instructions, just like the nanobots, to align by suit and number.

    Anyone?

  66. Mickey B:

    I think you are missing the point that most of our knowledge is probabilistic to some extent or other. That is, absolute proof beyond all dispute is a mirage — even in Mathematics, post-Gödel.

    What we deal with in scientific knowledge is revisable inference to best explanation, and the objection that something utterly improbable just may happen by accident is not the prudent way to bet in such an explanation, once it has crossed the explanatory filter's two-pronged test. [The probabilities we are dealing with are comparable to or lower than those that every oxygen molecule in your room could at random rush to one end, causing you to asphyxiate; something you don't usually worry about, I suspect, and BTW, on pretty much the same statistical mechanical grounds, as I discuss in appendix A in the always linked. In a nutshell, the statistical weight of the mixed macrostate is so much higher than that of the clumped one that we would not expect that to happen even once at random in the history of the observed cosmos.]
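    The bracketed thermodynamics illustration can be quantified: if each of N gas molecules is independently equally likely to be in either half of the room, the chance that all N sit in one chosen half is 2^-N. A tiny Python sketch (my numbers; a real room holds on the order of 10^27 molecules) shows how fast this dies off:

```python
import math

# P(all N independent molecules in one chosen half of the room) = 2^-N.
for n in (10, 100, 1000):
    log10_p = -n * math.log10(2)
    print(f"N = {n:4d}:  P = 2^-{n} ~ 10^{log10_p:.0f}")
# Already at N = 1000, P ~ 10^-301 -- far beyond any probabilistic resources,
# and a real room has on the order of 10^27 molecules.
```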

    Note, too, that in EVERY case where we directly observe the causal story, CSI is produced by agency.

    To see what lies under it, read my always linked, Appendix A section 6.

    The objections I saw in 64 above strike me as getting into the territory of selective hyperskepticism, which is self-refuting.

    GEM of TKI

  67. MickeyBitsko is no longer with us.

  68. Hi Gpuccio — (great comments BTW)

    for numbers like 3333333333, which can be written as “10 times 3”. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand.

    Ok I’ll bite.

    If on *exceedingly* rare occasions a random shuffling of numbers produces a long string of, say, threes, “3333333333,” the “meaning” and “order” apparent here is just an illusion. Randomness knows no value (“3” is a numeric value) and each “3” is but an *independent*, fortuitous event, devoid of any connection to any other “3” (which is yet another independent fortuitous event).

    *Minds,* however, perceive “whole-istically” (they see the whole string in the “mind’s eye”), and subsequently recognize *connections* between valued events and typically generate non-fortuitous, connected, value-laden systems and sequences – “archipelagos” (thanks kairos) of real order and real function in a great ocean of disorder/dysfunction.

  69. Post Mickey response:

    Mickey wrote:

    How do you know it’s very complex? It’s designed. How do you know it’s designed? It’s very complex.

    Design inference is not circular like that. Even if one inferred specification, the opening claim, “How do you know it’s very complex? It’s designed.”, is false. Example: Imagine a single dot on a blank sheet of paper. This is very simple. However, it was designed as art by a person. It almost certainly would not have been ascribed to intelligent design without knowledge of the intent behind its origin.

  70. [...] is the Poker Player’s Best FriendMarch 13, 2012A couple of years ago I trotted out the “highly improbable things happen all the time” meme our Darwinist friends use to such advantage at my home poker game.  For those who don’t [...]

  71. [...] a good illustration.  Here’s a sample: A couple of years ago I trotted out the “highly improbable things happen all the time” meme our Darwinist friends use to such advantage at my home poker game.  For those who don’t [...]

  72. [...] couple of years ago I trotted out the “highly improbable things happen all the time” meme our Darwinist friends use to such advantage at my home poker game.  For those who don’t [...]
