Uncommon Descent Serving The Intelligent Design Community

Low Probability is Only Half of Specified Complexity


In a prior post the order of a deck of cards was used as an example of specified complexity.  If a deck is shuffled and the result is all of the cards ordered by rank and suit, one can infer design.  One commenter objected to this reasoning on the grounds that the specified order is no more improbable than any other order of cards (about 1 in 10^68).  In other words, the probability of every deck order is about 1 in 10^68, so why should we infer anything special about this particular order simply because it has a low probability?

Well, last night at my friendly poker game I decided to test this theory.  We were playing five-card poker with no draws after the deal.  On the first hand I dealt myself a royal flush in spades.  Eyebrows were raised, but no one objected.  On the second hand I dealt myself a royal flush in spades again, and on every hand after that, all the way through the 13th.

When my friends objected I said, “Lookit, your intuition has led you astray.  You are inferring design — that is to say, that I’m cheating — simply on the basis of the low probability of this sequence of events.  But don’t you understand that the odds of me receiving 13 royal flushes in spades in a row are exactly the same as the odds of me receiving any other particular sequence of 13 hands?”  In a rather didactic tone of voice I continued, “Let me explain.  In the game we are playing there are 2,598,960 possible hands.  The odds of receiving the royal flush in spades are therefore 1 in 2,598,960.  But don’t you see, the odds of receiving ANY particular hand are exactly the same, 1 in 2,598,960.  And the odds of a series of independent events are simply the product of the odds of all of the events.  Therefore the odds of receiving 13 royal flushes in spades in a row are about 1 in 2.5 × 10^83.  But, and here’s the clincher, the odds of receiving ANY particular series of 13 hands are exactly the same, about 1 in 2.5 × 10^83.  So there, pay up and kwicher whinin’.”
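For anyone who wants to check the arithmetic in the speech above, here is a minimal Python sketch; the only inputs are the standard five-card hand count and the assumption that each of the 13 deals comes from an independent shuffle.

```python
from math import comb, log10

hands = comb(52, 5)        # 2,598,960 distinct five-card hands
p_hand = 1 / hands         # chance of one *specific* hand, e.g. the royal flush in spades
p_series = p_hand ** 13    # chance of one specific, pre-named sequence of 13 hands

print(f"possible hands: {hands:,}")
print(f"P(one specific hand) = 1 in {hands:,}")
print(f"P(13 specific hands in a row) ≈ 10^{log10(p_series):.1f}")   # ≈ 10^-83.4
```

This confirms the narrow claim in the speech: any particular pre-named sequence of 13 hands is equally improbable.  The rest of the post is about why low probability alone is not enough.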

Unfortunately for me, one of my friends actually understands the theory of specified complexity, and right about this time this buttinski speaks up and says, “Nice analysis, but you are forgetting one thing.  Low probability is only half of what you need for a design inference.  You have completely skipped an analysis of the other half — i.e. [don’t you just hate it when people use “i.e.” in spoken language] A SPECIFICATION.”

“Waddaya mean, Mr. Smarty Pants,” I replied.  “My logic is unassailable.”  “Not so fast,” he said.  “Let me explain.  There are two types of complex patterns: those that warrant a design inference (we call this a ‘specification’) and those that do not (which we call a ‘fabrication’).  The difference between a specification and a fabrication is the descriptive complexity of the underlying pattern [see Professor Sewell’s paper linked to his post below for a more detailed explanation of this].  A specification has a very simple description, in our case ’13 royal flushes in spades in a row.’  A fabrication has a very complex description.  For example, another 13-hand sequence could be described as ‘1 pair; 3 of a kind; no pair; no pair; 2 pair; straight; no pair; full house; no pair; 2 pair; 1 pair; 1 pair; flush.’  In summary, BarryA, our fellow players’ intuition has not led them astray.  Not only is the series of hands you dealt yourself massively improbable, it is also clearly a specification.  A design inference is not only warranted, it is compelled.  I infer you are a no good, four flushin’, egg sucking mule of a cheater.”  He then turned to one of the other players and said, “Get a rope.”  Then I woke up.
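One rough way to see the “short description versus long description” point is to feed the two series descriptions from the story to a general-purpose compressor.  This is only a toy proxy for descriptive complexity (it is not Dembski’s formal specification measure); the strings below are simply the two examples above written out.

```python
import zlib

# The two 13-hand series from the story, written out as plain descriptions
specified  = "; ".join(["royal flush in spades"] * 13)
fabricated = ("1 pair; 3 of a kind; no pair; no pair; 2 pair; straight; no pair; "
              "full house; no pair; 2 pair; 1 pair; 1 pair; flush")

for label, series in (("specified", specified), ("fabricated", fabricated)):
    raw = len(series.encode())
    packed = len(zlib.compress(series.encode(), 9))   # level-9 DEFLATE as a crude proxy
    print(f"{label:>10}: {raw:>3} chars -> {packed:>3} bytes compressed")
```

The repetitive series compresses far more aggressively than the miscellaneous one, mirroring the fact that it has a short verbal description (“13 royal flushes in spades in a row”).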

Comments
[...] couple of years ago I trotted out the “highly improbable things happen all the time” meme our Darwinist friends use to such advantage at my home poker game.  For those who don’t [...]
Uncommon Descent | The Multiverse is the Poker Player’s Best Friend
May 28, 2012, 10:04 AM PDT
[...] a good illustration.  Here’s a sample: A couple of years ago I trotted out the “highly improbable things happen all the time” meme our Darwinist friends use to such advantage at my home poker game.  For those who don’t [...]
The Multiverse Theory = the Atheists’ Concession Speech. | Eternity Matters
March 13, 2012, 10:11 AM PDT
[...] is the Poker Player’s Best Friend | March 13, 2012 | A couple of years ago I trotted out the “highly improbable things happen all the time” meme our Darwinist friends use to such advantage at my home poker game.  For those who don’t [...]
God's iPod - Uncommon Descent - Intelligent Design
March 13, 2012, 08:56 AM PDT
Responding to Mickey, who wrote:
How do you know it’s very complex? It’s designed. How do you know it’s designed? It’s very complex.
Design inference is not circular like that. Even if one inferred specification, the opening premise, "How do you know it's very complex? It's designed.", is false. Example: imagine a single dot on a blank sheet of paper. This is very simple. However, it was designed as art by a person. It almost certainly would not have been ascribed to intelligent design without knowledge of the intent behind its origin.JGuy
November 4, 2007, 05:57 AM PDT
Hi Gpuccio -- (great comments BTW)
for numbers like 3333333333, which can be written as “10 times 3”. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand.
Ok, I'll bite. If on *exceedingly* rare occasions a random shuffling of numbers produces a long string of, say, threes ("3333333333"), the "meaning" and "order" apparent here is just an illusion. Randomness knows no value ("3" is a numeric value) and each "3" is but an *independent*, fortuitous event, devoid of any connection to any other "3" (which is yet another independent fortuitous event). *Minds,* however, perceive "whole-istically" (they see the whole string in the "mind's eye"), and subsequently recognize *connections* between valued events and typically generate non-fortuitous, connected, value-laden systems and sequences: "archipelagos" (thanks kairos) of real order and real function in a great ocean of disorder/dysfunction.William Brookfield
November 3, 2007, 03:50 PM PDT
MickeyBitsko is no longer with us.DaveScot
November 2, 2007, 06:00 PM PDT
Mickey B: I think you are missing the point that most of our knowledge is probabilistic to some extent or other. That is, absolute proof beyond all dispute is a mirage -- even in Mathematics, post Godel. What we deal with on scientific knowledge is revisable inference to best explanation, and the objection that something utterly improbable just may happen by accident is not the prudent way to bet in such an explanation, once it has crossed the explanatory filters two-pronged test. [The probabilities we are dealing with are comparable to or lower than those that every oxygen molecule in your room could at random rush to one end, causing you to asphyxiate; something you don't usually worry about I suspect, and BTW, on pretty much the same statistical mechanical grounds, as I discuss in the appendix A, in the always linked. In a nutshell, the statistical weight of the mixed macrostate is so much higher than that of the clumped one, that we would not expect that to happen just once at random in the history of the observed cosmos.] Note, too, that in EVERY case where we directly observe the causal story, CSI is produced by agency. To see what lies under it, read my always linked, Appendix A section 6. The objections I saw in 64 above strike me as getting into the territory of selective hyperskepticism, which is self-refuting. GEM of TKIkairosfocus
November 2, 2007, 03:55 PM PDT
Did anyone bother to look at the link I posted above regarding built-in instructions with nano-particles, and their thoughts on how OOL possibly started that way? I related it to the cards due to the design of suits and numbers. But the selection process is external for cards, whereas in the link I posted they inserted instructions into the process. Essentially, these are teleological insertions. I believe what this shows is that DNA can no more align itself for meaningful expression through proteins than playing cards could by themselves. The Blueprint is both imprinted and teleological. It takes an intelligent selection process, not a blind one, prior to any bet being made on the hand. I think they're proving the case for ID more so in these experiments. I thought it significant for ID. Specification is targeted outcomes for selection processes. Nothing is random in the nano-experiments, neither is anything random in card choices. For the cards to do what Mickey (I think) would like them to do, they would in turn have to have pre-built instructions, just like the nanobots, to align by suit and number. Anyone?Michaels7
November 2, 2007, 01:45 PM PDT
Mickey: You asked: "Now a question for you: Shuffle a deck of cards throroughly, then note the order. Now keep shuffling and noting the order. How long do you believe it will be before the original order is encountered again?" As I am neither a mathematician nor a good card player, I have looked for the answer on the internet. I attach here a very good treament of the question, from the site of a guy whose name is matthew weathers (you can look at it directly at matthewweathers.com): "Shuffling Cards Every time you shuffle a deck of playing cards, it's likely that you have come up with an ordering of cards that is unique in human history. For example, I shuffled a deck of cards this afternoon, and my friend Adam split the deck, and this is the order that the cards came out it. How many different orders are there? There are 52 cards in a deck of cards. Imagine an "ordering of cards" as 52 empty spots to be filled: How many different possibilities are there for what could go in the first spot? The answer is 52 - any of the 52 cards could go there. What about the second spot? Now that you've already chosen a card for the first spot, there are only 51 cards left, so there are only 51 different possibilities for the second spot. And for the third spot, we only have 50 choices. If we stop there, and just fill up the first three spots, that's like asking how many different possibilites there are for dealing three cards in order. How many different possible combinations are there for three cards in order? We just multiply how many possibilities there were for the first position (52) with the possibilities for the second position (51) with the possibilities for the third position (50). So there are 52 • 51 • 50 = 132600 different possibilites for three cards in order. What about a whole deck? We just multiply the possibilities for each of the 52 positions, which is 52 • 51 • 50 • 49 • 48 • 47 • 46 • 45 • 44 • 43 • 42 • 41 • 40 • 39 • 38 • 37 • 36 • 35 • 34 • 33 • 32 • 31 • 30 • 29 • 28 • 27 • 26 • 25 • 24 • 23 • 22 • 21 • 20 • 19 • 18 • 17 • 16 • 15 • 14 • 13 • 12 • 11 • 10 • 9 • 8 • 7 • 6 • 5 • 4 • 3 • 2 • 1. A mathematical way of representing all those numbers multiplied together is called the factorial (See description on MathWorld), so we could write this as 52!, which means the same thing. When you multiply all those numbers together, you get 80658175170943878571660636856403766975289505440883277824000000000000. That number is 68 digits long. We can round off and write it like this: 8.0658X1068. How many times have cards been shuffled in human history? That's an impossible number to know. So let's overestimate. Currently, there are between 6 and 7 billion people in the world. Also, the modern deck of 52 playing cards has been around since 1300 A.D. probably. If we assume that 7 billion people have been shuffling cards once a second for the past 700 years, that will be way more than the actual number of times cards have been shuffled. 700 years is 255675 days (plus or minus a couple for leap year centuries), which is 22090320000 seconds. Now, if 7000000000 people had been shuffling cards once a second for 22090320000 seconds, they would have come up with 7000000000 • 22090320000 different combinations, or orderings of cards. When you multiply those numbers together you get 154632240000000000000, or rounding off, 1.546X1023. So, it's safe to say that in human history, playing cards have been shuffled in less than 1.546X1023 different orders. Is this order unique in human history? Probably so. 
When I shuffled the cards this afternoon, and came up with the order you see in the picture, that is one of 8.0658X1068 different possible orders that cards can be in. However, in the past 700 years since playing cards were invented, cards have been shuffled less than 1.546X1023 times. So the chances that one of those times they got shuffled into the same exact order you see here are less than 1 in 100000000000000000000000000000000000000000000 (1 in 1044). At what point do you say something is impossible? If the chances are 1 in 1000? 1 in a million?1 in ten trillion?1 in 1 in 1044? In the movie Dumb and Dumber, Lloyd asks Mary what the chances are of the two of them getting together. She replies "1 in a million." He responds, "so you're saying there's a chance?!" So... if you think there's a chance that maybe, just maybe somebody, somewhere, at some time may have shuffled a deck of cards just like this ordering you see here, then you're like Lloyd Christmas in the movie." So, you see, the answer to your question: "How long do you believe it will be before the original order is encountered again?" is: practically forever, in realistic terms it will never happen. The example of the deck of cards is not perfect for DNA and protein information, but gives a good idea of the order of magnitude of the improbability. The deck of cards is a factorial problem, because once you have assigned a card, the number of the remaining possibilities is reduced by one. For proteins, each place can be occupied by any of the 20 aminoacids, with the possibility of repetition. For a 52 aminoacid protein, the combinations are 1:20^52, that is about 1:4*10^67. We are in the same order of magnitude of the deck of cards. But a 52 aminoacids protein is definitely a very small protein. Besides, the miracle of such an improbability for proteins should have happened not once, but billions of times, each time a new protein is formed. And don't tell me that the single steps are selected, because that's simply impossible: each single aminoacid mutation can never bring new function and, as Behe as clearly shown, if more than 2 or 3 aminoacid mutations are necessary, that event will practically never happen. So, I really can't understand your last statement: "All of these arguments from probability seem to hinge on the mistaken idea that if the odds against something happening are a gazillion to one, we’re going to have to wait through a gazillion opportunities for it to happen" Why is the idea mistaken? If the odds are a gazillion to one, you will indeed have to wait through (approximately) a gazillion opportunities, and if I understand well the meaning of gazillion, you will have to be really patient! And remember, that's not happened once, it's happened a gazillion times, for each new protein. So, you will have to wait a gazillion gazillion times. Good luck...gpuccio
November 2, 2007, 09:19 AM PDT
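The shuffling arithmetic quoted in the comment above is easy to reproduce.  Here is a minimal Python sketch using the same deliberately generous assumptions from the quoted passage (7 billion people shuffling once a second for 700 years):

```python
from math import factorial, log10

orderings = factorial(52)             # 52! ≈ 8.0658e67 distinct deck orders
people = 7_000_000_000                # generous: everyone alive shuffling
seconds = 700 * 365.25 * 24 * 3600    # ≈ 2.209e10 seconds in 700 years
shuffles = people * seconds           # deliberate overestimate of every shuffle in history

print(f"52! ≈ 10^{log10(orderings):.2f}")
print(f"over-estimated shuffles in history ≈ 10^{log10(shuffles):.2f}")
# the ratio is an upper bound on the chance any historical shuffle matched one given order
print(f"shuffles / orderings ≈ 10^{log10(shuffles / orderings):.1f}")
```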
DaveScot said:
If there are two decks, one perfectly ordered by suit and rank and one with no discernable order the one with no discernable order is the more likely to be generated by a random shuffle. The reason: there are gazillions of possible arrangements that display no discernable order and very few that are perfectly ordered.
You have no way of knowing that the first deck is randomly ordered, or that second deck isn't, although that's certainly the way to bet. Therefore, in order to conclude design, you must rely on facts not in evidence. If a phenomenon is the result of random action, the fact that the odds against it are one in a gazillion doesn't mean that it can't happen in the first opportunity. In fact, randomness is defined by the idea that the phenomenon can occur on any opportunity, because in order for something to be random, each possible outcome must have an equal chance of occurring on each opportunity. All of these arguments from probability seem to hinge on the mistaken idea that if the odds against something happening are a gazillion to one, we're going to have to wait through a gazillion opportunities for it to happen.Mickey Bitsko
November 2, 2007, 08:07 AM PDT
Really weird error message, folks . . .
Ok kairosfocus, I'm not sure how it helps, but here you go: "Your computer will become self-aware after Windows restarts. Please disconnect it from the network." If you need a weirder one, let me know.Apollos
November 2, 2007, 01:54 AM PDT
Really weird error message, folks . . .kairosfocus
November 2, 2007, 01:39 AM PDT
Interested (and BarryA and Dave): First, Interested, thanks. (I should also again thank the former UD and regular ARN commenter Pixie for going through a long exchange with me on the subject at my own blog.) BarryA I think the problem here has been well addressed by GPuccio in other threads, when he pointed out that specification comes in different flavours, but of course when conjoined with informational complexity and contingency [i.e. in effect info storage capacity beyond 500 - 1,000 bits, the latter taking in effectively all practical cases of archipelagos of Functionality, not just the unique functional state in 10^150 states that the first does]. GP at no 51 in the Oct 27 Darwinist predictions thread:
Specification can be of at least 3 different kinds: 1) Pre-specification: we can recognize a pattern because we have seen it before. In this case, the pattern in itself is probably random, but its occurrence “after” a pre-specification is a sign of CSI (obviously provided that complexity is also present, but that stays true for each of the cases). 2) Compressibility: some mathematical patterns of information are highly compressible, which means that they can be expressed in a much shorter sequence than their original form. That is the case, for instance, for numbers like 3333333333, which can be written as “10 times 3?. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand. In this case, specification is in some way intrinsic to the specific pattern of information, we could say that it is inherent in its mathematical properties. 3) Finally there is perhaps the most important form of specification, at least for our purposes: specification by function. A few patterns of information are specified because thay can “do” something very specific, in the right context. That’s the case of proteins, obviously, but also of computer programs, or in general of algorithms. In this case specification is not so much a characteristic of the mathemathical properties of the sequence, but rather of what the sequence can realize in a particular physical context: for example, the sequence of an enzyme is meaningless in itself, but it becomes very powerful if it is used to guide the synthesis of the real protein, and if the real protein can exist in a context where it can fulfill its function. Function is a very objective evidence of specification, because it does not depend on pre-conceptions of the observer (at least, not more than any other concept in human knowledge). So, this is the theoretic frame of CSI: complexity “plus” specification. And, obviously, the absence of any known mechanical explanation of the specific specified pattern in terms of necessity (that is, we are observing apparently random phenomena). The summary is: a) If you have a very complex pattern (very unlikely) and b) If no explanation of that patterm is known in terms of necessity on the basis of physical laws (in other words, if that pattern is equally likely as all other possible patterns, in terms of physical laws, and is therefore potentially random) and c) If that pattern is recognizable as specified, in any of the ways I have previously described: then we are witnessing CSI, and the best empirical explanation for that is an intelligent agent.
As Dave just pointed out, biofunctionality is a macroscopically recognisable criterion. Second, it isolates islands and archipelagos of microstates of various protein and DNA sequences that are exceedingly rare individually and collectively in the overall configuration space. Thus, we see a natural emergence of specification in the functional sense, and a context of complexity in the sense of high information storage capacity plus contingency. For instance, DNA strands for life systems start at about 500 000 base pairs [~ 1 Mbit], or a config space of 4^500k ~ 9.9 * 10^301,029. Similarly, a typical 300 element protein has 20 state elements with a config space of 20^300 ~ 2.04 * 10^390; there are hundreds of such in a living cell, all of which must function together according to a complex set of algorithmic, code-based life processes. The odds of such bio-functionally specified, fine-tuned, tightly integrated complex information [including the codes and the algorithms] emerging or evolving all together in the same space of order ~ 1 micron, by undirected mechanical necessity and/or the random processes known to dominate the molecular scale world ONLY, in the gamut of our observed cosmos, are negligibly different from zero. [And the resort to a quasi-infinite cosmos as a whole proposals is after the fact, worldview saving ad hoc speculation; not science. It is a backhanded acknowledgement of the force of the point.] And yet we KNOW that such FSCI is routinely produced by agents all the time. Now, what is happening with the card example of such FSCI is not that the basic principle is wrong or improperly put, but that the fact that the functionality is arbitrary [we humans made up the game] invites red herrings leading out to easily ignitable strawman objections that, when burned cloud [and sometimes poison] the atmosphere. So, we need a "de-confuser." That is why I took up the late great Sir Fred Hoyle's classic tornado in a junkyard example and turned it into a case of sci-fi nanotech that is within reach of molecule scale processes and forces such as brownian motion, that makes the same basic point; i.e I have reduced the scale so we don't need to think of tornadoes, just ordinary well known well understood statistical mechanical principles that underly thermodynamics. (It helps to note that even Dawkins speaks of the tornado case with respect, too.) The result? The same one that the card shuffling case makes, and the same one that the Caputo election fraud case makes. Not surprising, actually, once you understand what a functional specification means in a really large config space. [This makes the proverbial finding the needle in a haystack challenge seem an easy task! BTW, the tired out lottery objection also fails: the vastness of the config spaces means that we just don't have enough search opportunities to reasonably expect to get to the lucky number, no matter how many tickets we print. Indeed, even if every atom in the observed universe [~10^80] was a ticket and every 10^-43 seconds we do another draw across its lifespan [~13.7 BY to date by generally accepted estimates],that is not going to be nearly good enough. That is the point of Dembski's UPB. Real lotteries are carefully designed to generate just enough winners to encourage enough gamblers to make unconscionable profits . . . and so are insurance policies and other well-designed option trades.] Onlookers . . . observe how, over many months now, there are no "biters" on the microjets case who have put forth a cogent rebuttal. 
[Cf my always linked app 1, section 6.] But by contrast, observe how readily the usual tired out talking points on playing cards and the like are trotted out. [Some months ago I was similarly astonished to see the sort of objections that were made to Dembski's citing of the classic NJ Caputo election scam!] I contend that the Cards case, the Caputo case and the microjets and nanobots case are all about the issue of getting to FSCI -- a relevant subset of CSI -- by chance processes plus mechanical necessity only. In each case, the complexity [info carrying capacity: contingency plus high enough number of bits of potential storage] is such that random searches run out of probabilistic resources and the idea of "natural selection" across intermediate steps is either irrelevant, or fails the criterion that each such alleged intermediate must also be functional. But of course, agents routinely produce FSCI as a subset of CSI. For instance, the posts in this thread are examples. Stronger than that, in EVERY actually observed case of FSCI, where we directly observe the causal process, the source is an agent. Thus by very strong induction anchored in direct observations of such causal chains, and the underlying statistical mechanics that underpins thermodynamics, as well as general probability considerations, the FSCI observed in cells is a product of agency. And, in that the increments in FSCI to get to the many diverse body plans with intermediate functional states run into the same barrier, body plan level biodiversity is similarly a product of agency. Even further, on the observations of physics and cosmology, the fine-tuned, tightly coordinated physics of our cosmos that is a requisite for life and even for science as we know it, indicates the cosmos is the product of an exceedingly intelligent and powerful agent who intended to produce life -- including life capable of science such as we are. Of course, committed evolutionary materialists will find such inferences objectionable [and may even try to redefine science to dodge them], but that is plain out and out question-begging and is besides the point. For, such a set of inferences is well-warranted on good empirical and logical grounds. That the state of late C20 and early C21 science turns out to be more friendly to theism than say that of late C19 and early C20 science, is just a matter of how the evidence has turned out. At least, if you are committed to the principle that science should be the best, empirically anchored, reasonable, open-to-improvement explanation of the world as we experience it. GEM of TKIkairosfocus
November 2, 2007, 01:27 AM PDT
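The configuration-space figures in the comment above (4^500,000 and 20^300) can be checked with base-10 logarithms; a minimal sketch:

```python
from math import log10

# Base-10 logs of the configuration-space sizes quoted above
dna_log10 = 500_000 * log10(4)      # log10(4^500000)
protein_log10 = 300 * log10(20)     # log10(20^300)

print(f"4^500000 ≈ 10^{dna_log10:.2f}")      # ≈ 10^301029.99, i.e. about 9.9 * 10^301029
print(f"20^300   ≈ 10^{protein_log10:.2f}")  # ≈ 10^390.31,   i.e. about 2.04 * 10^390
```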
By the way I think this is essentially what William Brookfield was getting at in #53 above.Apollos
November 2, 2007, 12:57 AM PDT
DaveScot said:
"Any randomly generated gene is astronomically unlikely to be of any biological use just as any randomly ordered deck of cards is astronomically unlikely to exhibit any perfect orderings."
This is the heart of the matter as I understand it, relating to Mickey's objection about determining CSI based on a perfectly ordered deck. Am I correct in assuming it possible to develop a reasonable "signal to noise ratio" for a deck of playing cards? This should perfectly illuminate the "equal probability" obfuscation.Apollos
November 2, 2007, 12:38 AM PDT
Tim, if there are two decks, one perfectly ordered by suit and rank and one with no discernable order, the one with no discernable order is the more likely to be generated by a random shuffle. The reason: there are gazillions of possible arrangements that display no discernable order and very few that are perfectly ordered. This is actually very analogous to coding genes. There are 52 different cards in a standard deck while there are 64 different codons (nucleic acid triplets). Genes are further complicated because they have no fixed length and may be thousands of codons long. There are gazillions of codon sequences that don't fold into potentially biologically active molecules and few that do consistently fold into a biologically active molecule. Complicating it even further is that biologically active proteins don't exist in a vacuum but must fold in such a way as to precisely fit (in at least five dimensions - 3 spatial dimensions plus hydrophobic and hydrophilic surfaces) the shapes of other proteins and other non-protein molecules they need to grasp and release. The folding process is so complex that being able to predict it is the Holy Grail of biochemistry. So, while any gene sequence is as likely as any other from a randomly generated string of codons, the odds of getting a gene that codes for a biologically active protein from a randomly generated sequence are very remote because of the ratio of useful sequences to useless sequences. Any randomly generated gene is astronomically unlikely to be of any biological use just as any randomly ordered deck of cards is astronomically unlikely to exhibit any perfect orderings. DaveScot
November 1, 2007, 08:45 PM PDT
Note that I should clarify "shuffling" to mean a random rearrangement of the cards, with "random" meaning that every possible order is equally likely with each shuffle.Mickey Bitsko
November 1, 2007, 04:55 PM PDT
Tim, What matters (to me at least) is the question I asked. Given a target arrangement of 52 cards, and continual reshuffling, how long (in terms of reshuffles) should it take before the target order is repeated, given that the odds are about 1 in 10^68?Mickey Bitsko
November 1, 2007, 04:52 PM PDT
I can see from gpuccio's thoughtful response that I had not made myself clear concerning the two decks. ((“if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly” The answer is simple: the probability is the same for both decks, that is a very low probability. gpuccio (51))) What I should have conveyed: The suited and ordered deck is face up (that is how we know it is suited and ordered), but the deck with no discernible order is by definition face down. If it were face up, then an order would be discerned. I thought this clearly to myself, but failed to type it in, sorry. This definition should further sew up the reason why the secret society cannot exist. Let's nail down the specification thing. Mickey wrote, "Shuffle a deck of cards thoroughly, then note the order". By "note the order" I think what you are doing is specifying a target. It really doesn't matter how you formed the sequence. What matters is that it was FIRST specified THEN sought. Probability stories like these will eventually get me hoisted by my own petard, but I am quite sure that the action of selection (even if generated by chance, i.e. shuffling) generates a target that has been specified.Tim
November 1, 2007, 04:18 PM PDT
Hi BarryA "Brookfield, I think you are putting a needless layer of complexity on this. Yes, I put the cards in the context of a poker game to make the story interesting." Sorry if I am needlessly complexifying things. That was not my intent. I am looking for a way of explaining SC that is less prone to confusion and possible detractor obfuscation. The poker story was great and many can relate to it, but I am thinking maybe we could be describing such situations in terms of the probabilistic resources from randomly shuffled cards -- set #1 (hideously low) -- versus the probabilistic resources from the "superset" #2. (significantly higher) and a subsequent inference to the best explanation...with superset #2 (intelligent design) as the winner...? While Granville's notion of macroscopic improbability is quite good it doesn't seem to work for nanotechnologies that are both SC and microscopic. Or perhaps I am missing something?William Brookfield
November 1, 2007, 03:31 PM PDT
gpuccio said,
Now, just imagine that the exact sequence of the first deck is communicated to you in advance, and then the deck of cards comes out exactly as it was said to you: would you still believe that it happened in a random way?
I would believe that randomness was possible, but highly unlikely. Now a question for you: Shuffle a deck of cards thoroughly, then note the order. Now keep shuffling and noting the order. How long do you believe it will be before the original order is encountered again?Mickey Bitsko
November 1, 2007, 03:14 PM PDT
Mickey: you stated: "if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly" The answer is simple: the probability is the same for both decks, that is a very low probability. Now, just imagine that the exact sequence of the first deck is communicated to you in advance, and then the deck of cards comes out exactly as it was said to you: would you still believe that it happened in a random way? No (at least I hope, for your sake). That's an example of pre-specification. Or just suppose that the deck of cards comes out in perfect order. Would you still believe that it was correctly and randomly shuffled? No (at least, I hope, for your sake). That's an example of specification by compressibility. Or just suppose that the cards are binary, 0 and 1, and are more numerous (a few hundred). Suppose that the deck of cards, in the exact order, can be written as the binary code of a small software program, and that such a program works as an ordering algorithm. Would you still believe that the deck of cards was really random? No (at least, I hope, again for your sake). That's an example of specification by function. Genomes and proteins are all specified by function. They all exhibit CSI, of the highest grade. It is simple. Those who try to speculate on hypothetical contradictions of the concept of specification are completely missing the power, beauty and elegance of the concept itself. And its beautiful, perfect simplicity.gpuccio
November 1, 2007, 02:57 PM PDT
For calculating the informational bits using 8-bit single-byte coded graphic characters, here is an example: “ME THINKS IT IS LIKE A WEASEL” is only 133 bits of information (when calculated as a whole sentence; the complexity of the individual items of the set is 16, 48, 16, 16, 32, 8, 48 plus 8 bits for each space). So aequeosalinocalcalinoceraceoaluminosocupreovitriolic would be 416 informational bits. The specification is that it is an English word with a specific function. That specific function does not have any intermediates that maintain the same function. Here we have a situation where indirect intermediates are well below 500 informational bits and thus there is nothing to select for that will help much in reaching the goal. Thus this canyon must be bridged in one giant leap of recombination of various factors, making it difficult for Darwinian mechanisms. Even though that is not 500, I would still be surprised if that showed up in a program such as Zachriel's unless the fitness function was designed in a certain manner. For more on calculating such things refer to Dembski's work.Patrick
November 1, 2007, 01:38 PM PDT
Patrick (46)- What can you calculate the complexity of? I can't figure out how to calculate the complexity of anything.congregate
November 1, 2007, 01:25 PM PDT
“Very complex” reaches some undefined (or very vaguely defined) point where design is assumed, without knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally.
Specification does not equate to not "knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally."Patrick
November 1, 2007, 01:22 PM PDT
No, it doesn't ignore the specification. It assumes the specification. "Very complex" reaches some undefined (or very vaguely defined) point where design is assumed, without knowing anything more than the thing is so complex that it's difficult to understand how it might happen naturally.Mickey Bitsko
November 1, 2007, 01:17 PM PDT
My whole point here has been to point out that there are enough situations where design isn’t discernible without foreknowledge, which means (at least to me) that the concept of CSI involves inescapable circular reasoning. How do you know it’s very complex? It’s designed.
Err...no. Complexity can be calculated without knowing whether something is designed or not. In fact, with the explanatory filter the complexity is calculated without presuming design or no design. A non-designed object can also be very complex.
How do you know it’s designed? It’s very complex.
And, again, that ignores specification.Patrick
November 1, 2007, 12:48 PM PDT
Even though the first deck being ordered by rank and suit is impressive (52! = 8.06581752×10^67) that still does not exceed Dembski's UPB of 10^150 or 500 informational bits. Now we could make a weak design inference (aka police investigation) but not an ID-based design inference if this was a one-time shuffle to win a jackpot. In that scenario we would presumably be able to discover the mechanism for potential cheating so we could use that design/designer detection method instead of ID. It's not as if ID methods are the only way to detect design. EDIT: For the jackpot scenario I'm presuming the prize winner would be required to shuffle an entire deck and the prize would be awarded if a contestant managed to get some sort of combination that is close to 1 in 10^8 (around the odds of Powerball). By turning up this result the contestant is essentially providing a result that is overkill for the terms of the prize. So although the guy might have got really, really lucky they will still investigate to see if it was rigged somehow. Saw this interesting article: http://creationevolutiondesign.blogspot.com/2006/10/origin-of-life-quotes-by-jbs-haldane_30.html
We can accept a certain amount of luck in our explanations, but not too much.... In our theory of how we came to exist, we are allowed to postulate a certain ration of luck. This ration has, as its upper limit, the number of eligible planets in the universe.... We [therefore] have at our disposal, if we want to use it, odds of 1 in 100 billion billion as an upper limit (or 1 in however many available planets we think there are) to spend in our theory of the origin of life. This is the maximum amount of luck we are allowed to postulate in our theory. Suppose we want to suggest, for instance, that life began when both DNA and its protein-based replication machinery spontaneously chanced to come into existence. We can allow ourselves the luxury of such an extravagant theory, provided that the odds against this coincidence occurring on a planet do not exceed 100 billion billion to one. [Dawkins, R., "The Blind Watchmaker," Norton: (New York, 1987, pp. 139,145-46]
I find that interesting considering Koonin's comments regarding the unguided Origin Of Life (OOL) scenarios; as a conservative estimate he calculated odds of 1 in 10^1018 for the possibility that such a system could have arisen.Patrick
November 1, 2007, 12:45 PM PDT
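For the comparison in the comment above between 52! and the 500-bit / 10^150 universal probability bound, here is a minimal sketch of the arithmetic:

```python
from math import factorial, log2, log10

deck_orders = factorial(52)   # number of distinct orderings of a 52-card deck

print(f"52!        ≈ {float(deck_orders):.4e}")      # ≈ 8.0658e+67
print(f"in bits:     {log2(deck_orders):.1f}")       # ≈ 225.6 bits, below the 500-bit mark
print(f"log10(52!) = {log10(deck_orders):.1f}, vs a UPB exponent of 150")
```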
My whole point here has been to point out that there are enough situations where design isn't discernible without foreknowledge, which means (at least to me) that the concept of CSI involves inescapable circular reasoning. How do you know it's very complex? It's designed. How do you know it's designed? It's very complex. The very idea of "specification," it seems to me, requires foreknowledge that there is a source of such orders, as in the deck of cards ordered by rank and suit. If we had no foreknowledge of such things, no order could be differentiated from random orders.Mickey Bitsko
November 1, 2007, 12:43 PM PDT