
The Fundamental Law of Intelligent Design

After being in the ID movement for 10 years, and suffering through many debates, if someone were to ask me what is the most fundamental law upon which the ID case rests, I would have to say it is the law of large numbers (LLN). It is the law that tells us that a set of fair coins randomly shaken will converge on 50% heads and not 100% heads. It is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics. Few notions in math are accorded the status of law. We have the fundamental theorem of calculus, the fundamental theorem of algebra, and the fundamental theorem of arithmetic — but the law of large numbers is not just a theorem, it is promoted to the status of law, almost as if to emphasize its fundamental importance to reality.

Understanding the law of large numbers first requires understanding the notion of expected value (or expectation value). Rather than giving the somewhat brutal mathematical formalism of expected value, let me give an illustration with coins. If we have a large set of fair coins, there is an expectation that approximately 50% of the coins will be heads after a vigorous shaking or flipping of the coins (a random process). That is, the expected value for the proportion of heads is 50%.

As we examine sets of coins that are very large (say 10,000 coins), the outcome will tend to converge so close to 50% heads so frequently that we can say, from a practical standpoint, the proportion will be 50% or close to 50% with every shaking of the set. If we consider each coin in the set as a “trial”, the example illustrates the law of large numbers. Formally stated, the law of large numbers says:

the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

Law of Large Numbers
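The convergence the law describes is easy to check numerically. Here is a minimal Python sketch of my own (the set sizes and seed are arbitrary): as the number of coins grows, the proportion of heads settles ever closer to the expected value of 50%.

```python
import random

def proportion_heads(n_coins, seed=None):
    """Shake n_coins fair coins once and return the proportion showing heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_coins))
    return heads / n_coins

# The observed proportion tightens around the expected value 0.5 as the set grows.
for n in (10, 100, 10_000, 1_000_000):
    print(n, proportion_heads(n, seed=1))
```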

How does this play out for ID? Before answering that question, let me classify 3 kinds of designs (or forms of organization).

A. Non-functional ordered objects (like all fair coins heads, or homochirality in biology)

B. Non-functional dis-ordered, but recognizably designed objects (like a set of numbered coins organized according to a pre-specified pattern, a binary representation of Hamlet, DNA strings that identify GMOs, etc.)

C. Functional objects (like components assembled into a functioning machine, a software bit stream, etc.)

In this essay, I’ll illustrate design using the law of large numbers with the “non-functional ordered objects”. I’ll save for later discussion the illustration of design in the more challenging cases of “non-functional dis-ordered, but recognizably designed objects” and “functional objects”.

If I had 500 fair coins in a box, all heads, I would conclude that the 100% proportion of heads is far from the expected value of 50% heads. We have a significant violation of the law of large numbers for random processes, and thus a random process is rejected as the mechanism that created the all-heads pattern. By convention, the ID community classifies objects as designed if they do not conform to the products of law and chance. Whether they are designed in the ultimate sense is a separate question, but the practical rejection of the chance hypothesis in this case is unassailable.
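To make the point concrete, here is a small simulation of my own (the trial count and seed are arbitrary): across thousands of shakings of 500 fair coins, the observed proportion never strays far from 50%, let alone approaching the 100% of the all-heads configuration.

```python
import random

rng = random.Random(42)
n_coins, n_shakes = 500, 2_000

proportions = []
for _ in range(n_shakes):
    heads = sum(rng.random() < 0.5 for _ in range(n_coins))
    proportions.append(heads / n_coins)

# Every shaking lands near the 50% expectation; none comes remotely close
# to the all-heads configuration.
print(min(proportions), max(proportions))
```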

A typical mistake ID proponents make is saying, “the all-heads pattern happens on average only 1 out of 2^500 times, therefore the chance hypothesis is rejected”. The Darwinists will counter by saying, “that pattern is no more special than any other since every pattern happens only 1 out of 2^500 times, therefore all-coins-heads is consistent with the chance hypothesis”. Last year, Darwinists at The Skeptical Zone tried to pull that same rhetorical stunt on me with these words:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that is outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

But I came prepared to counter their maneuvers. :-) They obviously didn’t anticipate I’d debate them from an unorthodox angle, namely the law of large numbers and application of expected value. I pointed out based on the binomial distribution and expectation value of 50% heads, 100% heads is a violation of the law of large numbers and hence a violation of the chance hypothesis from a practical standpoint. My opponents in the debate were thrown into disarray. But as always, they never admitted defeat in the exchange. They camped out at UD and would not rest until I confessed the following creed:

if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that is outcome is perfectly consistent with fair coins,

Law of Large Numbers vs. Keiths

I told them, “no dice”. Or maybe I should have said, “no coins.” The Darwinists at Skeptical Zone fared so badly that even arch Darwinist Jeffrey Shallit felt it necessary to call his associates out on their folly.

The advantage of using the law of large numbers is that it brings clarity to the probability arguments. It negates the Darwinists’ claim that “every pattern is just as improbable as another, therefore design is nothing special”. The 500-coins-heads example illustrates how to apply the law of large numbers in identifying design in non-functional ordered objects; certain patterns are indeed special because their very nature is at variance with the chance hypothesis.

It occurred to me, since the law of large numbers was such a fruitful way to refute the materialists on the question of non-functional ordered designs, how about we use the law of large numbers when dealing with other more challenging kinds of designed objects? Those ideas, Designer willing, will be explored in subsequent discussions.

NOTES:

For some history of debates with Darwinists over the Law of Large Numbers see:

SSDD: A 22-sigma event is consistent with the physics of fair coins?

Law of Large Numbers vs. Keiths

Siding with mathgrrl on a point, alternative to CSI V2


66 Responses to The Fundamental Law of Intelligent Design

  1. I pointed out based on the binomial distribution and expectation value of 50% heads, 100% heads is a violation of the law of large numbers and hence a violation of the chance hypothesis from a practical standpoint.

    Any other individual sequence of 500 heads/tails also has the same probability. So the law of large numbers doesn’t get you anywhere. Your actual argument should be something about independent specification of a pattern. You’re the ID advocate, I shouldn’t have to help you out!

  2. Few notions in math are accorded the status of law. We have the fundamental theorem of calculus, the fundamental theorem of algebra, and the fundamental theorem of arithmetic — but the law of large numbers is not just a theorem, it is promoted to the status of law, almost as if to emphasize its fundamental importance to reality.

    This is nonsense. It is just a theorem. That it is called a law has to do with its history. It isn’t any kind of promotion of a theorem to a law.

    Moreover, by itself, it is a theorem in mathematics with no relevance to reality. It can have relevance when aspects of reality are modeled with a mathematical model for which the law of large numbers happens to be applicable.

    For reference, here’s a link to the Wikipedia page.

    If we have large set of fair coins, there is an expectation that approximately 50% of the fair coins will be heads after a vigorous shaking or flipping of the coins (a random process). That is, the expected value for the proportion of heads is 50%.

    The expected value is exactly 50%. If you tossed a bunch of coins, and expected exactly 50% to be heads, that would be foolish. “Expected value” is a technical expression in the mathematics, and is not what you should expect to happen.

    Instead of randomly tossing coins, let’s randomly shuffle up the molecules in the atmosphere. The law of large numbers says you should get mild weather. However, we actually get thunderstorms, hurricanes, tornadoes, etc. It’s only on the average that it will be mild. Individual instances can be far from that average.

    Of course, with the weather, we don’t have completely random shuffling of molecules. We have other forces such as the Coriolis force, the heat from the sun, the orbiting of the earth around the sun, the effects of volcanoes, etc. Similarly, other forces affect chemical reactions that can occur. It’s just a mistake to assume that you will always get something near the average.

    You quote this:

    if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that is outcome is perfectly consistent with fair coins,

    Law of Large Numbers vs. Keiths

    However, keiths was talking about the outcome of a single tossing experiment (involving a sequence of 500 flips). As the Wikipedia article clearly points out, “the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times.” The LLN doesn’t say anything about the outcome of a single experiment. It is only relevant to the averaged outcome of many experiments.

  3. Dr. Matzke,

    To what do I owe the honor of a visit by world’s top lobbyist for Darwin?

    Your actual argument should be something about independent specification of a pattern. You’re the ID advocate, I shouldn’t have to help you out!


    All coins heads is an independent specification. You’re the one who is making a clueless remark.

    But I chose this example to show that the chance hypothesis can be rejected in some cases even without an explicit independent specification.

    Any other individual sequence of 500 heads/tails also has the same probability. So the law of large numbers doesn’t get you anywhere.

    Wrong; the law of large numbers says the 500 coins should approach the expectation value of 50% heads, whereas 100% heads occupies the maximum possible number of standard deviations possible in principle (approaching 22 sigma). Hence, as a practical matter, the chance hypothesis can be rejected. If you were in a chem lab and performed some basic experiment that was done a zillion times before by other experimenters, but it gave you results that were 22 sigma from expectation, don’t you think that would raise some eyebrows? :-)
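The arithmetic behind that figure is short (a sketch: the standard deviation of the proportion of heads for n fair coins is 0.5/sqrt(n), so 100% heads sits sqrt(500), about 22.4, standard deviations from expectation):

```python
from math import sqrt

n = 500
sd = 0.5 / sqrt(n)          # standard deviation of the proportion of heads
sigma = (1.0 - 0.5) / sd    # how far 100% heads sits from the 50% expectation

print(f"{sigma:.1f} sigma")  # 22.4 sigma, i.e. sqrt(500)
```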

    Now if you want to argue that 500 fair coins all heads is consistent with the predicted experimental expectation for a random process, go right ahead. If you want to say that you accept the chance hypothesis for this case, feel free to record such an assertion.

    If not, I take it then you agree that I’ve successfully demonstrated the chance hypothesis, practically speaking, can be rejected.

    So what say you Nick, is chance a practical explanation for 500 fair coins heads or not?

  4. The LLN doesn’t say anything about the outcome of a single experiment. It is only relevant to the averaged outcome of many experiments.

    I mentioned every coin can be treated as an independent trial when the entire set is shaken or if the coins are individually shaken, so LLN applies. You’re not even reading what I wrote, much less refuting it. Keep trying, or simply acknowledge I was right.

  5. it [the law of large numbers] is a theorem in mathematics with no relevance to reality

    Neil, are you so determined to disagree that you’ll say something so outrageous? That has to be the quote of the day.

  6. Matzke: Any other individual sequence of 500 heads/tails also has the same probability. So the law of large numbers doesn’t get you anywhere.

    I’m a total amateur on statistics. But I’ll give it a go anyway:

    There are many more individual sequences that produce 50% heads than the one sequence that produces 100% heads.

    When we produce many sequences, we can sort the sequences into mathematical sets based on percentage.

    The set with 50% heads accommodates the most sequences; next will be the set with 49% heads and the set with 51% heads; next 48% and 52%…; finally the set with 100% heads and the set with 100% tails – the smallest sets, because each accommodates just one sequence.

    It follows that a sequence with 50% heads has the highest probability and a sequence with 100% heads (or a sequence with 100% tails) has the lowest probability.

    If I’m right, Matzke cannot say that “any other individual sequence of 500 heads/tails also has the same probability”. Because sequences which are accommodated by the ’50% heads set’ have the highest probability.

  7. Professor Matzke,

    If you apply the binomial theorem, the probability of 100% heads is (1/2)^500. Getting exactly 250 heads out of 500, while not very probable, is still 500! (factorial) times as large as all heads, an extremely large number.
    Incidentally, statistical analysis is exactly why Gregor Mendel is believed to have “enhanced” the results of his genetics experiments (confirmation bias). But I’m sure you already know that.

    Now, about that small blue planet that I challenged you with . . . ;-)

    -Q

  8. Instead of randomly tossing coins, lets randomly shuffle up the molecules in the atmosphere. The law of large numbers says you should get mild weather.

    Man, Neil, you’re on a roll. :roll: That’s another quote of the day. Everyone reading this with any modest science background knows you just made that up and that it isn’t true.

  9. Neil @ 2 . . .

    Every coin flip is a separate event.

    What you’re suggesting concerns averages of averages, which gets more complicated. Take a look at Stein’s paradox for more information.

    -Q

  10. Some of you materialists should be ashamed of yourselves.

    Would any of you, teaching the next generation assert with a clear conscience that if an individual found a configuration of 500 fair coins heads, that from a practical standpoint, they should consider chance as a possible mechanism?

    Even though 500 coins all heads is as probable as any other specific sequence, that is not an argument in favor of accepting chance as the reason the coins are all heads. The reason we accept a random process as the explanation for sequences with about 50% heads is that such a result is within expectation. So why reject 100% heads as the result of chance but not 50% heads? Answer: the law of large numbers.

  11. is still 500! (factorial) times as large as all heads, an extremely large number.

    Small correction (I think):

    I got C(500,250) = 500!/[(250!)(250!)] = 1.17 x 10^149
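That count is easy to verify exactly, since Python’s `math.comb` uses exact integer arithmetic (a quick sketch):

```python
from math import comb

half_heads = comb(500, 250)   # 500-coin sequences with exactly 250 heads
all_heads = comb(500, 500)    # just one all-heads sequence

print(f"{float(half_heads):.3e}")   # ~1.167e+149, matching the correction above
print(all_heads)                    # 1
```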

  12. Neil Rickert:

    Moreover, by itself, [the law of large numbers] is a theorem in mathematics with no relevance to reality

    Tell that to the casino owners who bet tens of billions of dollars on the law every year. You would get a chuckle or two I suspect.

    Box,

    If I’m right, Matzke cannot say that “any other individual sequence of 500 heads/tails also has the same probability”. Because sequences which are accommodated by the ’50% heads set’ have the highest probability.

    No, Box. Nick is quite correct to point out that the series 500 heads in a row has the exact same chance of happening as any other 500 toss sequence. Sal knows that too, but it does not change his analysis. Nick is also correct that a simple probability calculation (i.e., “it is vastly improbable”) does not get you to a design inference. See my post here for an explanation.

    http://www.uncommondescent.com.....er-errors/

    Finally, Nick is right that a design inference under these circumstances requires a specification. But before he shot off his mouth, he should have checked the post more closely, because, as Sal explains above, he did provide a specification. Thus, shooting off his mouth in this case only made him look foolish.

  13. Nice post Sal. Maybe 500 doesn’t seem like a “large” number to Darwinists (though for purposes of demonstrating your point it certainly is). Try “one billion heads.” Hey Darwinists, if you found one billion heads in a row, would that be consistent with a chance hypothesis? After all, what Nick says about 500 is true about one billion, i.e., “any other individual sequence of [one billion] heads/tails also has the same probability.”

  14. Sal, as a poker player I can say that an understanding of the law of large numbers is one of the keys to understanding my game. Indeed, it is fair to say that someone who does not understand the law can never be a good, much less a great, poker player.
    The best book on basic poker theory (Sklansky’s The Theory of Poker) uses the law at a fundamental level. Sklansky says that poker is not about winning any particular hand or any particular session. It is about making correct decisions, and if you consistently make correct decisions you will win in the long run even if you lose in the short run. That is why Sklansky says that you should not think of poker in terms of individual sessions. Instead, you should think of your entire poker life as one long session. That way, by making correct decisions, you skew the expected value in your favor, and over a sufficient number of trials (tens of thousands of hands, not hundreds of hands) the law of large numbers says you are bound to be a net winner. It is the same basic theory that casinos use to fleece gamblers, except in poker you have a chance to be in the position of the casino.
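That “one long session” point is just the LLN with a positive expected value. A toy sketch (the edge and swing figures here are made-up illustrations, not real poker statistics): a small per-hand edge is invisible over hundreds of hands but dominates over tens of thousands.

```python
import random

rng = random.Random(7)
edge, swing = 0.05, 1.0   # hypothetical: +0.05 bets/hand of skill, 1 bet/hand variance

def net_result(hands):
    """Cumulative result: a small constant edge buried in big per-hand swings."""
    return sum(edge + rng.gauss(0, swing) for _ in range(hands))

# A few hundred hands can easily show a loss; over tens of thousands, the
# expected value (edge * hands) dominates the variance, as the LLN predicts.
print(round(net_result(200)))
print(round(net_result(50_000)))   # lands close to the expectation of +2500
```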

  15. Barry,

    Clint Eastwood once said, “a man’s got to know his limitations.” I’ve never played a hand of poker in my life despite doing quite well in blackjack by counting cards and moving the expected value in my favor.

    I don’t play poker because I know I’d have to play against guys like you. :-) I’d rather play against an easier opponent.

    Sal

    PS

    I once took the Blackhawk casinos for a couple thousand before I scooted out of town. :-)

  16. “It is the same basic theory that casinos use to fleece gamblers”

    Sal, I have a confession. I do not always put my money where my mouth is. I have a weakness for craps. I know to a moral certainty that playing an optimal craps strategy means only that I will lose my money at the slowest possible rate.

    The optimal mathematical strategy is well known: the true odds against winning the “don’t pass” bet are 976:949. The house pays 1:1 on this bet for a 1.36% advantage. Therefore, I know that if I bet one million dollars (not that I have that kind of money!) one dollar at a time, at the end of the game it is all but mathematically certain that I would win back only approximately 986,000 for a net overall loss of approximately 14,000.
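Those figures check out in a quick sketch (the quoted 1.36% counts the barred 12, 55 pushes out of 1980 come-outs, as decisions; per resolved bet the edge is about 1.40%, which is what yields the roughly $14,000 expected loss on a million resolved $1 bets):

```python
wins, losses, pushes = 949, 976, 55   # don't-pass outcomes per 1980 come-outs

edge_resolved = (losses - wins) / (wins + losses)           # ~0.0140 per resolved bet
edge_per_bet = (losses - wins) / (wins + losses + pushes)   # ~0.0136, the quoted 1.36%

expected_loss = 1_000_000 * edge_resolved   # one million $1 resolved bets
print(round(expected_loss))                 # 14026, i.e. "approximately 14,000"
```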

    Why do I play? Because it’s fun! Duh. The fun comes from the short term variance, which gives the illusion that the game can be beaten.

    Example. My wife and I had tickets to David Copperfield a couple of months ago. We were early and I asked her if she wanted me to teach her how to play the game while we waited. She said OK, so I started playing with $200. The next shooter caught a hot hand, and 15 minutes later I cashed out with $650.00, and a good time was had by all.

    Of course, I know that unlike my poker career (in which superior play can create positive EV for the player), if I continue to play craps I will give that $450 back to the house plus a little more (assuming I continue to play optimally; if I don’t play optimally I will give back a lot more).

    So the key to playing craps? I should enjoy the short term variance, but should not fool myself about the long term inevitability, which means I should expect to lose in the long run and budget my play accordingly. My inevitable net losses will be just a “cost of entertainment” no different in principle from the money I paid for the David Copperfield tickets.

  17. Tell that to the casino owners who bet tens of billions of dollars on the law every year. You would get a chuckle or two I suspect.

    You are confusing a mathematical model with a theorem in mathematics. Look at the actual statement of the law of large numbers, as for example in the Wikipedia article. You cannot apply that to reality. You can apply it to a suitable mathematical model of some aspect of reality, which is how casinos use it.

  18. I know to a moral certainty that playing an optimal craps strategy means only that I will lose my money at the slowest possible rate.

    The optimal mathematical strategy is well known: The true odds of winning the “don’t pass” bet are 976:949. The house pays 1:1 on this bet for a 1.36% advantage. Therefore, I know that if I bet one million dollars (not that I have that kind of money!) one dollar at a time, at the end of the game it is mathematically certain that I would win only approximately 986,000 for a net overall loss of approximately 14,000.

    You can beat the casino craps game if there is a promotion in play, and I did just that with a partner.

    I had her bet the pass line with the exact same amount that I would bet on the don’t pass. The trick was trying to keep the casino from figuring out we were in collusion. It’s perfectly legal, but casinos hate people using their brains. I dressed like a boy from the hood, and she dressed like a debutante.

    We would basically cancel out each other’s losses except when two sixes were rolled, in which case she would lose and I’d merely push. So there is a slight loss rate with almost non-existent variance.
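That cancellation can be checked by enumerating the 36 come-out rolls (a sketch of my own; once a point is established, pass and don’t-pass are exact opposites and net to zero, so only the come-out matters):

```python
from fractions import Fraction
from itertools import product

net = Fraction(0)
for d1, d2 in product(range(1, 7), repeat=2):
    total = d1 + d2
    pass_pnl = 1 if total in (7, 11) else (-1 if total in (2, 3, 12) else 0)
    dont_pnl = 1 if total in (2, 3) else (-1 if total in (7, 11) else 0)  # 12 pushes

    net += pass_pnl + dont_pnl

# The hedged pair loses only when the come-out is two sixes.
print(net / 36)   # -1/36 per $1-and-$1 pair, with almost no variance
```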

    So why did we do this? We knew that if we played even a few hours, the casino would lavish us with lots of goodies that would exceed our loss, plus some cash back that would theoretically cancel our cash loss.

    If the comp accrual rate (in terms of cash, food, and hotels, in addition to the promotional coupons you get in the mail) outdoes your expected loss, you can get a slight cash edge and some nice free vacations.

    Now if the variance for one of the partners is particularly acute, that partner might get some generous offers in the mail to boot.

    The excitement here wasn’t so much craps, but getting away with the scheme and outsmarting the opponent.
    Those were some fun memories…

    I hope that might help you get some nice vacations and see some more nice shows.

    Sal

  19. Sal, I’m digging this post because it is clearly and succinctly worded.

    I understand the 50% probability predicted by the law of large numbers as it applies to binary coins. I have a few concerns/questions.

    1. The LLN does not say anything about the order of the set, only about the probability of heads, right? But it does predict that certain ordered sets (such as all heads) are less probable. What about other types of ordered sets, e.g., alternating heads/tails, or first half of set is all heads, etc.?

    2. What if, instead of coins, you used something with many more possibilities such as dice or lottery numbers or, in the case of genes, a huge number of possibilities? How would the LLN help the ID hypothesis in such cases?

    3. Or am I mistaken in item 2 above with regard to genes? Should we instead use a huge number of four-sided “dice” to represent DNA codes? If so, what would the LLN predict as far as the probability of the 4-letter code distribution in an average gene?

  20. Box,

    You have some good instincts. Having 50% or 49% or 48% heads would not cause any notice, but 100% does.

    Using the formula I provided here:
    http://www.uncommondescent.com.....air-coins/

    47.8% – 52.2% heads would cover one standard deviation, or 68% of all possible outcomes,

    43.4% – 56.6% would cover three standard deviations or 99.7% of the cases.

    Now if we used a much larger set of coins, say 1,000,000, then 99.7% of the cases would lie within a deviation of only 0.15% from expectation. That is to say, in a sample size that large, a proportion of heads above 50.15% would incline someone to suppose that there had been some manipulation, and at 50.5% heads (10 standard deviations) you’d be almost sure there were some shenanigans going on.
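These deviation figures all come straight from sd = 0.5/sqrt(n) for the proportion of heads (a quick sketch):

```python
from math import sqrt

def sd_of_proportion(n):
    """Standard deviation of the proportion of heads among n fair coins."""
    return 0.5 / sqrt(n)

print(f"{sd_of_proportion(500):.3f}")    # 0.022: about 2.2% per sigma at n=500

sd = sd_of_proportion(1_000_000)
print(f"{3 * sd:.4f}")                    # 0.0015: three sigma spans only 0.15%
print(f"{(0.505 - 0.5) / sd:.1f}")        # 10.0: 50.5% heads is ten sigma out
```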

    A multi-sigma (multiple standard deviation) event allowed an online poker cheating ring to be identified:
    http://www.nbcnews.com/id/26563848#.Uq6VejaA3rc

    When that information was posted, Michael Josem, a mathematics-minded Australian poker player, charted NioNio’s results in comparison to the results of 870 “normal” accounts with at least 2,500 hands recorded by poker-tracking software. The result, seen at left, showed that NioNio’s win rate was 10 standard deviations above the mean, or less likely than “winning a one-in-a-million lottery on four consecutive days,” Josem said.

    In the case of 500 coins all heads, that configuration approaches a whopping 22 standard deviations — you’d really think something was rigged! But apparently some Darwinists will hold fast to the notion that a 22-sigma deviation from the expectation of large numbers is perfectly consistent with random processes. :roll:

    Sal

  21. Querius:

    Every coin flip is a separate event.

    We are not concerned with physical events. We are concerned with sampling events in our mathematical model. In the case mentioned, with a sequence of 500 tosses, that full sequence is a single sampling event.

    Yes, you can have a different model where each coin toss is a sampling event. But that changes the question. The questions to be asked about this different model are different from the questions asked about the original model. Different questions have different answers.

  22. 2. What if, instead of coins, you used something with many more possibilities such as dice or lottery numbers or, in the case of genes, a huge number of possibilities? How would the LLN help the ID hypothesis in such cases?

    That’s the subject of future posts. It can be done; it just takes a bit more work. I’ve been working on the problem for about a year. :-)

    For starters, consider this post about a casino cheating scandal that involved a non-random shuffle.

    http://www.uncommondescent.com.....bjections/

    The next level is analyzing the probability of syntactically correct and semantically correct constructs within the computers of living systems. You can frame the expectation in terms of the probability of making functionally viable Quine computing systems. Life is like a Quine computer. But I’m getting way way way ahead of myself here in the scope of this discussion.

  23. 1. What about other types of ordered sets, e.g., alternating heads/tails, or first half of set is all heads, etc.?

    For the alternating pattern

    H T H T H…

    Take set composed of every other coin, and you’d get

    H H H H ….

    the expected value of the set would also be 50%, but since the set is all heads you can reject the chance hypothesis.

    That’s how to deal with such patterns.
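In code, that subsampling test looks like this (a sketch; the rule for picking the subsample must be chosen independently of the outcome):

```python
def all_same(coins):
    """True when every coin in the sample shows the same face."""
    return len(set(coins)) == 1

seq = "HT" * 250          # the alternating pattern: 500 coins, exactly 50% heads
evens = seq[0::2]         # every other coin, starting with the first
odds = seq[1::2]

# Each subsample is 250 identical coins, the maximal deviation from the 50%
# expectation, so the chance hypothesis is rejected for the whole pattern.
print(all_same(seq), all_same(evens), all_same(odds))   # False True True
```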

    The challenge is figuring out clever ways to apply the LLN.

    Sal

  24. => scordova, Mapou
    I would bring the probability distributions of events and event sequences into the mix. However, it is true that the LLN can be used to exclude chance events in many cases. But I am sure that if Darwinists are not convinced by existing proofs, they are not going to consider the LLN and probability distributions either.

  25. F/N: The LLN of course is about fluctuations, and about definable clusters of accessible microstates — accessible specific outcomes — consistent with a given macrostate.

    (This is the heart of thermodynamics and particularly the second law. More to follow.)

    What happens is that there strongly tends to be a predominant cluster of states — in the coin example 50:50 H:T and neighbouring outcomes — that as a cluster overwhelmingly dominates the population of possibilities. So, on the presumption of fair coins — two-sided dice in effect — we have exceedingly good reason to expect that 1,000-coin-toss or 500-coin-toss outcomes will be in that predominant cluster, and will therefore be near-50-50 in no particular pattern.

    That is what gives a basis for LLN, and it is what drives the inference that an outcome far away from that pattern is a reliable sign that something other than chance was at work in a highly contingent situation. Namely design. So, if one sees all heads, or all tails or another special pattern like alternating tails and heads — notice the specifications that set target zones — one is entirely justified on reliable empirical inference to conclude that the best explanation of such ORDER is design. Given, that we have a known highly contingent situation. (The order of ions in a crystal of NaCl is not a highly contingent outcome and is best explained on mechanical necessity.)

    However, there is another relevant case.

    Now, coins can be seen as 1-bit devices and we can assign H and T to 1 or 0.

    From this we see that we can find our 500 coins in a case where they say have the ASCII code for the first 73 letters of this post or a similar message. (Notice, again, the specification.)

    The coins are going to be close to 50:50 distribution, but that is not the only issue, they are in an organised and functionally specific, code bearing pattern.

    Can this happen by chance? Strictly, yes — any distribution is in principle accessible to chance.

    Is chance the reasonable inference?

    Only if the known, overwhelmingly likely explanation for such code based patterns exhibiting FSCO/I has been ruled out on good grounds.

    And that is the real problem with the typical objections to the design inference that appeal to how chance can come up with any particular outcome: they are implicitly appealing to the practically impossible, because of an implicit a priori ideological presumption.

    Why do I say that with such confidence?

    Because of the other factor at work: sampling/blind search opportunity.

    500 coins is right on the solar system level complexity threshold used for the design inference.

    Why?

    It turns out that on the gamut of atomic resources, 10^57 atoms [mostly H and He, and locked up in good old Sol BTW . . . this is very generous], on a timescale of 10^17 s (near enough the typical estimated age of the observed cosmos), and with every atom an observer taking samples at a rate comparable to the fastest ionic reactions [~ 10^-14 s], the ratio of sample size to the population of possible outcomes for 500 coins is as a one-straw sample to a cubical haystack 1,000 light years across. That is, the stack is as thick as the central bulge of our galaxy.

    If such a stack were superposed on our galactic neighbourhood and we were to move about at random and pick a one straw sized sample, the outcome would be all but certain: straw and nothing else.
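Those figures can be computed directly from the stated budget (a sketch: 10^57 atoms, 10^17 s, roughly 10^14 samples per atom per second, against the 2^500 configurations of the coins):

```python
max_samples = 10**57 * 10**17 * 10**14   # every atom sampling for the cosmos's age
config_space = 2**500                     # possible configurations of 500 coins

# The fraction of the configuration space such a search could ever touch.
fraction_sampled = max_samples / config_space
print(f"{fraction_sampled:.2e}")          # 3.05e-63: effectively a zero-size sample
```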

    This is the reason why SC’s appeal to LLN is highly relevant.

    Given the sampling/ search resources of our solar system, no practical sampling of the space for 500 coins can amount to sufficient of a scope that it is reasonable to expect on chance, 500 H or a similar pattern or an outcome reflecting a code etc. The overwhelming bulk cluster of microstate outcomes sees to that.

    So, when we see the appeal to “any outcome is possible so there should be no surprise,” that boils down to an implicit admission of having already ruled out on other grounds that another possibility known to be capable of easily and repeatedly producing patterns and codes, was at work. Namely, design.

    (And BTW, the number of reliably observed and reported cases of chance tosses producing all H, all T etc for 500 or more coins is? . . . You guessed it, zip.)

    Bottomline: the issue is not the math of coins or LLN, but the a priori impositions that demand that we revert to a practically impossible outcome, as the alternative is ruled out of bounds a priori.

    And, this is not exactly a new point, it is one that has been made any number of times, and there has never been a substantial refutation. (Indeed, the very same 500 or 1,000 coin example is the opening case used in one of my favourite statistical thermodynamics introductory texts, by L K Nash. Yes, this is the underlying statistical reasoning that undergirds thermodynamics. Another hot button case for the objectors.)

    But then, we have seen how the ilk of objectors we are routinely dealing with react to self-evident truths, which are absolutely certain.

    So, the issue is to address what is reasonable, and to ring fence and red flag what is unreasonable.

    And it is patent that to expect or argue with a straight face that within the resources of the solar system, it is plausible to generate 500 H’s, etc through chance tosses of fair coins, is — with all due respect — utterly unreasonable.

    So, I think that it is time to face the facts of the evident rhetorical situation.

    KF

  26. Sal, I think you may have struck the mother lode, so to speak. Sure, it needs to be fleshed out and expanded further but I can see the beginning of something big.

    My only worry is that, in the case of a complex genetic code, we need to derive a principle based on the LLN that can be applied to calculate its probability and exclude chance as much as possible.

    One of the problems with the random mutation of genes is that it not only flips new coins into the set (new codes), but it keeps on flipping the old ones as well. I hope this makes sense.

  27. Mapou: Cf just above. The point has been on the table for a very long time, and has long been sufficiently clear for reasonable people [e.g. it is quite clear in WmAD's NFL . . . ], but this is not a reasonable time — it is a highly ideologically polarised time. KF

  28. PS: WmAD in NFL:

    p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

    Any reasonably knowledgeable person reading this would and should immediately recognise the underlying issues and the cogency of this case.

    But for 15 years, this has met nothing but obfuscations, willful distortions and misrepresentations, and worse from precisely those who are educated enough to know what this is saying — and recall, every Biology major has had to do significant statistics. EVERY Physics and chemistry major has at least one good stat thermodynamics course under his or her belt.

The objectors know enough, or should know more than enough.

    That is the smoking gun.

    KF

  29. Thanks, kairosfocus. I guess I am not well informed about this topic. I’ll keep on reading.

  30. scordova

    LLN is the law that tells us systems will tend toward disorganization rather than organization. It is the law of math that makes the 2nd law of thermodynamics a law of physics.

    Since evolution is systems tending toward organization, LLN and evolution are exactly the inverse and are incompatible in principle. Doubters about ID/evo must be well aware of the dramatic choice they face. They must choose between math/physics and evolution. If they believe evolution rather than ID then they deny math/physics, full stop.

31. I think scordova’s LLN argument doesn’t take sequence into consideration. I am sure you are aware that some sequences have a better chance of occurring sooner than others. If you are talking of genes, then some sequences of A, T, C, G will occur sooner than other sequences. I think this should also be taken into account in our ID law.
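There is real mathematics behind this remark: although any two fixed patterns are equally likely in a given window of flips, the average waiting time until a pattern first appears can differ between patterns. A small simulation (my own illustration, not from the thread) makes the point with HH versus HT:

```python
import random

random.seed(42)

def flips_until(pattern):
    """Flip a fair coin until `pattern` first appears; return the flip count."""
    seq = ""
    while not seq.endswith(pattern):
        seq += random.choice("HT")
    return len(seq)

n = 20000
avg_hh = sum(flips_until("HH") for _ in range(n)) / n   # theory: 6 flips on average
avg_ht = sum(flips_until("HT") for _ in range(n)) / n   # theory: 4 flips on average
print(avg_hh, avg_ht)
```

HH and HT each have probability 1/4 in any two given flips, yet HH takes about 6 flips to first appear on average while HT takes only about 4.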

  32. Mapou: Recall, we have here people perfectly willing to throw logic overboard — as we have seen. KF

  33. Barry #12
    No, Box. Nick is quite correct to point out that the series 500 heads in a row has the exact same chance of happening as any other 500 toss sequence. Sal knows that too, but it does not change his analysis.

    Thanks for pointing that out Barry. So, for once, Nick is right and I’m wrong :)
    Each individual outcome has the exact same chance of happening as any other outcome. However there are vastly more outcomes in the center of the spectrum:

    Scordova #20:
47.8% – 52.2% heads would cover one standard deviation or 68% of all possible outcomes,
    43.4% – 56.6% would cover three standard deviations or 99.7% of the cases.

    So the chance that an outcome is within 43.4% – 56.6% range is vastly more likely (99.7%) than the outcome being outside that range (0.3%).
    At first sight this conclusion seems contradictory to the fact that each individual outcome has the exact same chance of happening as any other outcome. Thanks to Scordova’s excellent explanations I understand now that it is not.
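Those ranges follow from the binomial standard deviation, and a few lines of Python reproduce them (under exact arithmetic the three-sigma range comes out as roughly 43.3%–56.7%; the quoted figures round slightly differently):

```python
from math import sqrt

n, p = 500, 0.5
sd = sqrt(p * (1 - p) / n)                # sd of the *proportion* of heads, ~0.0224
one_sigma = (p - sd, p + sd)              # ~47.8%-52.2%: ~68% of outcomes
three_sigma = (p - 3 * sd, p + 3 * sd)    # ~43.3%-56.7%: ~99.7% of outcomes
print(one_sigma, three_sigma)
```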

  34. Thanks for pointing that out Barry. So, for once, Nick is right and I’m wrong

    But the usually verbose Nick Matzke is strangely silent on a simple question:

    So what say you Nick, is chance a practical explanation for 500 fair coins heads or not?

    Nick could say, “no”, in which case we’ll have it recorded that the world’s leading Darwinist can’t even understand basic statistics.

    Nick could say, “yes”, in which case I’ll ask him why. To which he should say,

    “it’s far outside of expectation”, to which I’ll ask

    “you mean the expectation that the law of large numbers says we should expect in practice?” to which Nick should say:

    “yes, Sal”, to which I’ll say,

    “So I’m right, Nick?”

    To which Nick should say,

“Yes, you’re right, Sal.”

Any wagers on whether Nick will answer this simple question? Now, Barry, you’ve been really trying to put the screws to Nick about Niles Eldredge. Nick has been so generous and quick to respond. You might consider posting this question to Nick:

    Dr. Matzke,

If you found 500 fair coins all heads on a tray, would you reject chance as an explanation? If yes, explain why. Thank you.

    Here is the problem. If Nick says, “no”, he’ll look like a fool.

    If Nick says, “yes”, he’ll have to explain why, and if he explains why, he’ll have to use exactly the line of argument I laid out. This puts Nick in a tough position, he’ll have to either:

    1. Say “no” and thus get disgraced
2. Say “yes” and thus publicly agree with a creationist and thus get disgraced too. :-)

    Whatever he says, we’ll get a lot of mileage out of it.

35. Box (#33, December 16, 2013 at 6:37 am):
    Barry #12
    No, Box. Nick is quite correct to point out that the series 500 heads in a row has the exact same chance of happening as any other 500 toss sequence. Sal knows that too, but it does not change his analysis.

    Thanks for pointing that out Barry. So, for once, Nick is right and I’m wrong :)
    Each individual outcome has the exact same chance of happening as any other outcome. However there are vastly more outcomes in the center of the spectrum:

    Scordova #20:
47.8% – 52.2% heads would cover one standard deviation or 68% of all possible outcomes,
    43.4% – 56.6% would cover three standard deviations or 99.7% of the cases.

    So the chance that an outcome is within 43.4% – 56.6% range is vastly more likely (99.7%) than the outcome being outside that range (0.3%).
    At first sight this conclusion seems contradictory to the fact that each individual outcome has the exact same chance of happening as any other outcome. Thanks to Scordova’s excellent explanations I understand now that it is not.

This is the difference between looking at the outcome of a process as a specific sequence versus looking at the outcome as some aggregate. The law of large numbers does apply to aggregates like the expected percentage of heads, just as it applies to e.g. the mean.

    But creationism/ID advocates are almost always looking at specific DNA sequences, and sequences of coin flips etc. are used as analogies to that (horrible, silly analogies, but if the creationists acknowledged how silly the analogy was, they’d probably give up creationism).

In fact, I’m sure we’ll get back to that sooner rather than later. I doubt they intend to argue that ID predicts we will see GGGGGGGGGGGGGGGG…x1000s in a DNA sequence, or that that would be evidence of ID.

  36. I’ve known Nick a long time, and here is one thing he might say.

    “I’d reject the chance hypothesis because I know humans can make the all-heads pattern”.

    The proper counter-response by an ID proponent:

    “But what is it about the all-heads pattern that would induce you to even consider humans in the first place, is it because the pattern lies far outside expectation? After all a human can also make a random looking pattern deliberately, so why does this pattern force you to consider an intelligent agency?”

    To which Nick should say:

    “Because the pattern is far outside expectation”

    To which an ID proponents would respond:

    “So there are indeed patterns in nature which would incline you to consider an intelligent agency at work if you were acquainted with such an intelligent agency”

    To which Nick should say:

    “Yes”

    To which I would say:

    “So some patterns are more special than others?”

    To which Nick should say:

    “Yes”

  37. Sal,

If evolution was entirely due to chance variation, you would have a point, but it isn’t (and you don’t). Evolution is variation + selection.

The LLN is irrelevant because it isn’t a ‘fair’ coin. Selection makes sure of that.

  38. Graham2,

By selection do you mean real natural selection or DFFM? You obviously are thinking evolution proceeds by DFFM (Darwin’s Falsified Fantasy Mechanism); it doesn’t.

But thank you for your comment anyway.

    Now, since it seems Nick is so reluctant to answer a simple question, and you’re one of the few Darwinists remaining at UD, perhaps you tell the readers what you think:

    1. for 500 fair coins found on a table, would you reject chance as an explanation for the configuration?

    2. for 500 fair coins, is the all-heads pattern particularly special for inclining you to reject chance as the mechanism that created the configuration?

    3. does this illustration indeed show there are some patterns that, if found in a particular system, would incline you to reject the chance hypothesis?

    4. for pre-biotic soups, where like coins the amino acids and DNAs would be heterochiral and not homochiral, would you reject chance as a mechanism for homochirality?

  39. If you invent a cartoon to replace evolution, then all you have left is design, so good luck.

  40. Graham2,

    Thank you for your response, but I was hoping you tell the readers if my procedure for rejecting chance as a mechanism of all fair coins heads was correct.

    I mean, I know Darwinists don’t like to ever be seen publicly agreeing with a creationist, but think about all those prospective science students out there wanting to learn basic statistical notions.

So for the sake of science, will you tell them specifically what you agree or disagree with in my analysis of 500 fair coins heads?

Is it fair to say you would reject chance as the mechanism because all coins heads is far away from expectation?

    Sal

  41. If I saw 500 heads, I would suspect interference by some external agency.

It might be informative to see why this simple example is problematic for materialists.

    Are there patterns (configurations of matter) which in principle would cause us to reject chance as the mechanism for creating the configuration (assuming the configuration cannot be reduced to law)? The answer is yes, the 500 fair coins heads illustration is one such example of many.

But Nick would be reluctant to admit that such patterns might even exist in principle, because that admits the possibility such patterns could exist in nature :shock: The 500 coin example proves such patterns can exist at least in principle.

Whether such patterns exist in biology is another story, but Nick, like so many Darwinists, will fight to defend every inch of evolutionary territory. The thought that ID proponents have a chance at identifying such patterns in nature, as I have done with 500 coins, must not really sit well with them.

On the other hand, Nick realizes if he disagrees with me on the details of the 500 coins illustration, he’ll ruin his credibility in a way that is recorded on a public forum.

So he’s in a bad position. His only recourse is to change the subject on the simple question, trivialize the illustration, or simply bail out of the debate — otherwise it’s checkmate.

  43. Graham2,

    If I saw 500 heads, I would suspect interference by some external agency.

Thank you for your courageous response. But what is it about that particular pattern versus any other? Is it because the pattern is not consistent with the expectation of a random pattern? If so, then the pattern is special by its very nature.

    Thanks again for responding.

Sal, Unlikely patterns (eg: life) are only problematic if you reject variation+selection. It’s called ‘straw man’. All this guff about coins is irrelevant.

  45. No No No No. There is nothing ‘special’ about any pattern. We attach significance to it because we like patterns, but statistically, there is nothing special about it. All sequences (patterns) are equally likely. They only become suspicious if we have specified them in advance.

  46. Sal, Unlikely patterns (eg: life) are only problematic if you reject variation+selection.

    I do, but more importantly so does nature. I commend your valor in engaging me in this debate, which is more than I can say for Dr. Matzke who is usually so eager to steal the microphone to try to put me down.

  47. They only become suspicious if we have specified them in advance.

    Again I appreciate your participation because it raises issues I’m sure are in the minds of some.

    Consider the following illustration. To help understand the illustration, I use the following convention 1-H means coin #1 is heads, 252-T means coin number 252 is tails

    1-H 2-T 3-H 4-H 5-H 6-T 7-H 8-T 9-T ….

    251-H 252-T 253-H 254-H 255-H 256-T 257-H 258-T 259-T ….

I basically repeat the pattern of the first 250 coins in the remaining 250 coins.

    If I presented this set of coins to you and if I worked hard at making it not be anything you’ve seen before, the pattern would not be technically specified in advance.

    Even though you have never seen it before, would that raise your suspicion that the total 500 coin configuration was not the pure product of chance and law?

    This obviously is relevant to cells of the same species because we have identical patterns in separate objects (each cell is an approximate copy of the other), much like they all came from an intelligently designed factory.

    Cells are not the product of chance and law alone but the product of other cells. But that raises the question, if cells today are not the product of chance and law, could chance and law make the first ancestral cell?

    So clearly, in biology we can identify patterns that can’t be explained in terms of chance and law alone. We would attribute the patterns to be the result of the factory-like machinery in parent cells — but this only defers the question of where the first cell came from, since we have already said cells in the current day don’t emerge by chance and law alone.
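The repeated-half illustration can be quantified: a random 500-flip sequence duplicates its first half in its second half with probability (1/2)^250. A small simulation (my own sketch, using a short pattern so hits are actually observable) shows the scaling:

```python
import random

random.seed(0)

def second_half_repeats(n):
    """True if the last n of 2n fair flips duplicate the first n."""
    flips = [random.randrange(2) for _ in range(2 * n)]
    return flips[:n] == flips[n:]

trials = 100_000
hits = sum(second_half_repeats(10) for _ in range(trials))
print(hits, "vs expected", trials / 2**10)   # expect ~98 repeats for n = 10
```

For n = 250 the expected count is trials/2^250, effectively zero for any feasible number of trials, which is why finding such a duplication in practice points away from pure chance.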

  48. we have already said cells … don’t emerge by chance and law alone

You have just shot yourself in the foot. You then persist with this fantasy of transferring results of coin tosses to complex real-world processes that are a mix of, well, everything. You can’t do it. It’s cartoon stuff. I can’t help you.

this fantasy of transferring results of coin tosses to complex real-world processes that are a mix of, well, everything. You can’t do it. It’s cartoon stuff. I can’t help you.

Saying “the world is so complex we can’t model it” shoots your foot, not mine, since you can’t demonstrate that evolution actually accounts for those complexities; instead you promote falsified fantasy as proven fact without even considering them. So it’s your foot being shot by appealing to unaccounted-for complexities, not mine.

50.

    Sal, “how does evolution work?”

    Graham2, “it’s complicated and stuff.”

  51. Graham2:

    No No No No. There is nothing ‘special’ about any pattern. We attach significance to it because we like patterns, but statistically, there is nothing special about it. All sequences (patterns) are equally likely.

    What utter BS. We recognize patterns precisely because they are statistically significant. This is why computer programs like speech and visual recognizers work.

52. CentralScrutinizer:

Neil Rickert: In the case mentioned, with a sequence of 500 tosses, that full sequence is a single sampling event.

Dead ass wrong, as anyone who has studied the statistics of gambling will tell you. Each toss is independent of all previous tosses, because there is no memory in the system from one roll of the dice, or flip of the coin, to the next.

Ignorant people who visit casinos often labor under the delusion that a machine is “hot” or a craps table is “hot” because nobody has gotten a good outcome recently. But it’s wrongheaded. Since there’s no memory built into the system, each play is completely independent of all previous plays. The odds do not change from play to play.

On the other hand, card games, such as Blackjack, are different. They have “memory” built into the game of a sort, because every card played rules out that same card remaining in the deck. Card counters take advantage of this fact.
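The independence point is easy to verify empirically: the flip immediately following a streak of tails is still 50/50. A quick simulation (my own sketch, not from the thread):

```python
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(200_000)]   # True = heads

# Collect every flip that immediately follows five tails in a row:
followers = [flips[i] for i in range(5, len(flips)) if not any(flips[i - 5:i])]
ratio = sum(followers) / len(followers)
print(ratio)   # hovers near 0.5: the coin has no memory of the streak
```

A card deck, by contrast, would show a drift in this ratio as cards are removed, which is exactly the memory that card counters exploit.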

  53. Okay – I’m not a scientist – nor close to one. But from a layperson’s perspective, this argument seems to make sense on the surface, but fails on deeper investigation – I’ll tell you why it appears that way to us: We are told that the universe is infinite. For life to evolve, you mention the 500 coins heads up. And yes, by itself, that appears to require an outside agency. But if you took 500 quintillion (or some other ridiculously large number) boxes of 500 coins each, and shook them, aren’t the odds approaching 100% that at least one of them will end up with all heads or tails?

    Assume each star is a box of 500 coins – then no, it doesn’t require an outside hand to create life, because this is only one box out of an infinite number. I haven’t seen this comment pop up yet, so if there is a massive and fatal flaw in this argument, feel free to let me know – I believe I mentioned I’m no scientist (in fact, a creationist myself, but this argument for ID makes no sense to me, and never has).

54.

    Jbarron,

    But if you took 500 quintillion boxes of 500 coins each, and shook them, aren’t the odds approaching 100% that at least one of them will end up with all heads or tails?

Actually, no, J. That is not correct. If every atom in the universe were a table and you shook out a box of 500 coins onto each of those tables once every second since the big bang, you would not expect to get that sequence through sheer random chance even once.
That is why Sal chose 500. It is right at the universal probability bound.
Your (wrong) intuition is also why Darwinists are able to get away with their shell game. They say, “Just put the problem in a black box labeled ‘deep time and chance’ and poof, problem solved.” Only when you actually step back and do the math do you realize that the problem is far from solved.
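The arithmetic behind that reply can be made concrete. Even granting the 500 quintillion boxes, the expected number of all-heads outcomes is (a sketch of the calculation, not a claim about any physical scenario):

```python
p_all_heads = 2.0 ** -500        # chance one box of 500 fair coins lands all heads
boxes = 500 * 10**18             # "500 quintillion" boxes
expected = boxes * p_all_heads   # expected number of all-heads boxes
print(expected)                  # ~1.5e-130: nowhere near "approaching 100%"
```

For small p, the probability of at least one success is approximately boxes × p, so 500 quintillion trials barely move the needle against 2^-500.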

  55. For future reference for anyone possibly reading this discussion years from now, a new thread was started by Barry here:

    http://www.uncommondescent.com.....ck-matzke/

    Here is my summary. The question posed was:

    If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the “heads” side of the coin, would you reject “chance” as a hypothesis to explain this particular configuration of coins on a table?

    Nick in his own words answering the more general question of rejecting chance as a mechanism:

    And our response was to say no, because there are many chance hypotheses, not just one, and a pattern like “all heads” does not therefore reject all chance hypotheses.

Then there was discussion about the possibility of a two-headed coin. If a two-headed coin was discovered as the mechanism of the 500 heads pattern, I said:

    two-headed coins is a rejection of the chance hypothesis

    to which Nick responded

    Not really.

    :shock:

    If we found the coins are 2-headed, then that is the mechanism that causes the pattern to be all heads, there is no reason whatsoever except the determination to disagree with me to say, “not really”.

Even in a simple hypothetical case where chance cannot even in principle be the explanation, Nick will insist:

    no, because there are many chance hypotheses

    Oh well, if even a two-headed coin is the cause of a 500 coin all-heads pattern, Nick will not reject chance as a possible mechanism for the pattern. :roll:

    Which suggests Nick will never ever really reject chance as a hypothesis as it pertains to design.

    PS

    I would have answered by saying:

A 2-headed coin is not a chance hypothesis, so even if true, the 2-headed coin hypothesis would automatically reject the chance hypothesis.

    The hypothesis that coins being heads because they were already heads due to the manufacturing or packing process also automatically rejects the chance hypothesis.

    In addition, I reject all irrelevant chance hypotheses on principle.

The one relevant chance hypothesis (subject to inspecting the coins), the fair-coin hypothesis (or even a slightly biased coin hypothesis), is not consistent with expectation.

    Thus I reject the chance hypothesis as an explanation for the pattern since all relevant and irrelevant hypotheses are rejected.

Okay – so assuming you did this once a second since the big bang, it won’t happen even once. I will assume that you have the math to prove that – I’m not a statistician. But to quote Douglas Adams: “Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”

    So, we can’t assume this is happening once a second. It’s happening millions of times a second. Hubble estimated 125 billion galaxies in 1999, and they’ve almost doubled that since. With 300 billion stars in a galaxy. And it’s not just the formation of these solar systems that count as shaking the box – it’s any and all events – an asteroid strike, a supernova, maybe a white hole spits out some matter. And this is all assuming that space isn’t, in fact, infinite.

    I do believe in Creation, mostly because the thought of random elements coming together to make life hasn’t ever made any sense – We can’t even do this on purpose yet, let alone prove it happened by accident. But from a layperson’s perspective, the “The numbers are just too big” argument doesn’t make sense – maybe because for us, all the numbers are too big.

57.

    Jbarron,

    “The numbers are just too big” argument doesn’t make sense – maybe because for us, all the numbers are too big.

    There is a name for what you are doing. It is called “chance of the gaps.”

    Dembski explains the concept at a lay level here: http://www.leaderu.com/offices.....CEGAPS.pdf

    Some excerpts:

    Statistical reasoning must be capable of eliminating chance when the probability of events gets too small. If not, chance can be invoked to explain anything. Scientists rightly resist invoking the supernatural in scientific explanations for fear of committing a god-of-the-gaps fallacy (the fallacy of using God as a stop-gap for ignorance). Yet without some restriction on the use of chance, scientists are in danger of committing a logically equivalent fallacy—one we may call the “chance-of-the-gaps fallacy.”

    Dembski calculates the universal probability bound as follows:

    In the observable universe, probabilistic resources come in very limited supplies. Within the known physical universe there are estimated around 10^80 elementary particles. Moreover, the properties of matter are such that transitions from one physical state to another cannot occur at a rate faster than 10^45 times per second. This frequency corresponds to the Planck time, which constitutes the smallest physically meaningful unit of time. Finally, the universe itself is about a billion times younger than 10^25 seconds (assuming the universe is between ten and twenty billion years old). If we now assume that any specification of an event within the known physical universe requires at least one elementary particle to specify it and cannot be generated any faster than the Planck time, then these cosmological constraints imply that the total number of specified events throughout cosmic history cannot exceed
    10^80 x 10^45 x 10^25 = 10^150

    Summary: Your 300 billion galaxies and 13.7 billion years still do not get you to a place where anything with a probability of less than 1 in 10^150 can happen. The 500 heads probability is less than that.
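The two quoted quantities can be compared directly, as a check on the arithmetic in the excerpt:

```python
from math import log10

# Dembski's three factors for the universal probability bound:
upb_events = 10**80 * 10**45 * 10**25   # particles x Planck-time rate x seconds
heads_exponent = log10(2**500)          # 500 heads is 1 chance in 10^150.5

print(upb_events == 10**150)            # True: the quoted product checks out
print(heads_exponent)                   # ~150.5 > 150, so P(500 heads) < 1/10^150
```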

  58. Barry Arrington:

    There is a name for what you are doing. It is called “chance of the gaps.”

    Chance of the gaps? LOL. I like that. I guess the Darwinists deserve to get a taste of their own medicine.

  59. Scordova noted

    Small correction (I think):
    I got C(500,250) = 500!/[(250!)(250!)] = 1.17 x 10^149

    Aargh. Yes, you’re right.

And that’s why some folks need to speculate on a multiverse of many trillions of universes to have even a reasonable chance. And then, considering the complexity of the DNA code, and the interlocking chemical cycles within a cell, let alone the structures in an animal or plant . . .

    Chance and necessity begin to look pretty feeble.

    -Q
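The corrected figure in #59 is easy to verify, and it also quantifies the earlier point about the bulge in the middle of the distribution:

```python
from math import comb  # Python 3.8+

exactly_half = comb(500, 250)       # 500! / (250! * 250!) ~ 1.17 x 10^149
share = exactly_half / 2**500       # fraction of outcomes with exactly 250 heads
print(len(str(exactly_half)) - 1)   # 149: the order of magnitude checks out
print(share)                        # ~0.036, vs ~3e-151 for any one fixed sequence
```

So while any one specific sequence has probability about 3 x 10^-151, roughly 3.6% of all tosses land on exactly 250 heads, which is why aggregates near 50% dominate.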

  60. Thanks for the math, Barry, and for that link – it was an interesting read, though I would put it as slightly higher than layperson understanding. Then again, I work in customer service, so maybe I’m biased (in the negative) towards laypersons such as myself, and our average level of understanding.

    I would love to see a reply on this from the other side – though I may be in the wrong place for this. From what I’ve read of the comment thread, most of the darwinians stay away from this thread, the replies have been – well, I don’t want to say lackluster and repetitive, but perhaps it isn’t too unfair to do so.

    This is probably a bit of confirmation bias – the thread is dominated by ID proponents, and the argument seems to go in circles – ID says “chance doesn’t account for it!” while Darwinians say “Yes it does!”; so it appears that the sides have split, and there seems to be a rapidly-dwindling pool of Darwinians reading/refuting these comments. If that seems to be the case – are you making these arguments to convince the average person, or other scientists?

    Just calling out, if there’s anyone out there on the opposite side, do you have a rebuttal in mathematical terms? Or an explanation of some glaring flaw in the argument, something that’s been overlooked? Are there any die-hard darwinians/statisticians still out there, who have some numbers to refute this? I am not being sarcastic or argumentative, I’m genuinely interested in a response.

  61. Jbarron,

I hope indeed you are a creationist. Sometimes when I air my doubts about ID, it comes across as me being unfairly critical of ID, but really, at heart I’m asking, “are you guys really really sure about this? This means a lot to me personally, and I don’t want to be wrong, I have too much at stake…”

    There will come a point that we cannot ultimately rule out every possibility. To rule out every possibility one would have to know everything, and at that point you’re God, and if you’re God you won’t have any need of proof or faith. In this world, we are somewhat like helpless children having to put our trust in ideas for which we don’t have complete proof.

    The way I deal with the question is this, “what would you wager your soul on given the evidence?” We make decisions all the time with incomplete evidence as to the best course of action from signing contracts, accepting job offers, having kids, etc.

    In that sense, no matter what statistics are offered, no matter how thorough the arguments, there might be some loose ends not even Einstein can tie. At some point there will have to be a mustard seed of reasonable faith as to which argument is more credible.

Maybe rather than statistics, I’ll give you a simpler illustration. Start out with a house of cards, then subject the cards to random processes. Is it reasonable, even with billions of galaxies, that the cards will spontaneously reassemble into a house? That illustration is more to the point, but the reason I chose 500 coins is that calculating the odds is very straightforward; I cannot do so with cards.

    In fact, the odds for life evolving from scratch are probably worse than a house of cards assembling spontaneously. At that point ID can only make some generous assumption in favor of chance, which means ID proponents are actually giving the Darwinists an enormous benefit of doubt to begin with.

    For what it’s worth, my Graduate Advisor in my physics program was a pioneer of quantum computing, and in the realm of quantum computing the thought of many-worlds is at least conceptually on the table. I personally do not believe in many-worlds as a reality, but more as a conceptual tool for understanding quantum mechanics — but the math formally doesn’t rule it out, and some have seized upon it as some sort of legitimate possibility.

    Only God knows the answer, but I’m stuck in the world I know, not in the many worlds of some speculative theory…

    For what it is worth, I pointed out multiple-universes is not necessarily an argument against ID either:
    http://www.uncommondescent.com.....t-happens/

So, yes, formally speaking there is always a chance ID proponents have not accounted for something, and as a general principle, I don’t like to overplay my hand. I don’t think I’ve ever categorically said, “there is absolutely no chance”, as that would be overplaying the hand I’ve been dealt in terms of data. I’ve said, “practically speaking, there is no chance”.

    I’ve wagered a lot in casinos, I wouldn’t wager that chance is the cause of life, and I certainly wouldn’t wager my soul on chance being the explanation of life.

    Like you, at some point, I simply find it harder to believe all life is an accident, and I take it your question is posed because you want to make sure ID is right. I can only say, in good conscience, that though I can’t personally demonstrate ID is absolutely true, I think I can demonstrate ID is a far better wager than mindless evolution; I’d wager my own soul on it. As far as accepting ID as true, one has nothing to lose by being wrong and everything to gain by being right.

    PS
    You can see what evolutionists themselves have said about the benefit of evolution being true in the comment section:
    http://www.uncommondescent.com.....e-to-gain/

  62. are you making these arguments to convince the average person, or other scientists?

    These discussions are, for me, a public diary of my exploration of the question of ID, and we are conversing with other individuals with pretty good math, physics, chemistry, and medical backgrounds. At least 3 UD authors are PhD physicists, 3 are PhD biologists, a few are medical doctors, and at least 2 I know of are PhD mathematicians, not to mention tons of us are engineers!

    If a famous evolutionary biologist like Dr. Nick Matzke won’t reject chance as a mechanism for the appearance of all coins-heads, it is evident the ID side will never agree with him, nor he with us.

    That’s fine, but we picked on him to show how determined he was to disagree with everything we say even to the point of saying things he would never say or teach his students.

    The reason so many of them are tossed from UD? It’s simply a waste of time to engage some of the worthless arguments going around. It’s better to engage cream of the crop Darwinists like Nick.

  63. @JBarron

    Jbarron: are you making these arguments to convince the average person, or other scientists?

    With me being somewhat in the middle, an undergrad in the field of biomedical science and aspirations of med school eventually, I can comment here.

    The arguments and discussions presented certainly aren’t friendly to the man on the street. Most, if not all, threads will presume knowledge in logic, history, biology (and more specifically, micro/cellular biology), mathematics (my Achilles Heel) and physics, with some philosophical understanding for argument’s sake.

    It may seem daunting at first but with some hard work, and patience, I feel it to be entirely possible for just about anyone (a priori bias notwithstanding) to grasp what is being discussed, and even participate.

    Ask questions. Make sure to read the links attached to posts. Search on items you may be cloudy on. Ask questions again to clarify any jargon or cloudy concepts.

    The only big thing that I’ve noticed to really draw the ire of the locals? Don’t try to BS your point into validity. It isn’t to say that I think you’d attempt such a thing, but it does happen. Often. It is called out and embarrassment can ensue.

    I hope this helps, and if I can help in any way I should be happy to do so.

  64. By the way, Jbarron,

    If you are really brave you might find some pro-evolution forum and ask them the odds of consciousness forming via evolutionary or chance processes. Ask them to give you the numbers. I posed this question here at UD:

    http://www.uncommondescent.com.....erialists/

    Ask them to provide the odds of molecules forming consciousness.

    When I place wagers in a casino, I always have some estimate of the odds. The only reason I place wagers with losing odds is to persuade the casino staff I’m an idiot so I don’t get thrown out for being too skillful.

    I once sat at a blackjack table in Las Vegas and refused to place bets until I recognized via card counting the odds were in my favor. I just sat there for 15 minutes doing nothing, and when the odds were in my favor, I pushed out a modest bet and won a few hands.

    The casino threw me out about 3 minutes after I won 3 hands. :-)

    So ask the Darwinists what the odds are of evolving consciousness. Ask them how material things give rise to the human spirit. In my opinion, this is like asking the odds that a square circle exists in Euclidean geometry.

  65. Thanks for the help guys! Scordova, I am a creationist, but it’s as a matter of faith, not reason. I went to school for Criminology (though I don’t work in that field), and am pretty heavy into the arts side of the higher education world (History/English) – my science skills are limited to basic understanding of concepts and terms.

    I enjoy hearing a good debate – one where both sides make excellent points. I don’t know that I am brave enough to go post on the evolutionary forums (though I enjoy reading them) – mostly because I know that I would be out of my depth there. I lack knowledge of their tenets, and the higher math, as I mentioned before, eludes me.

    Originally, I posted here to get a better understanding of the chance argument; and I have to say that it’s been educational – the article on the universal boundary was actually very enlightening in that regard. I remember hearing a statistician (Ron Pyke) speak at my church when I was younger; his brother was the minister there. He was my first real exposure to a scientific view of creationism, and it’s something that has intrigued me ever since.

  66. There are so many interesting things in science to discover. It’s not all evolution and Latin terminology.

    For example, speaking of casinos, sometimes I wonder whether plants are playing their environment as if they were pre-programmed to bet (pardon my anthropomorphisms).

    Most people gamble until they lose most of their cash on hand (or if they’re smart, what they planned to lose). When plants collect energy, they can “skim” some of their winnings by storing energy in their starchy roots, insurance against hard times or bad luck. Or they can invest it in producing more leaves and becoming taller. Arguably, the trick for them is to remain in the game of life as long as possible.

    If they “choose” to invest their energy in leaves, will they purchase expensive, waxy, nasty tasting, insect-resistant leaves such as those in evergreen trees (a large bet), or cheap and yummy (to insects) throwaways as in deciduous trees?

    Gambling everything into leaves or height is a losing strategy. So, considering their environment, what proportions do plants maintain between roots, leaves, and height? What do seedlings do in a dark environment?

    Is there a mathematics or programming (with at least the appearance of intelligent design) to be discovered here?

    Do plants “bet” using a Fibonacci series with respect to successive growing seasons, good or poor? Would it work in a casino? ;-)

    -Q
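    The Fibonacci “betting” question above is easy to experiment with. Below is a toy Python simulation of the classic Fibonacci betting progression (step up the sequence after a loss, step back two after a win) against an even-money roulette-style bet. The function name and parameters are my own invention, purely for illustration:

    ```python
    import random

    def fibonacci_progression(bankroll, rounds, win_prob=18/38, seed=1):
        """Toy Fibonacci staking plan on repeated even-money bets:
        after a loss, move one step up the Fibonacci sequence;
        after a win, move two steps back toward the start."""
        random.seed(seed)
        fib = [1, 1]
        step = 0
        for _ in range(rounds):
            while len(fib) <= step:
                fib.append(fib[-1] + fib[-2])
            stake = min(fib[step], bankroll)
            if stake <= 0:
                break  # bankrupt: no stake left to bet
            if random.random() < win_prob:
                bankroll += stake
                step = max(step - 2, 0)
            else:
                bankroll -= stake
                step += 1
        return bankroll

    print(fibonacci_progression(bankroll=100, rounds=200))
    ```

    Running it with different seeds and round counts is instructive: the progression reshapes the pattern of wins and losses, but no staking plan changes the expected value of independent negative-expectation bets, which is the relevant caveat for the casino quip.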
