
Jerad’s DDS Causes Him to Succumb to “Miller’s Mendacity” and Other Errors


Part 1:  Jerad’s DDS (“Darwinist Derangement Syndrome”)

Sometimes one just has to stop, gape and stare at the things Darwinists say.  

Consider Jerad’s response to Sal’s 500 coin flip post.  He says:  “If I got 500 heads in a row I’d be very surprised and suspicious. I might even get the coin checked. But it could happen.”  Later he says that if asked about 500 heads in a row he would respond:  “I would NOT say it was ‘inconsistent with fair coins.’”  Then this:  “All we are saying is that any particular sequence is equally unlikely and that 500 heads is just one of those particular sequences.” 

No, Jerad.  You are wrong.  Stunningly, glaringly, gobsmackingly wrong, and it beggars belief that someone would say these things.  The probability of getting 500 heads in a row is (1/2)^500, roughly 3 x 10^-151, a probability beyond the universal probability bound of 1 in 10^150.  Let me put it this way:  if every atom in the universe had been flipping a coin every second for the last 13.8 billion years, we would not expect to see this sequence even once.
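
For readers who want to check that arithmetic, here is a minimal sketch in Python. The ~10^80 atom count for the observable universe is a standard rough estimate supplied for illustration (it is not a figure from Sal's or Jerad's posts), and the sketch generously treats each atom as completing an entire 500-flip trial every second rather than a single flip:

```python
# Back-of-the-envelope check: even if every atom in the observable universe
# completed one 500-flip trial per second for the age of the universe, the
# expected number of all-heads runs is effectively zero.
from decimal import Decimal

p_all_heads = Decimal(2) ** -500                   # probability of one specific 500-flip sequence
atoms = Decimal(10) ** 80                          # ~atoms in the observable universe (rough estimate)
seconds = Decimal(int(13.8e9 * 365 * 24 * 3600))   # ~age of the universe in seconds

expected_runs = p_all_heads * atoms * seconds
print(f"P(500 heads)         ~ {p_all_heads:.2E}")    # ~3.05E-151
print(f"Expected occurrences ~ {expected_runs:.2E}")  # ~1.3E-53, i.e. never in practice
```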

But, insists Jerad, it could happen.  Jerad’s statement is true only in the trivial sense that flipping 500 heads in a row is not physically or logically impossible.  Nevertheless, the probability of it actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.  (Charity compels me to believe Jerad will reform his statements upon reflection.) 

But, insists Jerad, the probability of the 500-heads-in-a-row sequence is exactly the same as the probability of any other sequence.  Again, Jerad’s statement is true only in the trivial sense that any 500 flip sequence of a fair coin has the exact same probability as any other.  Sadly, however, when we engage in a non-trivial analysis of the sequence we see that Jerad’s DDS has caused him to succumb to the Darwinist error I call “Miller’s Mendacity” (in homage to Johnson’s Berra’s Blunder).*  Miller’s Mendacity is named after Ken Miller, who once made the following statement in an interview:  

One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did. 

Miller’s analysis is either misleading or pointless, because no ID supporter has ever, as far as I know, argued “X is improbable; therefore X was designed.” Consider the example advanced by Miller, a sequence of 52 cards dealt from a shuffled deck. Miller’s point is that extremely improbable non-designed events occur all the time and therefore it is wrong to say extremely improbable events must be designed. Miller blatantly misrepresents ID theory, because no ID proponent says that mere improbability denotes design. 

Let’s consider a more relevant example.  Suppose Jerad and I played 200 hands of heads-up poker and I was the dealer.  If I dealt myself a royal flush in spades on every hand, I am sure Jerad would not be satisfied if I pointed out the (again, trivially true) fact that the sequence “200 royal flushes in spades in a row” has exactly the same probability as any other 200-hand sequence.  Jerad would naturally conclude that I had been cheating, and that when I shuffled the deck I had only appeared to randomize the cards.  In other words, he would make a perfectly reasonable design inference.

What is the difference between Miller’s example and mine?  In Miller’s example the sequence of cards was only highly improbable. In my example the sequence of cards was not only highly improbable, but it also conformed to a specification.  ID proponents do not argue that mere improbability denotes design. They argue that design is the best explanation where there is a highly improbable event AND that event conforms to an independently designated specification. 
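
The distinction is easy to make concrete with a short calculation. In the sketch below (an illustration only; the choice of "all heads or all tails" as the specification and of the 225-275 band as the unremarkable bulk are mine, not figures from the post), every individual 500-flip sequence has the identical probability of roughly 3 x 10^-151, yet the specified zone is a vanishingly thin slice of the space of possibilities while the unspecified bulk soaks up essentially all of it:

```python
# Illustrative only: mere improbability vs. improbability plus specification.
from math import comb
from decimal import Decimal

N = 500
p_one_sequence = Decimal(2) ** -N                  # any single specific sequence

# A hypothetical specification: "all heads or all tails" (2 sequences out of 2^500).
p_specified_zone = 2 * p_one_sequence

# The unremarkable bulk: 225-275 heads, in any order at all.
bulk_sequences = sum(comb(N, k) for k in range(225, 276))
p_bulk = Decimal(bulk_sequences) * p_one_sequence

print(f"P(one specific sequence)    ~ {p_one_sequence:.2E}")    # ~3.05E-151
print(f"P(specified zone, 2 seqs)   ~ {p_specified_zone:.2E}")  # ~6.11E-151
print(f"P(225-275 heads, any order) ~ {p_bulk:.3f}")            # ~0.978
```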

Returning to Jerad’s 500 heads example, what are we to make of his statement that if that happened he “might” get the coin checked?  Blithering nonsense.  Of course he would not get the coin checked, because Jerad would already know to a moral certainty that the coin is not fair, and getting it “checked” would be a silly waste of time.  If Jerad denies that he would know to a moral certainty that the coin was not fair, that only means that he is invincibly stupid or piggishly obstinate or both.  Again, either way, it would make no sense to argue with him.  (And again, charity compels me to believe that upon reflection Jerad would not deny this.) 

Part 2:  Why Would Jerad Say These Things? 

Responding to Jerad’s probability analysis is child’s play.  He makes the same old tiresome Darwinist errors that we have had to correct countless times before and will doubtless have to correct again countless times in the future. 

As the title of this post suggests, however, far more interesting to me is why Jerad – an obviously reasonably intelligent commenter – would say such things at all.  Sal calls it SSDD (Space Shuttle Denying Darwinist or Same Stuff, Different Darwinist).  I call it Darwinist Derangement Syndrome (“DDS”).  DDS is somewhat akin to Tourette syndrome in that sufferers appear to be compelled to make inexplicable statements (e.g., if I got 500 heads in a row I “might” get the coin checked or “It could happen.”).   

DDS is a sad and somewhat pathetic condition that I hope one day to have included in the Diagnostic and Statistical Manual of Mental Disorders published by the American Psychiatric Association.  The manual is already larded up with diagnostic inflation; why not another? 

What causes DDS?  Of course, it is difficult to be certain, but my best guess is that it results from an extreme commitment to materialist metaphysics.  What is the recommended treatment for DDS?  The only thing we can do is patiently point out the obvious over and over and over, with the small (but, one hopes, not altogether non-existent) chance that one day the patient will recover his senses. 

*I took Ken Miller down on his error in this post

Comments
It always happens in probability discussions. It's very annoying. *harrumph*
:-) Brits, what can you do? :-) (speaking from just outside of York)
Jerad
June 27, 2013, 03:27 AM PDT
Jerad
I am not trying to speak to anything other than those simple, clear, fairly abstract and ideal circumstances.
And this is why this whole conversation is a nonsense unless we factor in our actual state of knowledge (as you have just done). In an "ideal circumstance" we might *know*, with God's Eye (or Mathematician's Eye) knowledge, that the coin was fair and was fairly tossed. In which case, no matter what the sequence, we would reject Design. But the whole point of making inferences is that we do NOT know, with God's Eye knowledge, that the coin was fair and fairly tossed. So we have to weigh up the relative probabilities of a fair coin, fairly tossed, or something else. And as 500 Heads is one of a tiny subset of Special sequences, and therefore extremely improbable, almost any other explanation is more likely than "fair coin fairly tossed". It's really no more complicated than that. Which is why I suggested a Bayesian formalisation of the inference, where at least we make our state of knowledge explicit. If we do not, we end up in silly arguments where the only difference is the amount of knowledge assumed. Jerad isn't suffering from "DDS" any more than Barry is suffering from "IDS". But the whole conversation is suffering from people thinking other people are being stupid when they are simply making different but unspecified (sometimes) assumptions about what we know to start with. It always happens in probability discussions. It's very annoying. *harrumph*
Elizabeth B Liddle
June 27, 2013, 03:19 AM PDT
Jerad: All you have managed to do is to underscore my point. KF
kairosfocus
June 27, 2013, 03:18 AM PDT
Your error has been repeatedly pointed out, from several quarters. You have insisted on a strawman distortion that caricatures the situation by highlighting something that is true in itself — that on a fair coin assumption or situation any one state is as probable as any one other state.
No, that was not a strawman distortion, that was the topic of the 22 sigma thread I responded to.
But in the wider context, we are precisely not dealing with any one state in isolation, but partitioning of the space of possibilities in light of various significant considerations.
What wider state are you talking about? I haven't responded to any thread which was about anything other than mathematics. Intentionally so.
What is to be explained is the partition we find ourselves in, on best explanation. And zones of interest that are overwhelmed by the statistical weight of the bulk of the possibilities are best explained on choice not chance.
Whatever. You're talking about clusters or groups of outcomes again and I've already agreed they are more likely to happen.
In short, you refuse to recognise that there are such things as significant partitions that can have a significance above and beyond merely being possible outcomes. Also, you are suppressing the underlying issue, that there are two known causes of highly contingent outcomes, chance and choice. Where, what is utterly unexpected — in the relevant case on the gamut of atomic resources in our solar system for its plausible lifespan — on chance, indeed is so maximally improbable that it is reliably unobservable on chance, is easily explained and feasibly observable on choice.
You can pick or define clusters or zones or partitions of outcomes that are in your interest. Sure. And you have to pick a 'measure' which, in this particular case was number of heads or tails. On other measures the outcome of all Hs would NOT be so far from the mean. If you've got a particular situation you want me to address then bring it up.
So, in a context of inference to best causal explanation, it is morally certain that choice was involved not chance. That is, one would be foolishly naive or irresponsible on matters of moment, to insist on acting as though chance is a serious explanation where choice is at all possible.
It's like shouting at a storm. I've said, MANY TIMES, I would first try and find some explanation other than chance before I fell back on that for a highly unusual result.
And, in the wider context that surfaces the underlying issue behind ever so many objections on the credible source of objects exhibiting configurations reflecting islands of function in seas of non-function: there is a major ideological a priori bias acting in origins science in our day that needs to be exposed for what it is and how it distorts reasoning on matters of origins. Namely Lewontin’s a priori materialism.
Can I get a translation please? Have you found an error in my mathematics? Doesn't look like it. If you have a situation you'd like me to address that I haven't already done multiple times then bring it up.
It is also highly significant in the context of the wider contentious debate that you have twisted the logic of the design inference into pretzels. When we come across an outcome that we cannot directly observe the source of, we need to infer a best empirically warranted explanation. The first default is mechanical necessity leading to natural regularities (which includes the latest attempted red herring, standing wave patterns revealed by dusting vibrating membranes with sand or the like). This is overturned by high contingency. If a given aspect is highly contingent — and this includes cases which may reflect a peaked/biased distribution of outcomes, not just a flat one — the default is chance if the outcome is reasonably observable on chance. Choice comes in in cases where we have highly unlikely outcomes on chance that come from highly specific, restricted zones of interest that plausibly reflect patterns that purposeful choice can account for. All heads, HT in alternation, Ht patterns exhibiting ASCII code in English, etc are cases in point.
I've already stated many times my response to this. You don't agree with me so you're trying to intimidate me into backing down or agreeing with you by posting long, rambling paragraphs which make comprehension difficult. I'll say it one more time: IF I flipped a coin 500 times and got all heads I'd try very, very, VERY hard to find an explanation for it even though I know that outcome is just as likely as any other. I might not ever really believe it was due to chance. BUT, if I couldn't find some explanation then I would write it off as a fluke, a result which is physically and mathematically consistent with the situation and no need to invoke some other agency which I looked for and couldn't find!! Same with getting a randomly generated line of Shakespeare. I am not trying to distort anything, I am not trying to speak to anything other than those simple, clear, fairly abstract and ideal circumstances.Jerad
June 27, 2013, 02:40 AM PDT
PS: It is also highly significant in the context of the wider contentious debate that you have twisted the logic of the design inference into pretzels. When we come across an outcome that we cannot directly observe the source of, we need to infer a best empirically warranted explanation. The first default is mechanical necessity leading to natural regularities (which includes the latest attempted red herring, standing wave patterns revealed by dusting vibrating membranes with sand or the like). This is overturned by high contingency. If a given aspect is highly contingent -- and this includes cases which may reflect a peaked/biased distribution of outcomes, not just a flat one -- the default is chance if the outcome is reasonably observable on chance. Choice comes in in cases where we have highly unlikely outcomes on chance that come from highly specific, restricted zones of interest that plausibly reflect patterns that purposeful choice can account for. All heads, HT in alternation, Ht patterns exhibiting ASCII code in English, etc are cases in point.kairosfocus
June 27, 2013, 01:51 AM PDT
Jerad: Let us observe your crucial strawmannising step:
Any particular single sequence is just as likely as any other single sequence in a truly ‘fair’ or random selection process. And, as I’ve said already, groups or clusters of outcomes will always have a higher probability than any single outcome. What are you arguing against?
Your error has been repeatedly pointed out, from several quarters. You have insisted on a strawman distortion that caricatures the situation by highlighting something that is true in itself -- that on a fair coin assumption or situation any one state is as probable as any one other state. But in the wider context, we are precisely not dealing with any one state in isolation, but partitioning of the space of possibilities in light of various significant considerations. What is to be explained is the partition we find ourselves in, on best explanation. And zones of interest that are overwhelmed by the statistical weight of the bulk of the possibilities are best explained on choice not chance. In short, you refuse to recognise that there are such things as significant partitions that can have a significance above and beyond merely being possible outcomes. Also, you are suppressing the underlying issue, that there are two known causes of highly contingent outcomes, chance and choice. Where, what is utterly unexpected -- in the relevant case on the gamut of atomic resources in our solar system for its plausible lifespan -- on chance, indeed is so maximally improbable that it is reliably unobservable on chance, is easily explained and feasibly observable on choice. So, in a context of inference to best causal explanation, it is morally certain that choice was involved not chance. That is, one would be foolishly naive or irresponsible on matters of moment, to insist on acting as though chance is a serious explanation where choice is at all possible. And, in the wider context that surfaces the underlying issue behind ever so many objections on the credible source of objects exhibiting configurations reflecting islands of function in seas of non-function: there is a major ideological a priori bias acting in origins science in our day that needs to be exposed for what it is and how it distorts reasoning on matters of origins. Namely Lewontin's a priori materialism. It is as simple as that. KFkairosfocus
June 27, 2013, 01:40 AM PDT
Namely, you are looking at the bare logical possibility of a given single state of 500 coins as an outcome of chance and suggest that any given state is as improbable as any other on Bernoulli-Laplace indifference.
Yes, that is what I am addressing. And if I've done something incorrectly then please point it out.
But that is not what we are looking at in praxis.
That is all I was doing. Just discussing the mathematics.
What we have, in fact, is the issue of arriving at a special — simply describable or functionally specific, or whatever is relevant — state or cluster of states, vs other, dominant clusters of vastly larger statistical weight. I am sure you will recognise that in an indifference [fair coins, here] situation, when we have such an uneven partition of the space of possibilities, clusters of overwhelming statistical weight — which will be nearly 250 H:250 T, in no particular pattern — will utterly dominate the observable outcomes.
As I already said above in different terms. I don't know what you're arguing against. Obviously a fairly jumbled mix of 500 Hs and Ts is more likely than any single outcome including all Hs. So?
What happens is that the state 500 H, that of 500 T, or a state that has in it a 72 or so character ASCII text in English, are all examples of remarkable, specially describable, specific and rare outcomes, deeply isolated in the field of possibilities.
Any particular single sequence is just as likely as any other single sequence in a truly 'fair' or random selection process. And, as I've said already, groups or clusters of outcomes will always have a higher probability than any single outcome. What are you arguing against?
So, if you are in an outcome state that is maximally improbable on chance, in a special zone that a chance based search strategy is maximally unlikely to attain, that is highly remarkable. Especially in a situation where there is the possibility of accessing such by choice contingency as opposed to chance contingency.
Why don't you specify a null and an alternate hypothesis and a confidence interval you'd like to test? Or be more clear what you're getting at.
In short, you have been tilting at a strawman.
I've been addressing a very particular mathematical point. If you can find any fault with what I've actually said then please point it out.
So, while it is strictly logically possible that lucky noise has caused all of this, that is by no means the best, empirically warranted, reasonable explanation. Indeed, it is quite evident on analysis of relevant scientific investigations, that a great many things in science are explained by investigating the sort of causal factors that are empirically reliable as causes of a given effect, then once that has been established, one treats the effect as a sign of its credible, reliably established cause. Text of posts by unseen posters is a good simple case in point.
How come everyone misses the point that I've made MANY TIMES that I would be extremely careful to first root out any bias or influence in the system before I attributed an outcome to chance?
And, if you or I were to come across a tray of 500 coins with all heads uppermost, or alternating heads and tails, or ASCII code for a text in English, that would be on its face strong evidence of choice contingency, AKA design, as the best and most reasonable — though not only possible — explanation. That is patent. So, why the fuss and bother not to infer the blatantly reasonable?
What are you arguing against? If design was detectable then I assume I would discover that BEFORE ascribing a highly unusual outcome to chance!! Design would be a bias in the system, making it not 'fair'. You are the first one to accuse your opponent of attacking a strawman but you seem to be doing so here. Nothing I've said has been overturned, I was addressing a pretty basic mathematical issue, I've been very clear that ascribing chance is my last fall back for a highly organised outcome AFTER first making very, very, very sure there was no other detectable influence. I don't get it. Should I just repeat myself over and over again?Jerad
June 27, 2013, 12:17 AM PDT
Jerad: With al due respect, you are conflating two very different things, and refusing to acknowledge the relevance of one of them. Namely, you are looking at the bare logical possibility of a given single state of 500 coins as an outcome of chance and suggest that any given state is as improbable as any other on Bernouilli-Laplace indifference. (Funny how this is popped up when it suits, and discarded by your side when it isn't; henfce ever so many silly talking points about how you can't calculate probabilities you require, nyah nyah nyah nyah nah! [That itself leaves out of the reckoning the most blatant issue of all: sampling theory.]) But that is not what we are looking at in praxis. What we have in fact, is the issue of arriving as a special -- simply describable or functionally specific, or whatever is relevant -- state or cluster of states, vs other, dominant clusters of vastly larger statistical weight. I am sure you will recognise that in an indifference [fair coins, here] situation, when we have such an uneven partition of the space of possibilities, clusters of overwhelming statistical weight -- which will be nearly 250 H:250T, in no particular pattern, will utterly dominate the observable outcomes. What happens is that the state 500 H, that of 500 T, or a state that has in it a 72 or so character ASCII text in English, are all examples of remarkable, specially describable, specific and rare outcomes, deeply isolated in the field of possibilities. So, if you are in an outcome state that is maximally improbable on chance, in a special zone that a chance based search strategy is maximally unlikely to attain, that is highly remarkable. Especially in a situation where there is the possibility of accessing such by choice contingency as opposed to chance contingency. This is the context too of the second law of thermodynamics, which points out that on chance based changes of state, the strong tendency is to migrate from less probable clusters of configs to more probable ones. (In communication systems, it is notorious that noise strongly tends to corrupt messages. And so we have an actual pivotal technical metric, signal to noise ratio, that recognises the importance and ready identifiability of the distinction between signals and noise on typical characteristics, so much so that we can measure and compare their power levels on a routine basis as a figure or merit for a communication system. but, strictly, on logical possibility, noise can mimic any signal, so why do we confidently make the distinction? Because, we have confidence in the sampling result that our observations of noise in systems will overwhelmingly come from the overwhelming bulk of possibilities.) In short, you have been tilting at a strawman. In the wider context, it is obvious that the root problem is that there is a strong aversion to the reality of choice contingency as a fundamental explanation of outcomes. Let me just say that it is for instance strictly logically possible that by lucky noise on the Internet, every post I have ever made here at UD is actually the product of blind chance causing noise on the network. However, the readers of UD have had no problem inferring that the best explanation for posts under this account is that here is an individual out there who has this account. So, while it is strictly logically possible that lucky noise has caused all of this, that is by no means the best, empirically warranted, reasonable explanation. 
Indeed, it is quite evident on analysis of relevant scientific investigations, that a great many things in science are explained by investigating the sort of causal factors that are empirically reliable as causes of a given effect, then once that has been established, one treats the effect as a sign of its credible, reliably established cause. Text of posts by unseen posters is a good simple case in point. And, if you or I were to come across a tray of 500 coins with all heads uppermost, or alternating heads and tails, or ASCII code for a text in English, that would be on its face string evidence of choice contingency, AKA design as best and most reasonable -- though not only possible -- explanation. That is patent. So, why the fuss and bother not to infer the blatantly reasonable? Because of something else connected to it. namely, this is not isolated form something very important that has been discovered over the past 60 years or so. That is, that the living cell has in its heart, digital code a associated execution machinery implemented in C-chemistry nanomachines. If we were to take this on the empirically grounded reliable inference that he best explanation for codes and code-executing machines, on billions of cases in point without observed exception is design, then that would immediately lead tot he conclusion that the living cell is best explained as designed. But those of us who do that, are commonly held up to opprobrium, to the point where I have found myself recently unfairly compared to Nazis. (And no apologies or retractions have been forthcoming when the outrage and the denial of having harboured such have been exposed.) This is because, origins science is in the grips of an a priori ideology of evolutionary materialism dressed up in a lab coat. Under those circumstances, of institutionalised question-begging in favour of an ideology that is actually inherently self-refuting, it is not surprising that common sense inductive reasoning is routinely sent to the back of the bus on whatever convenient excuse. So, I think the issue is to get the ideologically loaded and polarising a priori's fixed, then we can look back at the science, underlying inductive logic and actual evidence on a more objective basis. KFkairosfocus
June 27, 2013, 12:01 AM PDT
Are you familiar with the related results of statistical thermodynamics, which ground the 2nd law of thermodynamics? Namely, that there are some things that are so remote, so beyond observability on the relative statistical weight of clusters of partitioned configs, that they are not spontaneously observable on the gamut of a lab or the solar system or the observed cosmos? You have just done the equivalent of suggesting that you believe in perpetuum mobiles as feasible entities
If you can find something wrong with my mathematics then please point it out. I don't see how you can argue with the fact that any given sequence of Hs and Ts, including all Hs or all Ts or HTHTHTH . . . or HHTTHHTTHHTT . . . or whatever sequence you'd like to specify, is equally likely to occur if the generating procedure is truly 'fair'. Obviously any class of outcomes is more likely to occur than any given single sequence. And obviously classes closer to the 'mean' (depending on what your measure is) are more likely to occur. But just because 'we' assign meaning or significance to certain outcomes or classes of outcomes doesn't change the mathematics.
Jerad
June 26, 2013, 10:40 PM PDT
Jerad: Are you familiar with the related results of statistical thermodynamics, which ground the 2nd law of thermodynamics? Namely, that there are some things that are so remote, so beyond observability on the relative statistical weight of clusters of partitioned configs, that they are not spontaneously observable on the gamut of a lab or the solar system or the observed cosmos? You have just done the equivalent of suggesting that you believe in perpetuum mobiles as feasible entities. KF
kairosfocus
June 25, 2013, 07:46 AM PDT
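
A quick simulation makes the point about overwhelming statistical weight in the comment above tangible. The sketch below is illustrative only (the seed and the 20,000-trial count are arbitrary choices): head counts from repeated 500-flip runs stay bunched around 250 and never come remotely near 0 or 500.

```python
# Illustrative Monte Carlo: head counts from 500 fair flips cluster tightly
# around 250; extremes such as 0 or 500 heads are never seen in practice.
import random

random.seed(1)              # arbitrary seed, for reproducibility only
N_FLIPS = 500
N_TRIALS = 20_000           # arbitrary number of repeated 500-flip runs

counts = [sum(random.getrandbits(1) for _ in range(N_FLIPS)) for _ in range(N_TRIALS)]

print("min heads observed :", min(counts))             # typically a bit above 200
print("max heads observed :", max(counts))             # typically a bit below 300
print("mean heads observed:", sum(counts) / N_TRIALS)  # ~250
```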
Dr Liddle: The fundamental issue is that we are dealing with large config spaces and blind samples (for sake of argument). Once we can define narrow zones of interest [so, partition the space on something separately specifiable rather than listing out in detail the configs that we want . . . ), and once we have rather limited resources -- with W = 2^500, 10^57 atoms in our solar system for 10^17 s is very limited -- we have a situation where, not on probability but on sampling theory, we have very little likelihood of capturing such zones of interest on any blind process within the reach of resources. We have no right to expect to see anything but the overwhelming bulk partition. In the case of 500 coins, the distribution is very sharply peaked indeed, centred on 250 H:250 T. 500 H is so far away from that that it is a natural special zone (and notice how simply it can be described, i.e. how easy the algorithm to construct this config is). In more relevant cases, we have clusters, which I have described as Z1, Z2, . . . Zn, where our sampling resources are again constrained. For the 500 bit solar system case, we are looking at sampling the equivalent of one straw-sized pull, blindly, from a haystack 1,000 LY across. Even if the stack were superposed on our galactic neighbourhood, with 1,000's of star systems, since stars are several LY apart and are as a rule much smaller than a LY across, we are in a needle-in-haystack challenge on steroids. And, notice, I am not here demanding that only one state be in a zone, or that there be just one zone. Nope, so long as there is reason to see that zones are isolated and search resources are vastly overwhelmed, we are in a realm where the point holds. This then extends to the genome, where a viable one starts at 100 - 500,000 base pairs, and multicellular body plans are looking at 10 - 100+ mn bases, dozens of times over on the scope of the solar system. Where, also, we know that functional specificity and complexity joined together are going to sharply confine acceptable configs. As can be seen from just the requisites of text strings in English. Such gives analytical teeth to the inductive point that the only known, observed source of FSCO/I is design. KF
kairosfocus
June 25, 2013, 07:42 AM PDT
Hmm. I would conclude “design” until an explanation surfaces. And I do not think that chance is a plausible explanation given the astronomical odds. This is where the faith comes in. You are willing to allow chance stand as a “plausible explanation” even though the odds are ridiculously low. This to me is not a scientific or a rational conclusion.
We'll just have to agree to disagree on that then I guess.
Let’s say you were unable to examine the details of the experiment because it took place 1 million years ago. All you have are the odds to go by. You can either accept chance as a plausible explanation or you can posit some type of design. Which would you choose?
Without other data or information pointing to the existence of a designer present at the time with the necessary skills and opportunity then I'd say chance is a more parsimonious explanation as it posits no undefined or unproven cause. I would also point out that accepting design as a plausible explanation is already heading down the path of defining and limiting the skills and motivations of the designer. Something that I've been told over and over again ID does not do.
Which is the more rational or the more probable explanation?
Having no independent evidence of a designer then I'd go with chance. OR, just say we don't know. I do not see how you can think an undefined and unproven designer is a more rational explanation. That's just faith. I have nothing against faith but I don't think it should be promoted as scientific. Especially when, although admittedly highly improbable, chance is 'consistent with the laws of mathematics' and physics and known to happen.Jerad
June 25, 2013, 06:04 AM PDT
As I said, if I got 500 Hs or, indeed, any prespecified sequence on the very first try I’d be suspicious and I would check for anything that had affected the outcome. But if I found nothing ‘wrong’ then I’d conclude it was a lucky fluke. There’s no faith involved.
Well, I guess I'm a bit more simple minded than Dr. Dembski. And I bet you would be too if you were playing a poker hand and ran into someone with that kind of "luck" opposing you.
I would also NOT conclude ‘design’ since, as stated by Dr Dembski, first you have to allow for any and all non-design explanations. And chance is such a plausible explanation.
Hmm. I would conclude "design" until an explanation surfaces. And I do not think that chance is a plausible explanation given the astronomical odds. This is where the faith comes in. You are willing to allow chance stand as a "plausible explanation" even though the odds are ridiculously low. This to me is not a scientific or a rational conclusion. Let's say you were unable to examine the details of the experiment because it took place 1 million years ago. All you have are the odds to go by. You can either accept chance as a plausible explanation or you can posit some type of design. Which would you choose? Which is the more rational or the more probable explanation?tjguy
June 25, 2013, 04:20 AM PDT
Neil and Jerad, The law of large numbers is well-accepted in mathematics. Thus, I don't think Barry is misusing probability with respect to the coins. I wrote on the issue here: The Law of Large Numbers vs. KeithS
scordova
June 24, 2013, 08:24 AM PDT
JWTruthInLove, Awesome find of Shallit's essay!
scordova
June 24, 2013, 07:48 AM PDT
(above cross-posted at TSZ, with some typos fixed).
Elizabeth B Liddle
June 24, 2013, 07:00 AM PDT
I don't think I've ever seen a thread generate so much heat with so little actual fundamental disagreement! Almost everyone (including Sal, Eigenstate, Neil, Shallit, Jerad, and Barry) is correct. It's just that massive and inadvertent equivocation is going on regarding the word "probability". The compressibility thing is irrelevant. Where we all agree is that "special" sequences are vastly outnumbered by "non-special" sequences, however we define "special", whether it's the sequence I just generated yesterday in Excel, or highly compressible sequences, or sequences with extreme ratios of H:T, or whatever. It doesn't matter in what way a sequence is "special" as long as it was either deemed special before you started, or is in a clear class of "special" numbers that anyone would agree was cool. The definition of "special" (the Specification) is not the problem. The problem is that "probability" under a frequentist interpretation means something different than under a Bayesian interpretation, and we are sliding from the frequentist interpretation ("how likely is this event?"), which we start with, to a Bayesian interpretation ("what caused this event?"), which is what we want, but without noticing that we are doing so.

Under the frequentist interpretation of probability, a probability distribution is simply a normalised frequency distribution - if you toss enough sequences, you can plot the frequency of each sequence, and get a nice histogram which you then normalise by dividing by the total number of observations to generate a "probability distribution". You can also compute it theoretically, but it still just gives you a normalised frequency distribution, albeit a theoretical one. In other words, a frequentist probability distribution, when applied to future events, simply tells you how frequently you can expect to observe that event. It therefore tells you how confident you can be (how probable it is) that the event will happen on your next try. The problem arises when we try to turn frequentist probabilities about future events into a measure of confidence about the cause of a past event. We are asking a frequency probability distribution to do a job it isn't built for. We are trying to turn a normalised frequency, which tells us how much confidence we can have of a future event, given some hypothesis, into a measure of confidence in some hypothesis concerning a past event. These are NOT THE SAME THING.

So how do we convert our confidence about whether a future event will occur into a measure of confidence that a past event had a particular cause? To do so, we have to look beyond the reported event itself (the tossing of 500 heads), and include more data. Sal has told us that the coin was fair. How great is his confidence that the coin is fair? Has Sal used the coin himself many times, and always previously got non-special sequences? If not, perhaps we should not place too much confidence in Sal's confidence! And even if he tells us he has, do we trust his honesty? Probably, but not absolutely. In fact, is there any way we can be absolutely sure that Sal tossed a fair coin, fairly? No, there is no way. We can test the coin subsequently; we can subject Sal to a polygraph test; but we have no way of knowing, for sure, a priori, whether Sal tossed a fair coin fairly or not.

So, let's say I set the prior probability that Sal is not honest at something really very low (after all, in my experience, he seems to be a decent guy): let's say, p=.0001. And I put the probability of getting a "special" sequence at something fairly generous - let's say there are 1000 sequences of 500 coin tosses that I would seriously blink at, making the probability of getting one of them 1000/2^500. I'll call the observed sequence of heads S, and the hypothesis that Sal was dishonest, D. From Bayes theorem we have:

P(D|S) = [P(S|D)*P(D)] / [P(S|D)*P(D) + P(S|~D)*P(~D)]

where P(D|S) is what we actually want to know, which is the probability of Sal being Dishonest, given the observed Sequence. We can set P(S|D) (i.e. the probability of a Special sequence given the hypothesis that Sal was Dishonest) as 1 (there's a tiny possibility he meant to be Dishonest, but forgot, and tossed honestly by mistake, but we can discount that for simplicity). We have already set the probability of D (Sal being Dishonest) as .0001. So we have:

P(D|S) = [1*.0001] / [1*.0001 + 1000/2^500*(1-.0001)]

Which is, as near as dammit, 1. In other words, despite the very low prior probability of Sal being dishonest, now that we have observed him claiming that he tossed 500 heads with a fair coin, the probability that he was being Dishonest is now a virtual certainty, even though throwing 500 Heads honestly is perfectly possible, entirely consistent with the Laws of Physics, and, indeed, the Laws of Statistics. Because the parameter P(S|~D) (the probability of the Special sequence given not-Dishonesty) is so tiny, any realistic evaluation of P(~D) (the probability that Sal was not Dishonest), however great, is still going to make the term in the denominator, P(S|~D)*P(~D), negligible, and the denominator always only very slightly larger than the numerator. Only if our confidence in Sal's integrity exceeds 500 bits will we be forced to conclude that the sequence could just as or more easily have been Just One Of Those Crazy Things that occasionally happen when a person tosses 500 fair coins honestly.

In other words, the reason we know with near certainty that if we see 500 Heads tossed, the Tosser must have been Dishonest, is simply that Dishonest people are more common (frequent!) than tossing 500 Heads. It's so obvious, a child can see it, as indeed we all could. It's just that we don't notice the intuitive Bayesian reasoning we do to get there - which involves not only computing the prior probability of 500 Heads under the null of Fair Coin, Fairly Tossed, but also the prior probability of Honest Sal. Both of which we can do using Frequentist statistics, because they tell us about the future (hence "prior"). But to get the Posterior (the probability that a past event had one cause rather than another) we need to plug them into Bayes.

The possibly unwelcome implication of this, for any inference about past events, is that when we try to estimate our confidence that a particular past event had a particular cause (whether it is a bacterial flagellum or a sequence of coin-tosses), we cannot simply estimate it from the observed frequency distribution of the data. We also need to factor in our degree of confidence in various causal hypotheses. And that degree of confidence will depend on all kinds of things, including our personal experience, for example, of an unseen Designer altering our lives in apparently meaningful and physical ways (increasing our priors for the existence of Unseen Designers), our confidence in expertise, our confidence in witness reports, our experience of running phylogenetic analyses, or writing evolutionary algorithms. In other words, it's subjective. That doesn't mean it isn't valid, but it does mean that we should be wary (on all sides!) of making overconfident claims based on voodoo statistics in which frequentist predictions are transmogrified into Bayesian inferences without visible priors.
Elizabeth B Liddle
June 24, 2013, 06:45 AM PDT
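
For readers who want to see the Bayesian arithmetic in Dr Liddle's comment above worked through numerically, here is a minimal sketch using the same illustrative numbers she proposes (a prior of .0001 that the reporter is dishonest, and 1000 "special" sequences out of 2^500); it is only a restatement of her calculation in code, not an endorsement of any particular prior.

```python
# Numerical version of the Bayesian argument above: even a tiny prior on
# "dishonest reporter" swamps the astronomically small chance of an honest
# 500-heads run, so P(dishonest | 500 heads) is essentially 1.
from fractions import Fraction

p_dishonest = Fraction(1, 10_000)              # prior: the reporter is dishonest
p_seq_given_dishonest = Fraction(1)            # a dishonest reporter can produce any "special" sequence
p_seq_given_honest = Fraction(1000, 2**500)    # 1000 "special" sequences out of 2^500

numerator = p_seq_given_dishonest * p_dishonest
denominator = numerator + p_seq_given_honest * (1 - p_dishonest)
posterior = numerator / denominator            # P(dishonest | special sequence observed)

print(float(posterior))        # 1.0 to machine precision
print(float(1 - posterior))    # ~3e-144: residual chance it was an honest fluke
```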
Chance can't do anything; chance cannot flip the coins.....
Andre
June 24, 2013, 03:52 AM PDT
Which is more rational to believe? Which takes more faith to believe? 1. That 500 coins were tossed and they landed in exactly the order you predicted ahead of time by pure chance? Or 2. That there was monkey business involved? If you have the faith to believe that it happened by pure total chance, fine, we just don’t think that is rational given the odds.
As I said, if I got 500 Hs or, indeed, any prespecified sequence on the very first try I'd be suspicious and I would check for anything that had affected the outcome. But if I found nothing 'wrong' then I'd conclude it was a lucky fluke. There's no faith involved. I would also NOT conclude 'design' since, as stated by Dr Dembski, first you have to allow for any and all non-design explanations. And chance is such a plausible explanation.Jerad
June 24, 2013, 02:30 AM PDT
BB: For examples of contradictory other hands, watch what happens when an evo mat advocate is pressed on the want of a root to the tree, and how the evidence of what chem and physics applies in warm little ponds does not point to a credible possibility of OOL. Very fast, they will pull the switcheroo that OOL strictly is not part of the theory of evo. (This has happened ever so many times here at UD, and I suspect Talk Origins will exemplify same, etc.) KF
kairosfocus
June 24, 2013, 01:44 AM PDT
PS: Let me here reproduce the core argument from 48, just to show the point: _______________________ [Clipping 48 in the DDS mendacity thread, for record:] >>It seems people have a major problem appreciating: (a) configuration spaces clustered into partitions of vastly unequal statistical weight,and (ii) BLIND sampling/searching of populations under these circumstances. It probably does not help, that old fashioned Fisherian Hyp testing has fallen out of academic fashion, never mind that its approach is sound on sampling theory. Yes it is not as cool as Bayesian statistics etc, but there is a reason why it works well in practice. It is all about needles and haystacks. Let’s start with a version of an example I have used previously, a large plot of a Gaussian distribution using a sheet of bristol board or the like, baked by a sheet of bagasse board or the like. Mark it into 1-SD wide stripes, say it is wide enough that we can get 5 SDs on either side. Lay it flat on the floor below a balcony, and drop small darts from a height that would make the darts scatter roughly evenly across the whole board. Any one point is indeed as unlikely as any other to be hit by a dart. BUT THAT DOES NOT EXTEND TO ANY REGION. As a result, as we build up the set of dart-drops, we will see a pattern, where the likelihood of getting hit is proportionate to area, as should be obvious. That immediately means that the bulk of the distribution, near the mean value peak, is far more likely to be hit than the far tails. For exactly the same reason why if one blindly reaches into a haystack and pulls a handful, one is going to have a hard time finding a needle in it. The likelihood of getting straw so far exceeds that of getting needle that searching for a needle in a haystack has become proverbial. In short, a small sample of a very large space that is blindly taken, will by overwhelming likelihood, reflect the bulk of the distribution, not relatively tiny special zones. (BTW, this is in fact a good slice of the statistical basis for the second law of thermodynamics.) The point of Fisherian testing is that skirts are special zones and take up a small part of the area of a distribution, so typical samples are rather unlikely to hit on them by chance. So much so that one can determine a degree of confidence of a suspicious sample not being by chance, based on its tendency to go for the far skirt. How does this tie into the design inference? By virtue of the analysis of config spaces — populations of possibilities for configurations — which can have W states and then we look at small, special, specific zones T in them. Those zones T are at the same time the sort of things that designers may want to target, clusters of configs that do interesting things, like spell out strings of at least 72 – 143 ASCII characters in contextually relevant, grammatically correct English, or object code for a program of similar complexity in bits [500 - 1,000] or the like. 500 bits takes up 2^500 possibilities, or 3.27*10^150. 1,000 bits takes up 2^1,000, or 1.07*10^301 possibilities. To give an idea of just how large these numbers are, I took up the former limit, and said now our solar system’s 10^57 atoms (by far and away mostly H and He in the sun but never mind) for its lifespan can go through a certain number of ionic chemical reaction time states taking 10^-14s. Where our solar system is our practical universe for atomic interactions, the next star over being 4.2 light years away . . . light takes 4.2 years to traverse the distance. 
(Now you know why warp drives or space folding etc is so prominent in Sci Fi literature.) Now, set these 10^57 atoms the task of observing possible states of the configs of 500 coins, at one observation per 10^-14 s. For a reasonable estimate of the solar system’s lifespan. Now, make that equivalent in scope to one straw. By comparison, the set of possibilities for 500 coins will take up a cubical haystack 1,000 LY on the side, about as thick as our galaxy. Now, superpose this haystack on our galactic neighbourhood, with several thousand stars in it etc. Notice, there is no particular shortage of special zones here, just that they are not going to be anywhere near the bulk, which for light years at a stretch will be nothing but straw. Now, your task, should you choose to accept it is to take a one-straw sized blind sample of the whole. Intuition, backed up by sampling theory — without need to worry over making debatable probability calculations — will tell us the result, straight off. By overwhelming likelihood, we would sample only straw. That is why the instinct that getting 500 H’s in a row or 500 T’s or alternating H’s and T’s or ASCII code for a 72 letter sequence in English, etc, is utterly unlikely to happen by blind chance but is a lot more likely to happen by intent, is sound. And this is a simple, toy example case of a design inference on FSCO/I as sign. A very reliable inference indeed, as is backed up by literally billions of cases in point. Now, onlookers, it is not that more or less the same has not been put forth before and pointed out to the usual circles of objectors. Over and over and over again in fact. And in fact, here is Wm A Dembski in NFL:
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and and T measures at least 500 bits of information . . . ”
(And, Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.) Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread? First, it cannot be because of lack of capacity to access and understand the actual argument, we are dealing with those with training in relevant disciplines. Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years. Nor is such a consistent error explicable by blind chance, chance would make them get it right some of the time, by any reasonable finding, given their background. So, we are left with ideological blindness, multiplied by willful neglect of duties of care to do due diligence to get facts straight before making adverse comment, and possibly willful knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it. Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion. Mendacity in one word. If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that nope, we are making a BIG mistake. The alignment is too perfect. Yes, at the lower end, those looking for leadership and blindly following are jut that, but at the top level there is a lot more responsibility than that. Sad, but not surprising. This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating. Where, enough is enough.>> ______________ Prediction: this too will be studiously ignored in the rush to make mendacious talking points. (NR, KS, AF et al just prove me wrong by actually addressing this on the merits. Please.) KFkairosfocus
June 24, 2013, 01:18 AM PDT
And if you want to do 20 amino acids in a specified sequence, here is more fun! http://www.random.org/sequences/ Good Luck with chance and random! You will quickly learn the only workable solution is by doing a very specific arrangement using a mind! Knock yourselves out!
Andre
June 24, 2013, 01:02 AM PDT
Onlookers: Observe how studiously Darwinist objectors have ignored the issues pointed out step by step at 48 above. It is patent that mere facts and reason are too inconvenient to pay attention to in haste to make favourite talking points. Which reminds me all too vividly about the exercise over the past month in which direct proof of the undeniability of a patent fact, that error exists suddenly turned into rhetorical pretzels. We are dealing here with ideological agendas all too willing to resort to mendacity by continuing a misrepresentation, not reason and certainly not reason guided by a sense of duty to accuracy and fairness. Be warned accordingly. KFkairosfocus
June 24, 2013, 12:56 AM PDT
BB: In short, EVERY time Darwinists appeal to the tree of life icon -- starting with Darwin himself (the ONLY diagram in Origin as originally published) -- they imply a root. The utter absence of a plausible explanation for the root, highlighted by the sort of thing we see with the MU experiment in textbooks, is a smoking gun. Indeed, it is worse than that, as we are talking about origin of digital info bearing coded systems and the machines that process in co-ordination, for which the only credible, empirically warranted explanation is design. Then, design sits at the table from the root up, so design is available at every step of the tree of life, and it is the only thing that can in light of empirical verification of capacity explain the origin of major body plans dozens of times over needing 10 - 100 mn + bits of additional info, each. So is the sort of rhetorical game above that ignores what was pointed out, step by step at 48 above. KFkairosfocus
June 24, 2013, 12:52 AM PDT
Assuming the coins are fair, go have fun; you can do up to 200 at one time.... see if you will ever get 200 heads! http://www.random.org/coins/
Andre
June 24, 2013, 12:49 AM PDT
Correction to my comment above. The indices are wrong: the probability that the first flip matches F1, times the probability that the second flip matches F2, times the probability that the third flip matches F3, … times the probability that the Nth flip matches Fn.
keiths
June 24, 2013, 12:35 AM PDT
JWTruthInLove, Shallit's post is entitled "Confusion Everywhere", which is appropriate since he himself is confused. He writes:
The example is given of flipping a presumably fair coin 500 times and observing it come up heads each time. The ID advocates say this is clear evidence of "design", and those arguing against them (including the usually clear-headed Neil Rickert) say no, the sequence HH...H is, probabilistically speaking, just as likely as any other.
Which is correct if the coin is fair, not just "presumably fair". And that is what eigenstate specified in the quote that started this whole debate:
Maybe that’s just sloppily written, but if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that is outcome is perfectly consistent with fair coins, and as an instance of the ensemble of outcomes that make up any statistical distribution you want to review. That is, physics is just as plausibly the driver for “all heads” as ANY OTHER SPECIFIC OUTCOME.
Eigenstate is correct. Take any specified sequence of coin flips {F1, F2, ... Fn} where each Fi is either H (heads) or T (tails). The probability of getting that precise sequence when flipping a fair coin is equal to: the probability that the first flip matches F0, times the probability that the second flip matches F1, times the probability that the third flip matches F2, ... times the probability that the Nth flip matches Fn. The coin is fair, meaning that the probability of a match is the same whether Fi is H or T: exactly 1/2. Therefore, the probability of matching any specific sequence of length n is exactly the same, regardless of its content: (1/2)^n. Now if you drop the stipulation that the probability distribution is known and fair, then the question becomes much more interesting. However, Sal is still wrong.
This is an old paradox... The solution is by my UW colleague Ming Li and his co-authors. The basic idea is that Kolmogorov complexity offers a solution to the paradox: it provides a universal probability distribution on strings that allows you to express your degree of surprise on encountering a string of symbols that is said to represent the flips of a fair coin.
Two problems with that statement: 1. We don't need a probability distribution, because we already have one. Eigenstate specified that the coins were fair, and we know what that distribution looks like. 2. Even setting #1 aside, Kolmogorov complexity cannot act as a proxy for (lack of) surprise. Consider my example above involving social security numbers. If I roll my SSN, I'm surprised because it is my SSN, not because of its Kolmogorov complexity.
But the ID advocates are also wrong, because they jump from "reject the fair coin hypothesis" to "design".
Yes, as I pointed out earlier:
Given that we observe a sequence of 500 heads, which explanation is more likely to be true? a) the coins are fair, the flips were random, and we just happened to get 500 heads in a row; or b) other factors are biasing (and perhaps determining) the outcome. The obvious answer is (b). In the case of homochirality, Sal’s mistake is to leap from (b) directly to a conclusion of design, which is silly. In other words, he sees the space of possibilities as {homochiral by chance, homochiral by design}. He rules out ‘homochiral by chance’ as being too improbable and concludes ‘homochiral by design’. Such a leap would be justified only if he already knew that homochirality couldn’t be explained by any non-chance, non-design mechanism (such as Darwinian evolution). But that, of course, is precisely what he is trying to demonstrate. He has assumed his conclusion.
I suspect that Shallit will agree with all of this once he realizes that this entire debate has been about a case in which the coins are known to be fair, not just "presumably fair".
keiths
June 24, 2013, 12:23 AM PDT
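
keiths' product argument is straightforward to verify directly. The sketch below is illustrative (the second target is just a randomly generated string of my own choosing): matching any specific target sequence flip by flip has probability (1/2)^n for a fair coin, regardless of the target's content.

```python
# Verifying the point above: for a fair coin, the probability of matching ANY
# specific target sequence is the product of per-flip match probabilities,
# i.e. (1/2)**n regardless of what the target looks like.
import random
from fractions import Fraction

def p_match(target: str) -> Fraction:
    """Probability that a fair coin reproduces `target` exactly, flip by flip."""
    p = Fraction(1)
    for _ in target:               # each flip matches H or T with probability 1/2
        p *= Fraction(1, 2)
    return p

random.seed(0)                      # arbitrary seed, for reproducibility only
all_heads = "H" * 500
other = "".join(random.choice("HT") for _ in range(500))   # some other specific 500-flip target

assert p_match(all_heads) == p_match(other) == Fraction(1, 2**500)
print("identical probabilities:", p_match(all_heads) == p_match(other))   # True
```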
Jerad@80
Specify any sequence of Hs and Ts 500 long and if you set it as your target you might not ever get it. They’re all equally likely and unlikely.
When you throw a coin 500 times, the probability of having some kind of an outcome is 100%. But the probability of getting the outcome you need is astronomically small, so small as to be virtually zero. Is it impossible? Mathematically speaking, of course not. But, like everyone mentioned, if you got it on the first chance, everyone would "know" you cheated. And yet, Darwinists have to believe that it happened without any monkey business! That takes faith! Which is more rational to believe? Which takes more faith to believe? 1. That 500 coins were tossed and they landed in exactly the order you predicted ahead of time by pure chance? Or 2. That there was monkey business involved? If you have the faith to believe that it happened by pure total chance, fine, we just don't think that is rational given the odds.
tjguy
June 23, 2013, 11:48 PM PDT
Excellent post! It is amazing to see what Darwinists are willing to believe! And yet they accuse us of having faith. This is just a simple ploy to try and avoid the implications of the astronomically small odds of their creation story being true. So, in the origin of life, we have the problem of chirality - which is not the only problem by any means, but it fits this illustration. All amino acids used to make proteins in life are left handed acids, but in nature they appear with a 50-50 mix of both right handed and left handed molecules. So a Darwinist has to believe that all the amino acids used in the original cell, the first life, just happened by pure chance to be left handed molecules. Here is a specified pattern that has to be met for life to exist. Now most Darwinists have enough sense NOT to just blindly accept that this happened by chance because they know that looks really ridiculous. So they look for other explanations. No good explanations have been forthcoming yet, but that doesn't stop them from trying. Which is fine, but the evidence we have available to us now, points to intelligent intervention because I doubt anyone is really willing to say that it happened by pure chance.tjguy
June 23, 2013, 11:40 PM PDT
Even darwinist mathematicians think that Neil & Friends are wrong:
Confusion Everywhere: "So Rickert and his defenders are simply wrong."
JWTruthInLove
June 23, 2013, 11:23 PM PDT