
The TSZ and Jerad Thread, continued


Part of me feels like letting the TSZ thread go to a full 1,000 comments, but then my sense of responsibility to UD’s bandwidth budget kicks in.

So, let us continue the discussion of the topics from the thread on TSZ issues and Jerad’s concerns here.

To prime the pump, let me clip two posts in the thread:

______________

>>912

KF (911) – ooo, spooky

Are you unable to see that when those individual configs come in clusters that are functionally distinct, it is relevant to think about the relative statistical weights of the clusters?

Hitting a cluster would have a higher probability than hitting a single config, but only because a cluster consists of many configs. [a –> The precise point, now work on the implications of this] A purely blind random search means every config is equally likely, so groups or clusters of configs would have higher probability; how high would depend on how big they are, not their functionality. [b –> Strawman, I never said that the likelihood of finding depended on functionality or not, just that the constraints of multiple well matched parts arranged correctly to function mean that FSCO/I comes in narrow sectors of the space W. And to see why this is so I gave mechanical and molecular nanotech cases.]

Consider a Cardinal spinning reel. The atoms and parts it is made of can be in certain functional configs, or non-functional ones, including scattered all over the earth. Obviously there are far more non-functional than functional ways. If each individual way is equiprobable, the non-functional cluster is far more likely on a blind pick than a functional one.

Yup, under the assumption there are more non-functional configs than functional configs. [c –> But this is illustrative of the general pattern as well, and BTW this is why the pricked cell Humpty Dumpty experiments are also relevant.]

This is the reasoning behind the 1,000 LY cubical hay bale and our galactic neighbourhood. The star systems are special zones, but the space between so dominates that a blind one-straw-sized sample will all but certainly come up straw. In fact, the likelihood of getting anything but straw is negligibly different from zero. Where of course our solar system is in effect only able to take a one-straw-sized sample of the config space for just 500 bits.
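[ –> Aside: to make the one-straw point concrete, a minimal Python sketch follows; the sampling figures (10^57 atoms, 10^14 samples per atom per second, 10^17 s) are illustrative assumptions of the kind commonly used in these discussions, not measured values.]

```python
from fractions import Fraction

# Number of configs in a 500-bit configuration space
W = 2**500                              # ~3.27e150

# Generous upper bound on blind samples: ~1e57 atoms, each taking a
# sample every 1e-14 s, for ~1e17 s (illustrative figures)
samples = 10**57 * 10**14 * 10**17      # = 1e88

# Fraction of the space even this maximal blind search can touch
print(float(Fraction(samples, W)))      # ~3.1e-63: the 'one straw' point
```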

Operating under the assumption that there are few functional states, of course. [e –> Not a dismissible assumption, which is tantamount to saying question begging. I explained and exemplified why I asserted that FSCO/I naturally comes in narrow zones T in W. If you dispute this, for which I have many cases, you need to show the counterexamples; all you have done is say yes, under the assumptions. Not an assumption but a fact: the atoms of the Cardinal were originally scattered all over the planet, but showed no function until intelligence led to their assembly into that famous fishing reel] But we don’t know the number of functional configs. I agree, it’s probably small compared to the whole space. [f –> Grudging concession, but the crucial one; what follows is that, given the exponential nature of the config space, samples on the gamut of the solar system or even the cosmos that are blind are maximally improbable to hit on the functional clusters.]

The best explanation for seeing the 500 coin BB in a special state, under such circumstances, is that we are not looking at a blind sample.

Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful. [f –> Oh yes you can, if say you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.]

If you got 5 or 10 or 250 meaningful configs on successive independent random samples THEN you might have an argument that the sample was biased, i.e. that the null hypothesis is wrong. Or even 250 functional configs out of 400 random samples. [g –> Irrelevant. You know or should know that given the overwhelming imbalance in the statistical weight of the clusters, FSCO/I will to all but absolute certainty be unobservable on blind sampling. AND in the material case of life forms, we start at about 100 – 1,000 k bits, that is, 200 – 2,000 times over getting samples from the odd and isolated zone, at 500 bits apiece.]
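[ –> As a statistical aside: under a blind-sampling null hypothesis with a tiny per-trial hit probability, even one functional hit in hundreds of trials is wildly unexpected, let alone 250 of 400. A minimal sketch, where the cluster and space sizes (2^20 configs in a 2^100 space) are made-up illustrative numbers:]

```python
from math import comb, log10

# Illustrative per-trial hit chance: a functional cluster of 2**20
# configs inside a 2**100 config space, i.e. p = 2**-80
log10_p = -80 * log10(2)                       # ~ -24.1

n = 400
# For tiny p, P(at least one hit in n trials) is ~ n * p
print(f"P(>=1 hit in {n} trials) ~ 1e{log10_p + log10(n):.0f}")

# 250 hits out of 400: the binomial term C(400,250) * p**250 (the
# (1-p)**150 factor is ~1), computed in logs to dodge float underflow
log10_term = log10(comb(400, 250)) + 250 * log10_p
print(f"P(exactly 250 hits in 400) ~ 1e{log10_term:.0f}")
```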

If you argue that we effectively have 1000s of random samples that turned out to be functional life forms then you are a) not accounting for samples that turned out not to be functional (we would have no record of those) and b) arguing against a proposition that is not being made by evolutionary theory. [h –> Strawman.] You would be assuming [i –> Strawman, and turnabout of the reasonable burden of empirical warrant.] there exist islands of function in the life config space and that some of our existing life forms come from different islands. Even if there are different islands how do you know our existing life forms are from different ones? [j –> Check out the, was it, 6,000 protein fold domains to see islands of function as empirically warranted, and then move on up to the 10 – 100 million bits of fresh FSCO/I to make new body plans dozens of times over, then cf the characteristic pattern of sudden appearances, stasis and gaps in the fossil record. The evidence of islands is there if you are willing to look it in the eye.]

Instead, we know from observation that, say, coins arranged as the first 72 or so ASCII characters of this comment would be very easily explained on design. And if the Mars rover were to run into a crater with a wall and an inscription or diagram on it, we would instantly and properly infer to design.

A diagram on a wall is not coin tosses or living systems so the analysis is different. [k –> And did you notice how we have consistently shown how to reduce FSCO/I to coded strings, which are equivalent to text on the wall? Where also the DNA code in the living cell, used to assemble proteins and to regulate, is a text string, equivalent to writing on the wall.] You have to be very, very sure the diagram has meaning. [l –> You don’t have to know the meaning; once you see a diagram pattern it would be proof positive. Cf the Voynich manuscript, discussed in IOSE.]

People see Jesus’s picture on pieces of toast all the time but that doesn’t mean it was put there or designed. [m –> Well within the FSCO/I limits, cf the IOSE discussion of the Old Man of the Mountain vs Mt Rushmore; you have not done your homework.] Consider the config space of a piece of toast, all the possible ‘looks’ you could get. I bet the space has cardinality bigger than 2^1000. And yet, every so often, a Jesus toast pops up. Pareidolia can be very misleading. [n –> Do you see the problem of S = 0 as default, i.e. lacking functional specificity? Burn marks on toast are not equivalent to a diagram and you know it.]

Can you see why I have argued as just above? Can you agree that the argument is reasonable? Why, or why not?

I only disagree that a single randomly selected config out of a huge config space can imply design. [o –> Strawman, the point was that random selection will not credibly hit on FSCO/I, for reasons given in detail. ] The math just doesn’t support that contention.

If not, then kindly explain to us the logic used in Fisherian hypothesis testing on whether an observation is in the bulk or the far skirt of a distribution premised on the null hyp.

Those kinds of analyses are based on (hopefully) large samples and come with confidence intervals. If you want to set up that kind of hypothesis testing please do so. [p –> You are slipping and sliding to a strawman; the point of the analysis is that random samples will predominantly come from the bulk, not the far tails, which are special zones.] BUT, the point of the confidence interval is to indicate that the conclusion can STILL be wrong. [q –> And any inductive conclusion can be wrong, hence best CURRENT explanation; however, there are any number of such that are morally certain, e.g. the sun will rise tomorrow, and error exists. That you will not pick up deeply isolated special zones on a sample comparable to one straw from a hay bale as thick as our galaxy is the same.]
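[ –> To illustrate the bulk-vs-far-skirt point that both sides keep circling, a quick simulation sketch; the standard normal is a stand-in distribution chosen for illustration:]

```python
import random

# One million blind draws from a standard normal: how often do we
# land in the far skirt (beyond 4 sigma) rather than the bulk?
random.seed(1)
N = 1_000_000
tail = sum(abs(random.gauss(0, 1)) > 4 for _ in range(N))
print(tail / N)   # on the order of 6e-5; the bulk utterly dominates
```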

And again, medical trials and the kind of situations that use Fisherian analysis are not based on a single sample. Your confidence interval in such a situation would be very nearly zero. [r –> Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.]

Just to pick up what caught my eye, do you not know that a living cell is encapsulated and has smart gates that control what comes in or goes out? That it is a metabolising device, and that it self-replicates on a vNSR, using codes and algorithms executed through molecular nanotech devices?

I just didn’t get the reference in the context of talking about sample spaces and random searches.

Similarly, you have been TAUGHT that all the evidence supports common descent, and that such is only to be explained on NATURAL CAUSES. In fact design is compatible with common descent, in several possible ways, but the evidence does not substantiate blind watchmaker naturalistic common descent.

I see no need for the designer hypothesis. [r –> Personal perception has nothing to do with objective warrant.] I agree there are aspects where design and undesigned could look the same depending on the intent of the designer. But I don’t think you can look at life on earth, with no other examples of life on other planets, and claim life is designed without making more complicated arguments and/or finding more evidence. [s –> remember, a self replicating automaton that uses CODED algorithms to control NC machines assembled using molecular nanotech. What empirically warranted chance and necessity model have you got to explain such, and what serious counter do you have to the billions of test cases and needle in haystack analysis that warrant that FSCO/I is a reliable sign of design?] You can hypothesise that it is of course. But you can’t prove it by making simple probabilistic arguments. [t –> Selective hyperskepticism, you are choosing an explanation without empirical warrant of adequacy over one with such warrant, on clearly ideological grounds.]

Routinely, on billions of cases, FSCO/I is seen as caused by design.

Quite true, regarding inanimate outcomes and when there is a designer present with the requisite skills and equipment. [u –> Irrelevancies, as algorithmic code is algorithmic code; you have no empirically warranted mechanism, and wish to object to that which does have empirical warrant.]

This is backed up by needle in the haystack analysis as in the main comment. Indeed, it would be far more reasonable on the evidence to infer to common design, which is perfectly compatible with what we see and is the empirically reliable cause of FSCO/I. The cell is chock full of FSCO/I.

I disagree. I don’t think you have proven the case mathematically. [v –> This is proof that you are not examining the matter on the correct grounds of warrant. Inductive matters are not amenable to deductive proof. But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences] Now you might be able to by using more complicated Fisherian-type methods. I’d recommend Bayesian myself, that carries a lot more weight. But you haven’t done that yet. [w –> Strawman, the issue is that a blind sample comes from the bulk with high odds; this you cannot deny.]

But this is all just my opinion. I’m not trying to inflict my views on anyone. I am trying to answer your posts with little rancour and without putting words into your mouth. I’m not always successful of course (being a dopey human being really) but I am trying to be civil.

I don’t expect us to ever really agree and I’m not trying to influence anyone. But I will answer queries as best I can given my time constraints. If I’ve missed any or misinterpreted any then let me know and I will make another attempt when I can. Today is not looking good though. Oh well.

[ –> I thought it necessary to do a quick note on points, sorry if rough around the edges, gotta get ready to go now. KF]
>>

>> 922

KF (916):

Pardon a quick and dirty markup at 912. Gotta go.

Please do not apologise! I know you’re busy and, anyway, I prefer that method of response.

I keep wondering why you keep replying considering how recalcitrant I am!!

KF (912):

Just a couple of general points: I agree that the number of viable/functional/interpretable configs in the kind of config spaces we are talking about is very likely to be very small compared to the whole space. And that most of the time, a single random sample is going to return garbage. Those are given as far as I am concerned. If I ever gave the impression I was disputing that then I apologise for my poor exposition.

samples on the gamut of the solar system or even the cosmos that are blind are maximally improbable to hit on the functional clusters.

‘[S]amples on the gamut of the solar system’ doesn’t make sense to me but it’s not a big deal. ‘[M]aximally improbable’ doesn’t make sense to me either. The maximum improbability would be a probability of zero, which no config in a sample space (of the type we’re discussing) would have. Each config in our discussed config spaces would have a very, very, very small probability of being selected in a random search but it would never be zero.
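[ –> Side note: the ‘tiny but never zero’ point can be made concrete with exact rational arithmetic; a minimal sketch:]

```python
from fractions import Fraction

# Probability of any single config in a 500-bit space: tiny, never zero
p = Fraction(1, 2**500)
print(p == 0)      # False
print(float(p))    # ~3.05e-151
```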

Me:

Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful.

KF:

Oh yes you can, if say you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.

I’m sorry but that is just not right. In a purely random search each config is just as likely as any other. Each has a minuscule probability of being picked if the config space is large. This kind of situation is exactly why medical trials are based on large samples with multiple subjects and control groups. And then you generate p-values and confidence intervals. That’s an accepted way to use mathematics to make decisions of alternative over null hypothesis.
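[ –> For concreteness on the trials-and-p-values method described here, a minimal self-contained sketch of a one-sided binomial test; the 60-of-100 trial numbers are hypothetical:]

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): a textbook one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical trial: 60 responders out of 100 subjects, null of p = 0.5
print(binom_sf(60, 100, 0.5))   # ~0.028: suggestive, but one trial isn't proof
```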

I think we both agree that a diagram found on Mars would have to be more compelling than some vague blobs so there’s no need to go over those points really. Obviously we’d both say something that looked like the London Underground Map was designed no matter where it was found.

Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.

I think I’ve already shown that I agree with this. It’s the conclusion after getting a specified and complex pattern where we differ.

But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences

That is not true. I am suggesting a method of analysis quite common when trying to prove an alternate over a null hypothesis. To be sure your alternate hypothesis is correct you have to establish that an event was not just a random occurrence by repeating the ‘trial’ many times.

(As the null hypothesis is the ‘default’ hypothesis I am picking the design hypothesis to be the alternate but it’s possible to do the same analysis the other way around. But the testing would be different.)

If you roll a 20-sided fair die each side is equally likely to come up on any given roll. It’s only after multiple rolls that you will empirically see (as opposed to figuring it out analytically) the probability distribution of the outcomes. If the die is fair/random then after 100s of rolls you should see each outcome occurring about 5% of the time. But on any given roll you have no idea what’s going to come up. And any given sequence of outcomes is just as likely as any other. So a sequence of 1, 1, 1 on three rolls is just as likely/unlikely as 1, 2, 3 or 3, 3, 3 or 2, 4, 6 or any sequence of the numbers 1 – 20 you want to pick. IF the die is weighted and not really fair/random you will only be able to determine that after multiple rolls.

I have some weighted dice. They tend to come up 6. But not every time. It usually takes people 4 or 5 rolls before they believe there’s something going on. But they don’t blink an eye when a 6 comes up first.
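[ –> The fair-vs-weighted die point in the last two paragraphs can be made runnable; a minimal sketch, where the d20 and the 5x weighting on 6 are my own illustrative choices:]

```python
import random
from collections import Counter

random.seed(42)
FACES = list(range(1, 21))   # a 20-sided die

def rolls(n, weighted=False):
    # The 'loaded' die favours 6 with an illustrative 5x weight
    w = [5 if f == 6 else 1 for f in FACES] if weighted else None
    return Counter(random.choices(FACES, weights=w, k=n))

# A handful of rolls tells you little; hundreds expose the bias
for n in (5, 1000):
    print(n, "rolls -> fair 6s:", rolls(n)[6],
          "loaded 6s:", rolls(n, weighted=True)[6])
```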

I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.

Write a program to do the above and see what you get. Do an experiment!!>>
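For anyone who wants to take up that closing challenge, here is one minimal way to run the experiment in Python; the 7-bit packing into 72 characters is my reading of the 504-bit and 72-letter figures above.

```python
import random
import string

# Generate 504 random bits, read them as 72 seven-bit ASCII
# characters, and see whether anything meaningful appears.
bits = [random.randint(0, 1) for _ in range(504)]
chars = ''.join(chr(int(''.join(map(str, bits[i:i+7])), 2))
                for i in range(0, 504, 7))
printable = sum(c in string.printable for c in chars)
print(repr(chars))
print(f"{printable}/72 characters are even printable")
```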

Remember, there is an offer on the table to Jerad (and/or whoever) to do a 6,000 word essay on the evidence that, in your mind, grounds the blind watchmaker thesis and makes the design theory proposal unnecessary.

Okay, let us continue . . .