The TSZ and Jerad Thread, continued

Part of me feels like letting the TSZ thread go to a full 1,000 comments, but then my sense of responsibility to UD’s bandwidth budget kicks in.

So, let us continue the discussion of the topics from the thread on TSZ issues and Jerad’s concerns here.

To prime the pump, let me clip two posts in the thread:

______________

>>912

KF (911) – ooo, spooky

Are you unable to see that when those individual configs come in clusters that are functionally distinct, it is relevant to think about the relative statistical weights of the clusters?

Hitting a cluster would have a higher probability than hitting a single config, but only because a cluster consists of many configs. [a --> The precise point; now work on the implications of this] A purely blind random search means every config is equally likely, so groups or clusters of configs would have higher probability; how high would depend on how big they are, not their functionality. [b --> Strawman. I never said that the likelihood of finding a config depended directly on functionality, just that the constraints of multiple well-matched parts arranged correctly to function mean that FSCO/I comes in narrow sectors of the space W. And to see why this is so I gave mechanical and molecular nanotech cases.]

Consider a Cardinal spinning reel. The atoms and parts it is made of can be in certain functional configs, or non-functional ones, including scattered all over the earth. Obviously there are far more non-functional than functional ways. If the individual way is equiprobable, the non-functional cluster is far more likely on a blind pick than a functional one.

Yup, under the assumption there are more non-functional configs than functional configs. [c --> But this is illustrative of the general pattern as well, and BTW this is why the pricked cell Humpty Dumpty experiments are also relevant.]

This is the reasoning behind the 1,000 LY cubical hay bale and our galactic neighbourhood. The star systems are special zones, but the space between so dominates that a blind 1-straw size sample will all but certainly come up straw. In fact, the likelihood of so getting anything but straw is negligibly different from zero. Where of course our solar system is in effect only able to take a one straw sample of the config space for just 500 bits.
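The scale of that one-straw sample can be checked directly (a minimal sketch; the figures of roughly 10^57 atoms, about 10^17 s of lifespan, and about 10^-14 s per state change are the ones used in this thread, not fresh claims):

```python
import math

# Size of the configuration space for 500 binary digits (coins/bits).
space = 2 ** 500  # about 3.27e150 distinct configs

# Generous upper bound on blind samples, using the thread's figures:
# ~10^57 atoms, each taking a fresh state every ~10^-14 s, for ~10^17 s.
samples = 10 ** 57 * 10 ** 14 * 10 ** 17  # 10^88

fraction = samples / space
print(f"|W| = 2^500 ≈ {space:.2e}")
print(f"max samples ≈ {samples:.0e}")
print(f"fraction of W sampled ≈ {fraction:.1e}")
```

On these figures the solar system could take at most about 10^88 samples of a space of roughly 3.3 × 10^150 configs, a fraction on the order of 10^-63.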

Operating under the assumption that there are few functional states, of course. [e --> Not a dismissible assumption; to call it one is tantamount to alleging question-begging. I explained and exemplified why I asserted that FSCO/I naturally comes in narrow zones T in W. If you dispute this, for which I have many cases, you need to show counterexamples; all you have done is say yes under the assumptions. Not an assumption but a fact: the atoms of the Cardinal were originally scattered all over the planet, but showed no function until intelligence led to their assembly into that famous fishing reel.] But we don’t know the number of functional configs. I agree, it’s probably small compared to the whole space. [f --> Grudging concession, but the crucial one. What follows is that, given the exponential nature of the config space, samples on the gamut of the solar system or even cosmos that are blind are maximally improbable to hit on the functional clusters.]

The best explanation for seeing the 500 coin BB in a special state, under such circumstances, is that we are not looking at a blind sample.

Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful. [f --> Oh yes you can, if, say, you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.]

If you got 5 or 10 or 250 meaningful configs on successive independent random samples THEN you might have an argument that the sample was biased, the null hypothesis is wrong. Or even 250 functional configs out of 400 random samples. [g --> Irrelevant. You know or should know that given the overwhelming imbalance in the statistical weight of the clusters, FSCO/I will to all but absolute certainty be unobservable on blind sampling. AND in the material case of life forms, we start at about 100 - 1,000 k bits; that is 200 - 2,000 times over getting samples from the odd and isolated zone, at 500 bits apiece.]

If you argue that we effectively have 1000s of random samples that turned out to be functional life forms then you are a) not accounting for samples that turned out not to be functional (we would have no record of those) and b) arguing against a proposition that is not being made by evolutionary theory. [h --> Strawman.] You would be assuming [i --> Strawman, and turnabout of the reasonable burden of empirical warrant.] there exist islands of function in the life config space and that some of our existing life forms come from different islands. Even if there are different islands how do you know our existing life forms are from different ones? [j --> Check out the roughly 6,000 protein fold domains to see islands of function as empirically warranted, and then move on up to the 10 - 100 million bits of fresh FSCO/I to make new body plans dozens of times over, then cf the characteristic pattern of sudden appearances, stasis and gaps in the fossil record. The evidence of islands is there if you are willing to look it in the eye.]

Instead, we know from observation that, say, coins arranged as the first 72 or so ASCII characters of this comment would be very easily explained on design. And if the Mars rover were to run into a crater with a wall and an inscription or diagram on it, we would instantly and properly infer to design.

A diagram on a wall is not coin tosses or living systems so the analysis is different. [k --> And did you notice how we have consistently shown how to reduce FSCO/I to coded strings, which are equivalent to text on the wall? Where also DNA code in the living cell, to assemble proteins and to regulate, is text strings, equivalent to writing on the wall.] You have to be very, very sure the diagram has meaning. [l --> You don't have to know the meaning; once you see a diagram pattern, it would be proof positive. Cf the Voynich manuscript, discussed in IOSE.]

People see Jesus’s picture on pieces of toast all the time but that doesn’t mean it was put there or designed. [m --> Well within the FSCO/I limits; cf the IOSE discussion of the Old Man of the Mountain vs Mt Rushmore. You have not done your homework.] Consider the config space of a piece of toast, all the possible ‘looks’ you could get. I bet the space has cardinality bigger than 2^1000. And yet, every so often, a Jesus toast pops up. Pareidolia can be very misleading. [n --> Do you see the problem of S = 0 as default, i.e. lacking functional specificity? Burn marks on toast are not equivalent to a diagram and you know it.]

Can you see why I have argued as just above? Can you agree that the argument is reasonable? Why, or why not?

I only disagree that a single randomly selected config out of a huge config space can imply design. [o --> Strawman, the point was that random selection will not credibly hit on FSCO/I, for reasons given in detail.] The math just doesn’t support that contention.

If not, then kindly explain to us the logic used in Fisherian hypothesis testing on whether an observation is in the bulk or the far skirt of a distribution premised on the null hyp.

Those kinds of analyses are based on (hopefully) large samples and come with confidence intervals. If you want to set up that kind of hypothesis testing, please do so. [p --> You are slipping and sliding into a strawman; the point of the analysis is that random samples will predominantly come from the bulk, not the far tails, which are special zones.] BUT, the point of the confidence interval is to indicate that the conclusion can STILL be wrong. [q --> And any inductive conclusion can be wrong, hence best CURRENT explanation; however there are any number of such that are morally certain, e.g. the sun will rise tomorrow, and error exists. That you will not pick up deeply isolated special zones on a sample comparable to 1 straw to a hay bale as thick as our galaxy is the same.]

And again, medical trials and the kind of situations that use Fisherian analysis are not based on a single sample. Your confidence interval in such a situation would be very nearly zero. [r --> Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.]
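The Fisherian point both sides are circling can be made concrete with a toy null hypothesis (a minimal sketch of standard tail-probability arithmetic, not either party's exact analysis): under a fair-coin null, the bulk sits near 50:50 and the far skirt is directly computable.

```python
from math import comb

def tail_probability(n, k):
    """P(at least k heads in n fair-coin flips) under the null hypothesis."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# The bulk near 50:50 dominates; the skirts thin out very fast.
print(f"P(>= 60 heads in 100) = {tail_probability(100, 60):.4f}")
print(f"P(>= 90 heads in 100) = {tail_probability(100, 90):.2e}")
```

Sixty heads in a hundred already sits below the usual 5% threshold, and ninety heads is deep in the skirt; this is the sense in which a blind sample is expected to come from the bulk.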

Just to pick up what caught my eye, do you not know that a living cell is encapsulated and has smart gates that control what comes in or goes out? That it is a metabolising device, and that it self-replicates on a vNSR, using codes and algorithms executed through molecular nanotech devices?

I just didn’t get the reference in the context of talking about sample spaces and random searches.

Similarly, you have been TAUGHT that all the evidence supports common descent, and that such is only to be explained on NATURAL CAUSES. In fact design is compatible with common descent, in several possible ways, but the evidence does not substantiate blind watchmaker naturalistic common descent.

I see no need for the designer hypothesis. [r --> Personal perception has nothing to do with objective warrant.] I agree there are aspects where design and undesigned could look the same depending on the intent of the designer. But I don’t think you can look at life on earth, with no other examples of life on other planets, and claim life is designed without making more complicated arguments and/or finding more evidence. [s --> remember, a self replicating automaton that uses CODED algorithms to control NC machines assembled using molecular nanotech. What empirically warranted chance and necessity model have you got to explain such, and what serious counter do you have to the billions of test cases and needle in haystack analysis that warrant that FSCO/I is a reliable sign of design?] You can hypothesise that it is of course. But you can’t prove it by making simple probabilistic arguments. [t --> Selective hyperskepticism, you are choosing an explanation without empirical warrant of adequacy over one with such warrant, on clearly ideological grounds.]

Routinely, on billions of cases, FSCO/I is seen as caused by design.

Quite true, regarding inanimate outcomes and when there is a designer present with the requisite skills and equipment. [u --> Irrelevancies, as algorithmic code is algorithmic code; you have no empirically warranted mechanism, and wish to object to that which does have empirical warrant.]

This is backed up by needle in the haystack analysis as in the main comment. Indeed, it would be far more reasonable on the evidence to infer to common design, which is perfectly compatible with what we see and is the empirically reliable cause of FSCO/I. The cell is chock full of FSCO/I.

I disagree. I don’t think you have proven the case mathematically. [v --> This is proof that you are not examining the matter on the correct grounds of warrant. Inductive matters are not amenable to deductive proof. But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences] Now you might be able to by using more complicated Fisherian-type methods. I’d recommend Bayesian methods myself; they carry a lot more weight. But you haven’t done that yet. [w --> Strawman, the issue is that a blind sample comes from the bulk with high odds; this you cannot deny.]

But this is all just my opinion. I’m not trying to inflict my views on anyone. I am trying to answer your posts with little rancour or putting words into your mouth. I’m not always successful of course (being a dopey human being really) but I am trying to be civil.

I don’t expect us to ever really agree and I’m not trying to influence anyone. But I will answer queries as best I can given my time constraints. If I’ve missed any or misinterpreted any then let me know and I will make another attempt when I can. Today is not looking good though. Oh well.

[ –> I thought it necessary to do a quick note on points, sorry if rough around the edges, gotta get ready to go now. KF]
>>

>> 922

KF (916):

Pardon a quick and dirty markup at 912. Gotta go.

Please do not apologise! I know you’re busy and, anyway, I prefer that method of response.

I keep wondering why you keep replying considering how recalcitrant I am!!

KF (912):

Just a couple of general points: I agree that the number of viable/functional/interpretable configs in the kind of config spaces we are talking about is very likely to be very small compared to the whole space. And that most of the time, a single random sample is going to return garbage. Those are given as far as I am concerned. If I ever gave the impression I was disputing that then I apologise for my poor exposition.

samples on the gamut of the solar system or even cosmos that are blind are maximally improbable to hit on the functional clusters.

‘[S]amples on the gamut of the solar system’ doesn’t make sense to me but it’s not a big deal. ‘[M]aximally improbable’ doesn’t make sense to me either. The maximum improbability would be a probability of zero which no thing in a sample space (of the type we’re discussing) would have. Each config in our discussed config spaces would have a very, very, very small probability of being selected in a random search but it would never be zero.

Me:

Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful.

KF:

Oh yes you can, if, say, you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.

I’m sorry but that is just not right. In a purely random search each config is just as likely as any other. Each has a minuscule probability of being picked if the config space is large. This kind of situation is exactly why medical trials are based on large trials with multiple subjects and control groups. And then you generate p-values and confidence intervals. That’s an accepted way to use mathematics to make decisions of alternative over null hypothesis.

I think we both agree that a diagram found on Mars would have to be more compelling than some vague blobs so there’s no need to go over those points really. Obviously we’d both say something that looked like the London Underground Map was designed no matter where it was found.

Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.

I think I’ve already shown that I agree with this. It’s the conclusion after getting a specified and complex pattern where we differ.

But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences

That is not true. I am suggesting a method of analysis quite common when trying to prove an alternate over a null hypothesis. To be sure your alternate hypothesis is correct you have to establish that an event was not just a random occurrence by repeating the ‘trial’ many times.

(As the null hypothesis is the ‘default’ hypothesis I am picking the design hypothesis to be the alternate but it’s possible to do the same analysis the other way around. But the testing would be different.)

If you roll a 20-sided fair die each side is equally likely to come up on any given roll. It’s only after multiple rolls that you will empirically see (as opposed to figuring it out analytically) the probability distribution of the outcomes. If the die is fair/random then after 100s of rolls you should see each outcome occurring about 5% of the time. But on any given roll you have no idea what’s going to come up. And any given sequence of outcomes is just as likely as any other. So a sequence of 1, 1, 1 on three rolls is just as likely/unlikely as 1, 2, 3 or 3, 3, 3 or 2, 4, 6 or any sequence of the numbers 1 – 20 you want to pick. IF the die is weighted and not really fair/random you will only be able to determine that after multiple rolls.

I have some weighted dice. They tend to come up 6. But not every time. It usually takes people 4 or 5 rolls before they believe there’s something going on. But they don’t blink an eye when a 6 comes up first.
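The weighted-dice anecdote is easy to simulate (a minimal sketch; putting half the probability mass on the 6 is a hypothetical loading chosen for illustration, not a claim about the actual dice):

```python
import random
from collections import Counter

random.seed(1)  # reproducible run
N = 10_000

faces = [1, 2, 3, 4, 5, 6]
fair = Counter(random.choices(faces, k=N))
# Hypothetical loading: face 6 carries half the probability mass.
loaded = Counter(random.choices(faces, weights=[1, 1, 1, 1, 1, 5], k=N))

print(f"fair   P(6) ≈ {fair[6] / N:.3f}")    # close to 1/6
print(f"loaded P(6) ≈ {loaded[6] / N:.3f}")  # close to 1/2
```

A single roll from either die is uninformative, but a few thousand rolls separate the two distributions unmistakably, which is the point about needing repeated trials.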

I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.

Write a program to do the above and see what you get. Do an experiment!!>>
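The experiment proposed just above can be sketched in a few lines (a minimal version assuming 7-bit chunks decoded as ASCII; the crude "letters and spaces" score is an illustrative stand-in, not part of the original proposal):

```python
import random
import string

random.seed(42)  # reproducible run

def random_string_72():
    """Draw 504 random bits and decode them as seventy-two 7-bit ASCII codes."""
    bits = ''.join(random.choice('01') for _ in range(504))
    return ''.join(chr(int(bits[i:i + 7], 2)) for i in range(0, 504, 7))

def score(s):
    """Crude 'English-likeness': fraction of characters that are letters or spaces."""
    return sum(c in string.ascii_letters + ' ' for c in s) / len(s)

# Run a few hundred trials and keep the most letter-like draw.
best = max((random_string_72() for _ in range(500)), key=score)
print(f"best of 500 trials scores {score(best):.2f} letters/spaces")
```

Random 72-character draws hover around 41% letters-and-spaces (53 of the 128 ASCII codes qualify), and even the best of hundreds of trials stays far from a coherent English phrase.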

Remember, there is an offer on the table to Jerad (and/or whoever) to do a 6,000 word essay on the evidence that grounds in your mind the blind watchmaker thesis and makes the design theory proposal unnecessary.

Okay, let us continue . . .


800 Responses to The TSZ and Jerad Thread, continued

  1. Okay, picking up from here, hope this thread behaves a bit more perkily. KF

  2. Jerad:

    I’ll have a look but I was speaking of a very particular experiment in hopes of making my own limited point about probabilities clearer.

    What sort of experiment do you have in mind? I can perhaps create a working program if you can describe the requirements.

  3. Yeah!! Thank you!! :-) :-) I shall gleefully participate as long as you put up with me. Gotta do dinner and homework and dog walk first tonight but . . .

    :-) :-) :-)

    thank you

  4. For comments related to posts made at TSZ which are not addressed to the relevant issues raised by gpuccio I suggest we use the Junk for Brains thread.

  5. gpuccio:

    I will stick to Lizzie’s GA, to make the discussion as simple as possible. Please, refer to my definitions of IS and NS in post #856.

    HERE

  6. gpuccio, you make some great points in your post @909 in the original thread.

    Among them: differential reproduction requires a functional cause. Lizzie’s GA assumes function for all her strings, regardless of where they are in “function space.” I say this because, on what other basis does her algorithm select which genome will be removed from the population and which genome will remain and be copied? She assigns each of them a “fitness” value.

    Frankly, I think that value should be between 0 and 1, as in Joe Felsenstein’s example, in proportion to each genome’s chance to leave offspring. I’m not sure I’m stating this clearly.

    But if these are functional, then why do they get to just wander all over the “function space” looking for a different function? You are really on to something, my friend. That’s not natural selection.

    My view is that if it is functional (or even if it isn’t – drift) it should have some percentage chance of having offspring in future generations. Fully 50% of her organisms have no opportunity whatsoever to contribute to future generations. And that’s not a “fitness” of .5, lol.

  7. Mung (2):

    What sort of experiment do you have in mind? I can perhaps create a working program if you can describe the requirements.

    KF and I have been hashing out sample spaces and probabilities. I’ll try and summarise, I hope fairly.

    Let’s say you had a HUGE sample/configuration space. Like all the possible sequences of 0s and 1s of length 504. Take any one of those 504 bit sequences, break it into 7-bit chunks, interpret each chunk as an ASCII character, and see what kind of character string you get.

    KF says: if the character string you get after randomly picking the sequence of 0s and 1s turns out to be the first 72 characters in this post then that implies design was involved. Or the search was biased.

    I say: all of the 2^504 sequences are equally likely under a random search and while it is highly unlikely that you’d get any coherent English phrase out of a randomly selected sequence it could happen. And one random pick is not enough to establish a design influence.

    I suggested that KF test out what could happen with a program. So . . .

    Have a program generate random sequences of 0s and 1s of length 504.

    Interpret those sequences as ASCII characters.

    Reproduce the results.

    Repeat. And store preferably. Print out at least.

    I think it would be really interesting to look at a couple hundred iterations at least. Just to see what comes up.

    If you’re not having too much fun tormenting the TSZ folks that is.

  8. Okay:

    Let me do a markup of Jerad’s 2nd comment:

    _____________

    >> KF (916):

    Pardon a quick and dirty markup at 912. Gotta go.

    Please do not apologise! I know you’re busy and, anyway, I prefer that method of response.

    I keep wondering why you keep replying considering how recalcitrant I am!!

    KF (912):

    Just a couple of general points: I agree that the number of viable/functional/interpretable configs in the kind of config spaces we are talking about is very likely to be very small compared to the whole space.

    [a --> That is a key first step]

    And that most of the time, a single random sample

    [b --> Whoa there: the "single random sample" of relevance, in the first instance, is the solar system whirling away for 10^17 s, with 10^57 atoms taking a fresh state every 10^-14 or so s, as fast as chem rxns basically get. In the second, you are talking about seeing, say, the first 72 ASCII characters of this post being emitted by a 504-coin string + scanner apparatus (which is eminently constructible); just one instance of such would, for good reason, lead to the inference that the best, empirically warranted explanation is that someone did that by design, noting that the SS whiling away at that rate for its lifespan would only sample the equivalent of 1 straw to a cubical hay bale 1,000 light years -- as thick as our galaxy -- across. For excellent reason, relative statistical weights of clusters, the special zones would be invisible to such. Here, for those who need it, is a sketch of the scanner:

    ))--|| 504 coin string ||--> 504 bit report on pushing the button]

    is going to return garbage. Those are given as far as I am concerned. If I ever gave the impression I was disputing that then I apologise for my poor exposition.

    samples on the gamut of the solar system or even cosmos that are blind are maximally improbable to hit on the functional clusters.

    ‘[S]amples on the gamut of the solar system’ doesn’t make sense to me but it’s not a big deal.

    [c --> Again explained: think of the solar system turned into a coin-tray scanning machine with reporting devices. In real terms, the SS is the site in which the chemistry of OOL and OO of body plans was said to happen by blind chance and necessity. Search resources on the gamut of the observable cosmos move up to 10^80 atoms.]

    ‘[M]aximally improbable’ doesn’t make sense to me either. The maximum improbability would be a probability of zero which no thing in a sample space (of the type we’re discussing) would have.

    [d --> negligibly different from zero chance of observation on the gamut of accessible resources as has been repeatedly discussed, cf here at wiki: ". . . the probability of a monkey exactly typing a complete work such as Shakespeare's Hamlet is so tiny that the chance of it occurring during a period of time even a hundred thousand orders of magnitude longer than the age of the universe is extremely low (but not zero).".]

    Each config in our discussed config spaces would have a very, very, very small probability of being selected in a random search but it would never be zero.

    [e --> Not strictly zero, but so close as to be practically so. This is the statistical basis for the 2nd law of thermodynamics.]

    Me:

    Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful.

    KF:

    Oh yes you can, if, say, you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.

    I’m sorry but that is just not right.

    [f --> Oh, yes it is, and you did not wait for 200+ posts to decide that text appearing over my name is not the result of lucky noise on the Internet tossing off one of those statistical miracles you want to rely on.]

    In a purely random search each config is just as likely as any other. Each has a minuscule probability of being picked if the config space is large.

    [h --> Strawman, you know or should know that the issue is predominant cluster vs narrow special zones as has been repeatedly pointed out and linked on. Each specific state may for practical purposes be equiprobable, but the states near 50:50 H/T in no particular order so dominate that it is utterly unreasonable to infer this is a credible explanation of seeing the first 72 ascii codes for this post. Or another string in English.]

    This kind of situation is exactly why medical trials are based on large trials with multiple subjects and control groups. And then you generate p-values and confidence intervals. That’s an accepted way to use mathematics to make decisions of alternative over null hypothesis.

    [j --> Yes, we know med trials and the like, with 5% tails or maybe 1% etc. We are talking here of the entire atomic resources of our solar system whiling away for its lifespan only being able to sample 10^87 chem rxn time states of 10^57 atoms total, leading to a comparative number as 1 straw to a hay bale 1,000 LY across. We have excellent reason to conclude that such a sample, if blind, would be absolutely dominated by the predominant cluster: nonsense strings near a 50:50 distribution. And BTW, that sample is all our effective cosmos for chemical interactions could do, never mind that 98% of the atoms are locked up in the solar fusion furnace.]

    I think we both agree that a diagram found on Mars would have to be more compelling than some vague blobs so there’s no need to go over those points really. Obviously we’d both say something that looked like the London Underground Map was designed no matter where it was found.

    [k --> Okay, agreement. Now, ask why. The answer is, FSCO/I, courtesy a nodes-arcs diagram and its bit string equivalent info content; courtesy say AutoCAD.]

    Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.

    I think I’ve already shown that I agree with this. It’s the conclusion after getting a specified and complex pattern where we differ.

    [l --> Recall, the needle in haystack analysis is secondary; it helps us see WHY FSCO/I is so strong a signature of design. The primary argument is that we know, per a massive and reliable base of billions of cases, the source of FSCO/I when we can directly see it being formed: design. This is inference to the best empirically grounded explanation in light of tested sign. And this is in perfect accord with the in-principle logic of, say, geodating on isochrons. We know per current investigations the causal factors involved in a process and their effects, including characteristic signs. So when we see traces from the remote past that parallel the signs we can observe in the present, we infer like causes from like effects and deduce a date. I am pretty sure you accept the geo timeline produced by this and other similar means. Only problem: the signs from radiodating are known to be less reliable than FSCO/I. See the inconsistency problem on degree of warrant demanded?]

    But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences

    That is not true. I am suggesting a method of analysis quite common when trying to prove an alternate over a null hypothesis. To be sure your alternate hypothesis is correct you have to establish that an event was not just a random occurrence by repeating the ‘trial’ many times.

    [m --> Of course, first, the original point was that Fisher's investigation was premised on how reasonable samples cluster on the bulk not the far skirts so if the special rare zones crop up too much, trouble. Next, I had drawn the parallel on unobservable fluctuations from thermodynamics, the root of the statistical grounding for 2nd law of thermodynamics. Namely due to the predominant cluster, some things will not come up under spontaneous circumstances in our observation. That is the context of the 1 straw to a 1,000 LY cubed haystack example. Do all the tests you want, you are in that ball park relative to the space for 500 bits. If you present me or any reasonable person with a case where you claim the equivalent of the first 72 or so ascii characters for this post popping up by chance and necessity without design, I will conclude on very good grounds that you are pulling a fast one. To date, courtesy Wiki, here are the results of random documentation tests:

    One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[25]

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    In short, a space of 10^50 possibilities is searchable like this, but that is a long shot indeed from one of 10^150 possibilities.]

    (As the null hypothesis is the ‘default’ hypothesis I am picking the design hypothesis to be the alternate but it’s possible to do the same analysis the other way around. But the testing would be different.)

    If you roll a 20-sided fair die each side is equally likely to come up on any given roll. It’s only after multiple rolls that you will empirically see (as opposed to figuring it out analytically) the probability distribution of the outcomes. If the die is fair/random then after 100s of rolls you should see each outcome occurring about 5% of the time. But on any given roll you have no idea what’s going to come up.

    [n --> The die in this case has 10^150 sides, and the overwhelming number are all blank. Tiny -- relatively speaking -- clusters of sides are written with 72 or so ascii character strings in a language. Toss the die and lo and behold the one with the first 72 characters for this post pops up. I would immediately conclude, loaded die, for good reason.]

    And any given sequence of outcomes is just as likely as any other. So a sequence of 1, 1, 1 on three rolls is just as likely/unlikely as 1, 2, 3 or 3, 3, 3 or 2, 4, 6 or any sequence of the numbers 1 – 20 you want to pick. IF the die is weighted and not really fair/random you will only be able to determine that after multiple rolls.

    [ s --> Cf above.]

    I have some weighted dice. They tend to come up 6. But not every time. It usually takes people 4 or 5 rolls before they believe there’s something going on. But they don’t blink an eye when a 6 comes up first.

    [t --> 6-sided dice are not comparable to 10^150 sided dice, with the overwhelming number of faces blank.]
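Jerad's point that the distribution only shows up empirically after many rolls is easy to check in simulation. This is an illustrative sketch, not anything from the thread; the roll count and seed are arbitrary choices:

```python
import random
from collections import Counter

def roll_frequencies(n_rolls, sides=20, seed=0):
    """Roll a fair `sides`-sided die n_rolls times; return observed frequencies."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, sides) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, sides + 1)}

# After 100,000 rolls every face sits near the analytic 1/20 = 5%,
# though no single roll tells you anything about the distribution.
freqs = roll_frequencies(100_000)
print(min(freqs.values()), max(freqs.values()))
```

With only a handful of rolls the empirical frequencies fluctuate wildly, which is exactly why multiple rolls are needed before a bias becomes visible.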

    I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.

    [u --> I got some prime commercial real estate on George Street, Plymouth, M/rat, to sell you. Interested?]

    Write a program to do the above and see what you get. Do an experiment!!

    [v --> Been there, done that; try here. Only 100 coins though, so you get bigger fluctuations, and it gives fractional coin values too. +/-10 at 100 is a reasonable estimate; try for 1,000, with 1/sqrt(n) as a metric on the reduction of fluctuations.>>
    _____________
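The experiment Jerad proposes above (generate 504 random bits, read them as 7-bit ASCII, check for legible English) can be sketched as follows. This is an illustrative reconstruction; the legibility test and trial count are my assumptions, not anything specified in the thread:

```python
import random

def random_ascii_string(n_chars=72, seed=None):
    """Generate 7*n_chars random bits (504 bits for 72 chars) and decode
    them as 7-bit ASCII characters."""
    rng = random.Random(seed)
    return "".join(chr(rng.getrandbits(7)) for _ in range(n_chars))

def looks_english(s, words=("the", "and", "that", "have")):
    """Crude legibility check: mostly letters/spaces plus one common word."""
    letterish = sum(c.isalpha() or c == " " for c in s) / len(s)
    return letterish > 0.9 and any(w in s.lower() for w in words)

# Thousands of trials predictably yield nothing legible, illustrating how
# deeply isolated functional strings are in the space of 2^504 configs.
hits = sum(looks_english(random_ascii_string(seed=i)) for i in range(1_000))
print(hits)  # 0, with overwhelming probability
```

Even a random 72-character string with over 90% letters and spaces is astronomically unlikely, before asking for actual English words.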

    It should be clear why the differences are consistently emerging.

    KF

  9. PS: This story shows how hill climbing (loose sense) algorithms with targets give a superior result to the above. By ID of course. But that part does not get headlined. If you are going to copy off Shakespeare, hire a decent typist!

  10. Jerad:

    Interpret those sequences as ASCII characters.

    Do you want me to include all ASCII characters or just those which are letters, and perhaps the space character and period?

  11. Today’s Junk for Brains winner, keiths.

  12. Here you go Jerad, first 100 (lol):

    0$ó¢ìø³›õ”®=2cçøCuµðü]Šø÷°­°‡Â0Ê~^²ðaaÖŠ§ôü½PX͆úèŠ

    }Ú8^$Õ;g Pë?3úrŽÕ-V—ô2•T§çÎjì¹$:M 5›Õ¢“ZŒU:Õyüa”ñÐSQ`Ì

    fHêåî’¤2¼ÿM7ˆ*žÄÞWÄö TP0䚣—¦ò!”u7ÃjVPk°ÒçÁ¾&Œoؐî­ç‰¢¯:

    A>•ÒG˜âúŽ ŠÉe”9óäËÙ
    &ø^ër¼¼02%ÍR¢y?mª¡a,,ž~ñVõ¬üÂÂ8%~²À€Â°TK‰A•¬»ÙÀI³.ø“eÃœ>´i_PÝ°

    €à5Ýüslc¨ê£$.,‡v
    Tø±ÄXÞJæöaq¢;òêp%=S C¡Æš¬âóor‚4Å{ä]î1 ƒÔЯ¾¹ìBÙRV5IÀKÏ1‚š(Ò

    …öžrÞ‰½y¦…Âsd
    ˜N@w (uW
    J¸’Ó?jð„;Ó|)É½7ë}
    “Gn0õ¬D1“1s–|˜^+¦ÀE8‹3¯ûPØv<ÒsÂ
    ÛsîTþ ×]¡©ÄŠa…|
    ë{§“®›8ÎåÄ

    6¦út„OV“)·`¨Ë¨ 8[J(-ÝĆ=íŠÌ›¼°S'7ÒãÍș샺Žëà´äjB¬Hé“•tû=Òµ

    êÿk]Pº,ˆ´\ŸUºäÔéE8!˜}ÍLNÔï‰l©ßjñ;w¥­âðò„#â”í+,uŽÂӁ1

    <Ì™wiôÆ*mJ‘=.Á„n#  µ½ÓŽ¤q“mYñT*ú×Ô 

    ¦Z(˜YÁÒr
    SM]!Ë\;ND$+I9k5Dۺڄ賈bèáS°Ë2¥É‹¤/†› {Œþví¢¨ÔšýýðI0·0¸•ÔU

    RçÛa<I§«*ðj\ƒdã¼’\÷Ö\fƺ1Ö@ÃÔ¡/Iñ[
    æ!
    žÒ6Êø¸•Wö—hšÿ}‡êiâ

    Df&×a¾´@“g U®è1 ·x°WV}fx^ÃÆYAý³ 2öŠ;à#+ljºÁ¶U¥ 3ß­Xÿì%»

    ²%Bv
    È+g|º¹"Œ¢Û[…7‡£4ãn0kÀ5‚f|mÿ,^}==gŸ86§îûœ5§˜µ!

    fÌ}`óØáì ˆ7<‡’hß/–b
    ¦éÐ:•øæÉÞÁŒßKP$’éšTÎ4ËZlÄñ’z$Çú®n‡ ¾díB

    +ë˜K/xXËÉØ›”ȁGMXdâgÞE“'Ü0Úï"óyðêïWBµ¥Lã€`XJ}xXûô´.YU…žP×

    \{‰kÙ Á4Ñ4ÀÁ@Q;dÒùéžþ¶Lµˆ¦²óàIZ¨þqž8dýå™S¶^ÆóªE³(å —kȆF

    `6ïÑâa 9°®ÂU]ãÎAš\U{¥û Æ·¾µ@]±Ð…¥òŒÓrào­Ý.öTàéYA‡·FðmIO¡˜Ô

    /Ļ圠ühæbn‚Ã\¸";œ¶ô
    ðCÓm›ëi?\îž/<'ôEÞ!KUz”g$QJû£‡þ`¸¦

    ó'Ê€åô¡”›Ò$LžŠ<zùåÙáz’‡m™Og(½;A 0=à oUµ„c\<iùê¾=¤BÚ–OÄÉá

    !Ò¾µÆËyä·¶Í
    gVq’0ÝÒ°èm%œ˜*Ö@çÖûá3X}, YO/&â„«tÁ¶9ð)4ŠÝVãt£•ö É<K]ÜR

    *§Vã»6'°†Ô1
    ÚõƒVÙN¼;.à*Õ!¡ŸpXH(nïäŸæÒQóö¹À€BkDÕh›¦(â„¢AëwçŽÂ«C+ÈçL>âèð
    (1¯è¿°£ù˜Ø!

    “t×ÖÐ’ÇGE×,OºµÁ(y*†ö£lýwgGb°ùD0Ö:h3a/êeá!À?ªÌ:µú0D<ÅKÎ2

    Y mÄ«Ö‚±?—Îé³ sNI»¯¿‡‘†ÇSöI8!Pá+Kiͱm·á›.”:KètËú†mÀ€â€™R›CÃŒ8P0A$fìZ1 21ITÊ’Å“{6Þ¾žóä=@”±çÁPŽæÅ G‡t3փ¶ê

    eË3ä4š¯þ½ƒp2“bFDƒ?|r¶ÄúEÃqt¬ÀUCq&ðŽÃØv6ˆ1$€Æ”NUÚìá‘Iý·Œê

    ~3–«ê^iç܃:»RSŠ‡Ã €æª[lnC?Ç=?OÀ‘¹ê‡Κˆ9àwCHGAZ¶Ö}dPi

    àxýŒlœê¸j‚ÔîênD]FU
    ØYì¨ü d‡  ™(Ý—¼Ç#V›Rñ@ÊZ÷§§žybxp¬P9æÝøA“ÿ—`Èõ:Odɸ΂ù•1<

    øøjüì|£ßÀ Ÿ†êÛ;€fYÊ[©Ä@%|Ód9Ÿ‰µeöˆþ/ø7˜,ædEƒ–a4©ï¶í

    ¼kÜ’^Άcê­ÉCµÃ0‰…ˆìÕNúqD‡*SƒzuâÏ

    Ìt PøHIÝd4œSžÈyžyXíqeùötµÎþ$¬è£¤Z»Èæ@*íi„Vâ/É´ÈtwtV!

    Â/âuŒ-áЃ8šféÍ)w¬«†óÃVÑ
    pâyVa„ã9èUg•ÝÅ™CÝî4Žl”ke¦µZm8
    y»áF•Íò­¤Þ—*jjHRÔ
    •ƒÄY#©ÀŠýDÜ$p7¼ý×kº[WzF>- }g`ÉŠ†ÆÕjÑ'õàÌ

    ¿EN‰|áß”P§ ¸éþ¸Êì è­‰–,]†•Eý¯mìÞåëþÐe’Näp¦
    ³{Ù9¬@ÚãI%

    »È4¬…ƒ)¼ÿò ­Îõ%&C]áþ{x ,\ ¬$±CN”ç(´ÛhläºG5ý½n’”2»wŽØR

    ˜¹¢¥cˆ¤2ÈÏDÑ-oh$IŠ×5Ÿ¦ßE¸¤›_>.ë«ß6¾¤áèY”opYqùâ@uX ¯x¼àáB£

    öñ)©fQo&†cü;¸V†~\Πg˜¿Ïƒ“h7cØòçbFCVÉK¬G(b1u’jÒñq, äP§

    çŒñ’.—€ïý:/
    +2PÎìs€Ða÷I«ÐæzhP!žò¡k¦KËÀS/m å–V(† ßQ÷|Ô­M-®ªÓB<õê*%¿C¹

    d­Ê`ºrÆËËàßï°¸¥pÀ˨§G‘¢Û6’ï;¡wÆA°€^PFi;«ÉÎ! )C¦%Þné½

    Ù¡ñõÅ`bh"ÞæA­ev“9G^RšëáŠê(ߤ0f¸’ñÈ/ò+É6g㦉ì·l
    ÅíÖ Šß–t

    |³^¶j™Ÿ+ˆ ‹ég˜ ã·M*¦¾Â¡„ŠºÊ&kD2ôy5‹ý Hë
    bÒìËA‘ýœÙÒw‹Â´ …Ý

    ™Mkߤu˜²ËÛyPž-ÖƁmSNÇ£u"¾| ŸëÓÕ]3Ï q†ºßã2bG )··ˆH Ñ>ukÙ/‘Ž £Ð[„

    r¦í¹þ-ÓþëÎ%‚9íŸL@Ž¯UFfQ0>¨àl¶¬ÌZ>˜ÂËËi•™ä„¡‡}abµ~u­·ÄÈ+

    k揉Œ*Aœˆ ™ÙsD{lÉ'~žþ—T½[i\(°Û›üäm~Ï
    tJ"¦×?c‚#£ÑÆؐ¨g°"ÉÀÆCž >jx2“Þ6¨ý»S„{¯“ôå­+ÍΝÍ/OáãYÏ£

    &¤ÔÞöÂ.Pº´PK›Ï}1^”ö7¡ˆÿ.,üG(%TMî,æ«L¨›Ì~šÌLS‚Õ:¾KRSÉcrJY

    µÕœ+ÎÓÓ«2P:IÌ^¥Ø-ÍãÞYŒË“1EÉIv¶Œòž _©bø–ø)ÆË?oöPa¦![Õ ‘÷

    SÌ»VõɐqÆK&ò G=×õ5£?{œ‘X±¢¸â’¦j§¸I´ÈÞoþ¦ÓaiJS—q(Ò– í
    °xÁZ

    Ó$tÅÿ¶OÐw£®(ŒÀÇ9B*àV.÷²‰ÌÀ•Ú‰ú|4ä6ÂAÿº­#³
    èž×M2æÍÊ*gÔ¦‚

    ü®çbÐP‚0yM*O±²¼¼E•>´Í}À®¿0JŒv}éñÀsšÏÀõÞ¯©0BnÒÄd`ís±ý.

    Èß“¸«BjX†°k.
    ö;.o1ÆЙ“î–ò#v]ÁûŸ^/U†aÂü‡ðXôœkz\ ,Bò:e£ó

    ×Œƒ$À”ŠÚ~û¡É>šhÛt:ÊàçÕ
    ó˼U‰¶½;‘æ §Å&£Ö†¨|= Š`žÈ)+KKãcu†

    ýÆŠ“DMÍÑuÁz—†ÝùD¹?ÂÈÔÓÓ©¡.j’Î`&nmva³LûA6tœbò‚ ¤++Ñ¡zŸèˆÔí

    ع Ø(㨶Ùeà8ùfYÏîEÐ…qZRtEAšQV–;˪¹t™³êËòMxÐdÖþȈ¦·
    ‘˧

    ¢¾ØYÏ´ød¨ùV„й-Œg™FŽ×Cc@‘‡`ïØÊCÃØb’
    „Òå¹Eȱä¼O£Êýc–(ž¡…™óÉ`¦

    ÉH§>—òGu@ãa™§^ƶ:|‚ƒÚ¨ó“ôÉ›ö£JÂßOؤn%LËœx{ÓÄЄ…â`ÈN>ød>‘¡ dg

    @í²æt;ŒHxІÅ™MGܱ¯éûÜ$tž¹÷Ø x9ÿºI›îÈ_EP©•µãÖj¯¿ÒƒK

    ·$7mý.£)kpf…6ùêòÁ{3Ö¯«h6ƒbTs{ƒ +û-¢Eö ¢IhQÿþ͒ɱ“Å
    ‡Ô3±ƒ#
    •oûP™l7*ze¡P¯üáUoIœÀûß,ÕÞ’ÑqDñ¶n>.…-m zº…

    Õ¬€3#Eßâa7`€ç8ÑÇ`

    N¹’Bg‰kŠÐœlæðÉD>¦Ä„á—C¶*Å:ã› 
    ¾ã*fº ¸\ž
    ~Fø¸™SuÑí
    X=&,†

    Êò¢Ã¡FŸª^U¯dú(æ¥:)a¢`yÔ ð©ÉqHÃ&›óëÅ` Ãl®«<®Ó“qu…#j<–0j4_òí

    ÔøêA£ (½ï8gg­ü´ØÊ5œ×-\Z Lêô¡rLçŸX0Ð൲£ú)KàÌz> 1=^Vå}EÐ[u’4B—,6üqöàÜ%Ù‚sZ’
    ”)N…ÞÔÄ#µ

    “Nµ¶éÄÄI+€¿WúÚOÿuø™€g%+Z_Úgƒ _+¶î½’çOž&³ áØ

    ÛòdØ¡ %ëGn€»„ìԝ¡Ðߎq®õÆc#ÊÞ!:ã‰w}äö ÔQè¥[‡°Ó5Ó-.Ù!’¿×

    »¤<” ÅŠ€'ܹÜäÞ'6ՏÀ8IîÑú
    {˜mÜOó憜
    2ÁÐÓa´¥*¤Ép:*Ç8W»œŸŒeÔv—ðéZE²Ê܇±u ×H`â.WXú[š.@

    /¾“ùöG@ÍŒ•nø™§Ï¶*Ò´îð¨2ìÿ¤ç‡

    dD‡Î`_væ~Aþl’jÏD²m~,dNù/Š™ÌaÅ­)Ù¼¨Wh-I\

    ±ÖRøõ‘¢ùŠK·”Y’T¬ÂÝAI”Ó€÷a9ÈÏ;ø¾ÖU÷ŽÊüQ

  13. Allan Miller@TSZ:

    The house jackpot merely stops the simulation, and plays no part in the ‘selection’ process – it just announces ‘dFSCI’ according to some threshold.

    There is no house jackpot.

    There is a predefined target (in Lizzie’s own words):

    MaxProducts=0; % keeps track of whether the target has been reached
    while MaxProducts < 1.00e+58

    To assert that it “plays no part” is disingenuous at best, since the whole point of the exercise is to generate a string that meets or exceeds the target by selecting strings that come ever closer, increasing the number of such strings as a percentage of the overall population, and mutating them in the hopes of generating one that meets or exceeds the target.

  14. gpuccio,

    At least Allan seems to be addressing your points and Lizzie’s program. I will give him that. I also appreciate his manner and apparent willingness to concede certain points.

    Allan Miller:

    The problem appears to start with the ‘target’ – >10^60. But I think he may be mixing Lizzie’s term ‘house jackpot’ with ‘target’.

    I doubt it. See my post above.

    …fitness of strings is not evaluated with respect to the ‘target’.

    So the whole point of the ‘fitness function’ isn’t to identify strings closest to the target?

    Then what does this code mean?

    Products=sortrows(Products,2);
    MaxProducts=max(Products(:,2));
    WinningCritters=Products(Ncritters/2+1:end,1);

    I’m new to MatLab, but here are my comments on what I think the code does:

    It sorts all the organisms by how close they are to the target value, stores the highest value so she can later test to see if the target has been reached, and then takes the 50 who have a product closest to the target so that they can be replicated.

    In any world I inhabit that is “evaluating with respect to the target.”
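Mung's reading of those three MATLAB lines can be restated as a sketch. This Python reconstruction is mine, not Lizzie's code, with the population represented as hypothetical (id, product) pairs:

```python
def select_winners(products, ncritters):
    """Mirror the quoted MATLAB: sort by product (column 2), record the max
    so the target test can run later, and keep the top half to reproduce."""
    ranked = sorted(products, key=lambda pair: pair[1])    # sortrows(Products,2)
    max_products = ranked[-1][1]                           # max(Products(:,2))
    winners = [cid for cid, _ in ranked[ncritters // 2:]]  # top-half critters
    return max_products, winners

# The critters with the largest products (hence nearest the target)
# are exactly the ones retained for replication.
max_p, winners = select_winners([(1, 10), (2, 40), (3, 5), (4, 99)], 4)
print(max_p, winners)
```

The sort is the whole selection mechanism: whatever one calls it, retention is determined by proximity of the product to the threshold value.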

    Some members of the gene pool (by virtue of having a higher product than others around at the same time) are more likely to have digital babies than others

    Because their products are closer to the target.

    but it serves its intended purpose of illustrating how apparent ‘dFSCI’ – an ‘unlikely’ target – can be located by stumbling around a fitness landscape guided only by differential reproduction of the members of any current population – ie, by NS.

    Guided by the intelligent knowledge with respect to which products are closest to the target.

    It compares those available against each other,

    That’s only part of the story. It does compare them, but the result is to put them in a sorted order according to which the 50 with products closest to the target can be identified and retained.

    In the final analysis, just another glorified Weasel program. :)
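For reference, the "Weasel" pattern being alluded to fits in a few lines. This is a minimal sketch in the spirit of Dawkins' program, not his actual code; the population size and mutation rate are arbitrary choices:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def weasel(pop_size=100, mut_rate=0.05, seed=0):
    """Cumulative selection: each generation, keep the child whose string
    matches TARGET in the most positions, and breed from it."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while parent != TARGET:
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(children, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        gens += 1
    return gens

print(weasel())  # converges in a few hundred generations at most, because
                 # fitness is measured as closeness to the known target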
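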

  15. These people over at TSZ, most of them anyways, are so funny.

    Do they not understand that the reason you start with a randomly generated genome in a GA is to spread the “organisms” far and wide over “the landscape”?

    What sense then does it make to use a single fitness function that applies to all of them alike? Why are they all searching for the same thing?

  16. Jerad:

    I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.

    That appears to be inconsistent with your statement in the original thread.

    It’s possible, highly improbable for a lengthy string. But, in this case, I’d suspect the situation was fixed, i.e. the string was not randomly generated.

    And that was with a sample size of 1.

  17. KF (8):

    Much easier to scan the thread now. Thanks.

    I know what you mean by gamut but it’s not really standard statistical nomenclature. Nor is maximally improbable. [-->'twas never intended to be, but to communicate to the reader] But I know what you mean in both cases so, moving on . . .

    KF: Oh yes you can, if say you saw the first 72 letter for this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non functional ones.

    Me: I’m sorry but that is just not right.

    KF: Oh, yes it is, and you did not wait for 200+ posts to decide that text appearing over my name is not the result of lucky noise on the Internet tossing off one of those statistical miracles you want to rely on

    I see that you’d be willing to make a design inference based on one highly improbable result but I could not. [--> A result that is credibly otherwise empirically unobservable on the gamut of our solar system, per all but zero probability, on inference to best explanation. Maybe we should go deer hunting on the flanks of the Blue Mountains sometime, now that J'ca has a deer population courtesy escapees after Hurricane Gilbert. ONE clear deer track is a sign pointing to deer never mind that someone could fake it or some possible chain of circumstances could possibly make it by happenstance. We are dealing with empirically grounded moral certainty, not abstract proofs beyond all dispute, noting that post Godel, not even math meets the latter criterion] I wouldn’t find that a good statistical reason to reject non-design. You can invoke straw bales the size of the galaxy or whatever but it’s still not a valid decision to reject the null hypothesis based on one sample. [--> translation, I will pick the all but impossible over the empirically well warranted if it is in a context where my metaphysical preferences are at stake. You will note, that a single inscription on Mars would suffice to demonstrate to our satisfaction that someone was there with a civilisation, a point you agreed to in the earlier thread. You have a right to your metaphysical preferences, but you have no right to impose them on others as that which has cornered the market on that which can be termed science. Not that you are doing that, but others with power unfortunately are.]

    If you present me or any reasonable person with a case where you claim the equivalent of the first 72 or so ascii characters for this post popping up by chance and necessity without design, I will conclude on very good grounds that you are pulling a fast one.

    That’s the way the math works. All outcomes in a random search are equally likely/unlikely. [--> I note how you consistently back away from the issue of predominant cluster vs isolated and narrow zone. Maybe, this is in part a reflection of diverse backgrounds, being trained in stat mech gives me a healthy respect for such clusters. Perhaps the strongest laws in physics, those of thermodynamics, rest on this pattern of reasoning.]

    The die in this case has 10^150 sides, and the overwhelming number are all blank. Tiny — relatively speaking — clusters of sides are written with 72 or so ascii character strings in a language. Toss the die and lo and behold the one with the first 72 characters for this post pops up. I would immediately conclude, loaded die, for good reason.

    I would roll the die again. And again. Then I’d start working on the mathematical argument. [--> There was a little mistake in that, forgive me: a die of 10^150 sides is not possible, as it cannot be constructed out of a cosmos of 10^80 ATOMS. But then, that underscores the issue of the gamut of available resources and what can be done. What is possible is the coin tray and scanner or the equivalent, which puts us in the position of a 504-bit (72-character) reporting engine that suddenly reports a complete 72 characters in English. The random text generation exercises show why such an infinite monkeys result is not possible, but the intelligently designed cumulative monkey program has now produced almost all of Shakespeare. See the difference between chance and necessity and intelligent guidance?]

    I watched someone roll a 4-sided die once (a tetrahedron with flattened corners) and it landed on one of the ‘points’ and stayed there. It happens. [--> irrelevant.]

    I got some prime commercial real estate on George Street, Plymouth, M/rat, to sell you. Interested?

    Hey, I was born in Plymouth . . . Wisconsin.

    Maybe, send me a picture and the Lat/Long and I’ll check it out. I prefer to do some research before I make a decision.

    [--> The point is the street is under 20 ft of volcanic ash.]

    I think we’ve come to an impasse here to be honest. I think we’ve both expressed our views multiple times and pretty well. If you want to let it rest here that’s fine with me. [--> I hear your view, but actually, we are at a pivotal point. KF]

  18. Mung (10):

    Do you want me to include all ASCII characters or just those which are letters, and perhaps the space character and period?

    Whatever, it’s gonna be mostly garbage either way. You might go for years before you got a phrase or sentence.

    I agree with KF that for even a 504 bit sequence of 0s and 1s the chances of converting a randomly generated sequence into ASCII and getting anything legible in English are very small. But I wouldn’t infer design if it happened, even on the first go.

  19. computerist (12):

    Yup, mostly garbage, just like you’d expect. Are you sure those are really random though? I was thinking there’d be more variety. But randomness can be clumpy.

  20. Mung (17):

    Me: I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.

    That appears to be inconsistent with your statement in the original thread.

    Me:It’s possible, highly improbable for a lengthy string. But, in this case, I’d suspect the situation was fixed, i.e. the string was not randomly generated.

    And that was with a sample size of 1.

    Yes, I think the situations are very different.

    The scenario you presented included a pre-prepared envelope with a text string written down. While it is still possible that you could match a target string with a random string on the first go, it’s much, much, much more likely you’ve been set up by someone.

    And I guess that gets to the heart of the matter really. I’m much more likely to assume fraud than design. I’d want to be very, very sure before I inferred cosmic design but I do admit it’s a possibility. I know there are people/agents, like magicians, who enjoy fooling people and performing seemingly impossible acts on demand. I haven’t seen any evidence I find credible that there was a designer around much before humans learned to spray paint out of their mouths onto cave walls.

    I will gladly infer design in the manner of a devious human agent!! I find that extremely plausible. But I’d still want to prove the case by repeating the testing with extremely strict controls.

  21. EL:

    However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case?

    No:

    1- You are starting with that which requires an explanation in the first place

    2- You don’t appear to understand what Dembski is saying

    3- You are using artificial selection

  22. Mung:

    Go for the full 128 character set, do please.

    Also, if you can set the coin length, that would help; a manual push-the-button mode (and icon) would be great if you want to go that far.

    KF

  23. The TSZ ilk are pitiful. After all this handwaving about natural selection, their “theory” boils down to this:

    Some things happened in the past and things keep happening and here we are.

  24. off-topic but worth a mention-

    I posted about the record ice extent for Antarctica and as some sort of “refutation” one of the TSZ ilk posts about the loss of sea ice in the ARCTIC.

    Mung, do you have something for that?

  25. KF (18):

    I see that you’d be willing to make a design inference based on one highly improbable result but I could not.

    [–> A result that is credibly otherwise empirically unobservable on the gamut of our solar system, per all but zero probability, on inference to best explanation. Maybe we should go deer hunting on the flanks of the Blue Mountains sometime, now that J’ca has a deer population courtesy escapees after Hurricane Gilbert. ONE clear deer track is a sign pointing to deer never mind that someone could fake it or some possible chain of circumstances could possibly make it by happenstance. We are dealing with empirically grounded moral certainty, not abstract proofs beyond all dispute, noting that post Godel, not even math meets the latter criterion

    I am happy to infer deer where there’s deer known to be about. Deer poo would be even better, harder to fake.

    ‘We are dealing with empirically grounded moral certainty . . . ‘ Not sure what morals have to do with a mathematical discussion. I’ll look it up to be sure but I think that’s not quite what Godel had to say. I’ll check though.

    I wouldn’t find that a good statistical reason to reject non-design. You can invoke straw bales the size of the galaxy or whatever but it’s still not a valid decision to reject the null hypothesis based on one sample.

    [–> translation, I will pick the all but impossible over the empirically well warranted if it is in a context where my metaphysical preferences are at stake. You will note, that a single inscription on Mars would suffice to demonstrate to our satisfaction that someone was there with a civilisation, a point you agreed to in the earlier thread. You have a right to your metaphysical preferences, but you have no right to impose them on others as that which has cornered the market on that which can be termed science. Not that you are doing that, but others with power unfortunately are.

    I’ll pick the mathematically sound choice, nothing to do with metaphysics. An inscription on Mars does not mean there was a civilisation there, there are other possibilities: time travel, crashed aliens, secret mission by another earth country. All about as likely as a lost civilisation on Mars from what we know now. If the Rovers turn up a map or an inscription let me know.

    That’s the way the math works. All outcomes in a random search are equally likely/unlikely.

    [–> I note how you consistently back away from the issue of predominant cluster vs isolated and narrow zone. Maybe, this is in part a reflection of diverse backgrounds, being trained in stat mech gives me a healthy respect for such clusters. Perhaps the strongest laws in physics, those of thermodynamics, rest on this pattern of reasoning.

    I’m not backing away from that. I agree that subsets of the sample/config space containing more configs are more likely to have one of their configs picked in a random sample. Most searches in the real world are not random samples.

    I’m only discussing random selections out of huge sample spaces because of the way you use such examples in your argument for the improbability of such a search hitting a functional config. I think it would be extremely improbable, but not impossible. And if it did happen (and the modern evolutionary theory says it only had to happen once and then RM + NS kicks in) it’s not an indication that the sampling technique was biased.

    I probably should at least peruse the thinking about OoL and the first replicator so I get an idea of how complicated it might have had to be. I hate chemistry. :-(

    I would roll the die again. And again. Then I’d start working on the mathematical argument.

    [--> There was a little mistake in that, forgive me: a die of 10^150 sides is not possible, as it cannot be constructed out of a cosmos of 10^80 ATOMS. But then, that underscores the issue of the gamut of available resources and what can be done. What is possible is the coin tray and scanner or the equivalent, which puts us in the position of a 504-bit (72-character) reporting engine that suddenly reports a complete 72 characters in English. The random text generation exercises show why such an infinite monkeys result is not possible, but the intelligently designed cumulative monkey program has now produced almost all of Shakespeare. See the difference between chance and necessity and intelligent guidance?

    But such a result IS possible. Just highly, highly unlikely. If you randomly generate 504 0s and 1s and interpret them as 7-bit ASCII characters you could get a sensible English text string. There are millions and millions of those. The first 72 characters of every page from every book ever written, for example. Even with millions I agree that getting one of them is highly unlikely. But not impossible.

    Maybe, send me a picture and the Lat/Long and I’ll check it out. I prefer to do some research before I make a decision.

    [–> The point is the street is under 20 ft of volcanic ash.

    My interest is waning. But then, look at Pompeii.

  26. Jerad:

    I think you may need to look back above, where I cited the result of extensive tests done by random document generation exercises, and reported by Wiki:

    One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[25]

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    This shows how spaces of order 10^50 are indeed searchable [Borel was wrong to that extent]; the only problem is that such a space is 1 in 10^100 of the scope for 500 bits.

    So, why is it that I think that a config space, W, of 10^150 possibilities with isolated zones of interest T1, T2, . . . Tn where in cumulative total the T’s are much, much less than W, is in effect so unsearchable for E’s from the T’s that for practical purposes T’s are unobservable on the gamut of our solar system on blind chance and mechanical necessity (i.e. no intelligent direction as was so in the case where Shakespeare has been reproduced nine characters at a time)?

    Simple, I respect the significance of clustering of states, and the fact of an overwhelmingly predominant cluster.

    That is, the search for a needle in a haystack where there is sufficient stack [even on the gamut of our whole solar system], is predictably fruitless.

    Yes, for our purposes we can take any given state as equiprobable, and so in effect any given state is utterly unlikely to ever be found. That is, if we were to devote the resources of our solar system to it for the whole of its existence to date, with our coin tray exercise or the equivalent, WE WOULD PREDICTABLY NEVER WIN THE LOTTERY TO FIND A GIVEN STATE.

    But, if states come in clusters, the probability of the relevant event shifts. For instance, define E as any set of results from a toss of the coin tray. We will hit E on the first throw and every throw thereafter.

    Now, it is not hard to show that W is absolutely dominated by strings of near 50:50 distribution in no particular order. So it would be easy to hit that state, too. Indeed, it would be hard NOT to hit it. That is our straw in the 1,000 LY on the side haystack.
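That dominance of near-50:50 strings is just binomial concentration, and can be computed exactly for a 500-coin case (a quick check of my own, not from the original post; the band widths are illustrative):

```python
from math import comb

def fraction_near_half(n=500, band=50):
    """Exact fraction of all 2^n coin-toss strings whose head count lies
    within `band` of n/2 -- the statistical weight of the 50:50 cluster."""
    total = 2 ** n
    near = sum(comb(n, k) for k in range(n // 2 - band, n // 2 + band + 1))
    return near / total

# Strings within 50 heads of 250 utterly dominate the space of 2^500 configs;
# the fraction outside that band is on the order of 1 in 100,000.
print(fraction_near_half())
```

The standard deviation here is sqrt(500)/2, about 11 heads, so a band of +/-50 heads is roughly 4.5 SDs and captures all but a vanishing sliver of the space.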

    The problem is to hit the star systems in the haystack, i.e. anything but straw. In short, I am putting up a needle in the haystack exercise on steroids. It is notoriously and proverbially hard to find a needle in a haystack.

    Why?

    Because this is the complement to the easy problem, finding hay in the haystack at random. If that is easy, as the hay is utterly dominant [in the 1,000 LY stack, there will on average be a light year of nothing but straw in any direction from any typical point . . . ] because it is the cluster of the bulk of possibilities, then it will be correspondingly very hard indeed, by blind chance and/or necessity, to find not hay but needle or whatever.

    Years ago, I used to set up a thought exercise to make the point clear.

    Set up a bristol-board-sized chart of a bell distribution, say Gaussian. Mark off stripes, say 1/2 SD wide, with the peak in the middle. Mount it on a backing and get yourself some darts and a step ladder.

    Go high enough that you have a more or less even distribution.

    Drop darts: one, two, thirty or so, a hundred. One dart can land anywhere, but the smart money says that if it hits, it hits in the bulk, not the far skirt; the probability of hitting a stripe is proportional to its area, though the odds of hitting any equal area are the same.

    Clustering and relative statistical weight at work.

    After about 30 hits, we predictably will have a reasonable picture of the distribution, through the implications of that same clustering.

    100 or so hits will likely cut into the 5% tails, maybe the 1% ones, but if you ran the tails out to say +/- 5 SDs, it would be quite hard to hit the far tails by chance with a reasonable number of drops. Of course, as more and more hits happen, the sampling will begin to pick up more and more deeply isolated zones.
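The dartboard exercise can be simulated directly: sample "hits" from a standard normal and count how many land in the bulk versus the far tails (an illustrative simulation of my own, with arbitrary sample size and seed):

```python
import random

def tail_fractions(n_darts, seed=1):
    """Simulate n_darts Gaussian 'hits'; report the fraction within 1 SD
    of the mean and the fraction beyond 4 SD."""
    rng = random.Random(seed)
    hits = [rng.gauss(0, 1) for _ in range(n_darts)]
    bulk = sum(abs(h) <= 1 for h in hits) / n_darts
    far_tail = sum(abs(h) > 4 for h in hits) / n_darts
    return bulk, far_tail

# ~68% of hits land within 1 SD, while events beyond 4 SD (~1 in 16,000)
# barely register even in 100,000 drops -- clustering and relative
# statistical weight at work.
bulk, far_tail = tail_fractions(100_000)
print(bulk, far_tail)
```

This is the intuition behind Fisherian rejection regions: far-tail hits are so rare under the null that seeing them repeatedly suggests the null is wrong.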

    All of this is simple and obvious, and it is the basis of Fisherian inference testing. The far stronger chance of hitting the bulk is such that hitting the tail becomes unlikely on the sort of range of opportunities that will be reasonable and typical. So if far tails are showing up where they should not, that suggests the assumptions in the null hypothesis are violated.

    I know, I know, there will be endless debate points on Bayes vs Fisher, etc., and there has been a debate on how dare one suggest something is wrong in the infamous Caputo case, where 40 of 41 elections supposedly decided on a coin toss put D first on the ballot, which is known to have a favourable biasing effect. Yes, yes, I know, I know, there will be all sorts of theoretical reasons trotted out as to why Bayesian inference and perhaps likelihood reasoning is superior, etc. The fact remains that the basic point of the Fisherian approach is reasonable, and it worked well enough that for decades it dominated.

    So the side issues are side tracks away from the pivotal point: samples that are blind tend to pick up the bulk of a distribution early, and that which is unusual as a rule comes up later on, if at all. That is just what the random document generation exercises above show.

    So, we have a very good reason, on the relative rarity of fluctuations, to expect the bulk to dominate, until we go so far along that it is reasonable for isolated special zones to begin to be picked up in the overall run of samples.

    Now, what happens when not even a solar system gives you enough resources to get to the point where you can reasonably expect to go beyond the bulk?

    To see, let’s look at the needles in a haystack discussion in IOSE, starting a bit above the just linked:

    xiv: . . . we can present a key fact, one that Weasel actually inadvertently demonstrates. That is: in EVERY instance of such a case of CSI, E from such a zone of interest or island of function, T, where we directly know the cause by experience or observation, it originates by similar intelligent design. And, given the long odds involved to get such an E by pure chance — you cannot have a hill-climbing success amplifier until you first have functional success! — that is no surprise at all.

    (The Internet and the major libraries of the world, together, have billions of successful tests of this claim. On years of experience with suggested counter examples, they are consistently dubious or outright errors, as a rule being illustrations of the very point they were meant to oppose. E.g. the drawings of canals on Mars from 100 years ago, if they were of real canals on Mars would be evidence of a Martian civilisation. Alas, they are inaccurate, and instead are drawings that were intelligently designed to show what the astronomers of that time thought they saw on Mars.)

    xv: Why should this be so? Let us consider: in the 10^17 or so seconds on its conventional timeline the 10^57 or so atoms of our solar system (our practical “world”) will have gone through maybe as many as some — oops, corrected 12:06:01 — 10^117 Planck-time quantum states. (We note, it takes about 10^30 such for the fastest chemical reactions, and many more for the organic chemistry type reactions relevant to so much of cell based life.) But 10^150 possibilities is 10^33 times as much as that, so our solar system could not search out more than a negligible fraction of 10^150 possibilities. Where, we can see that a string of 500 bits has 2^500 = 3.27*10^150 possible configurations. For just 500 bits [~ 72 ASCII characters], on the gamut of our solar system, there is just too much haystack to reasonably expect to find the proverbial lost needle.

    xvi: To understand this better, let us work back from how it takes ~ 10^30 Planck time states for the fastest chemical reactions, and use this as a yardstick, i.e. in 10^17 s, our solar system’s 10^57 atoms would undergo ~ 10^87 “chemical time” states, about as fast as anything involving atoms could happen. That is only 1 part in 10^63 of the 10^150 possibilities. So, let’s do an illustrative haystack calculation:

    Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up as many cubic metres [--> i.e. 10^57]. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, so the stack would be about 1,000 light years on a side. If we were to superpose such a notional haystack on the zone of space centred on the sun, leave in all stars, planets, comets, rocks, etc., and take a random sample equal in size to one straw, then by absolutely overwhelming odds we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
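The magnitudes in xv–xvi can be checked directly; here is a quick sketch that just reproduces the arithmetic quoted above (Python’s arbitrary-precision integers handle the numbers exactly):

```python
# The 500-bit configuration space versus the solar system's
# "chemical time" search resources, as quoted above.
configs = 2 ** 500                        # possible 500-bit strings, ~3.27e150
samples = 10 ** 57 * 10 ** 17 * 10 ** 13  # atoms * seconds * fastest-reaction rate = 10^87

print(f"{configs:.3e}")           # ~3.27e+150
print(f"{samples / configs:.1e}") # ~3e-64: less than 1 part in 10^63 of the space
```

So even granting every atom a state change at the fastest chemical-reaction rate for the whole timeline, the sampled fraction is negligibly different from zero.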

    So now, the root error is redefining the problem from the real issue, observationally distinguishable clusters of states, to one of picking individual states. Yes, individual states are equiprobable, but clusters are NOT.

    And so, it is highly reasonable to observe the pattern of likely outcomes and to think on the clusters.

    A Royal Flush, individually, is as probable as any other hand, and the four Royal Flushes are collectively four times as probable as any one hand; but there are so many other possible hands that the odds of getting one are only about 0.00015%. That means the odds of not getting a Royal Flush are very high. But 4 in 2.6 mn is a winnable lottery on the sampling resources here on earth, so we would not find getting one even on the first try excessively improbable. That is why getting several in a row would be needed to trigger our suspicions.

    But, why be suspicious?

    After all, getting ANY particular run of hands would be exactly as improbable in the new sample space of clusters of, say, five poker hands.

    And the fallacy surfaces: we are comparing clusters, not individual hands. And Royal Flushes are a definite zone T that is very special indeed in W, the set of possible Poker hands, so the best explanation for several in a row — that is, the odds are sinking: five in a row is of order [4/(2.6*10^6)]^5 — is that someone has artfully picked a sampling frame that makes such hands all but certain.
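The poker figures are easy to verify; a quick sketch using the standard library:

```python
from math import comb

hands = comb(52, 5)        # number of distinct 5-card poker hands
p_rf = 4 / hands           # four suits give four Royal Flushes

print(hands)               # 2598960, i.e. the ~2.6 mn quoted above
print(f"{p_rf:.6%}")       # ~0.000154%, the ~0.00015% quoted above
print(f"{p_rf ** 5:.1e}")  # five in a row: on the order of 1e-29
```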

    So, when we look at the solar system, we see the same problem: a cluster of about 72 ASCII characters in English, such as the opening letters of this post, comes from a vastly improbable special zone, one that is independently describable, and so specific and functional. NOT landing in such a zone but in the bulk of the distribution is also describable, but utterly non specific and non functional.

    As a working out of the binomial distribution will soon enough show, that cluster would utterly dominate the set of possibilities. So, we are dealing with in effect an unwinnable lottery — lotteries have to be designed to be winnable, BTW — and so if we have won, that is suspicious so long as design is a POSSIBLE explanation. For, lucky noise on that order is not normally credible.

    This brings us to the heart of the problem.

    The real game is that the evolutionary materialists who have dominated institutional science, education and pop sci promotions in recent decades have injected a question-begging ideological a priori, even into the definition of science and its methods. It is because they have rigged the game to make it seem that a designer is not possible, or at least not a “properly” scientific possibility, that we are seeing the debate we have been having.

    So, the real issue is, is a designer possible in the relevant context, OOL etc?

    The obvious, simple answer is that, with the work of Venter et al in hand, that is so. For it is highly credible that within several generations of work, we will be able to have a molecular nanotech lab capable of building from scratch a nanotech, molecular machine based, encapsulated, gated, metabolic and self replicating entity that uses a vNSR mechanism to effect this last. No physical impossibility blocks the way; the basic techniques have been demonstrated by Venter et al. It is a cost and further development challenge, not a roadblock issue.

    So, why should it be deemed effectively impossible that say an advanced molecular nanotech lab seeded earth with original life? Surely that would be sufficient to explain the OOL on earth.

    And if that is possible, we should be willing to accept that evidence that is best explained on design should be allowed to point to design as best current explanation.

    Regardless of whose feathers are getting quite ruffled now.

    But, but but — don’t you really mean that God is the designer, and aren’t you injecting the supernatural into science’s hallowed secular halls?

    Nope, Barbara Forrest has been playing you like a piano.

    Right from the beginning of modern design theory 25+ years ago, the technical work has been careful to distinguish between inference to design as causal process on empirically warranted sign, a scientific enterprise, and speculations as to who the designer[s] of life on earth may have been.

    But, it is convenient for ideological reasons for rhetors like Ms Forrest to pretend otherwise and erect grand conspiracy narratives dripping with hints of the Inquisition being re-established.

    [Actually, it still exists, over in Rome as the Congregation on the Faith or something like that. No more thumbscrews, though; doesn't fit the ambiance of air conditioned seminar rooms that are the vogue in Rome these days, what with former college philosophy profs being made pope and all. And, nope, there is utterly no danger of such ever being imposed again. Though, if you entertain the taqiyya-laced blandishments of IslamIST Muslim Brotherhood spokesmen -- who have a declared intent of establishing their religion as supreme in a global empire over the next 100 years or so -- you might find yourself facing a swordsman about to chop off your head for being a rebellious kafir who will not submit. No wonder more moderate Muslims in Algeria were the first to warn that such are Islamo-Fascists.]

    We need not waste more time on Ms Forrest’s conspiracy theories.

    The issue is clusters and it is reasonable to look at the divergent probability of clusters and to compare the alternative hypotheses to explain outcomes. Arriving at FSCO/I on blind chance and mechanical necessity is just short of being outright utterly impossible. Arriving at it on design is not.

    So, if we see a case of FSCO/I; on inference to best current empirically grounded explanation, design is the best explanation.

    It is as simple and as reasonable as that.

    KF

  27. Joe, (25)

    The Washington Post article does point out that the Antarctic ice expanse is 5 – 10% above the 1979 – 2000 average, whereas the Arctic ice expanse is almost 50% below its average over the same period. They also look at the trends in both regions.

    The jet stream may be shifting, and England, where I live, may get, on average, colder because of that. But that doesn’t mean the earth isn’t warming overall.

  28. KF (27):

    I agree with just about everything you said (and I really enjoyed the parenthetical snide aside about the Catholic Church) and I think some of it was very nicely put too.

    I agree with you that the cluster of nonsense strings is IMMENSE compared to the cluster/subset containing ‘sense’ strings. I agree that you could randomly search for years and only get garbage. Years and years. I’d be a fool to deny that. It’s completely obvious.

    But, you still could get the first 72 characters of the KJV or any other sensible string on any given try. Two in a row would be almost inconceivably improbable. Three in a row . . . Or even 3 out of 10 . . . might not ever happen. But it could. Purely randomly.

    I must look up quantum tunnelling . . . there’s some stupefying improbability associated with it. But it does happen I gather.

    Anyway, I haven’t got anything new to add or say. I’ve never even finished reading Dr Forrest’s book by the way although I have heard her interviewed a couple of times. We can argue about whether the Discovery Institute has an agenda if you wish. I listen to ID: the Future whenever it comes out and I have corresponded with Casey Luskin although it was a long time ago. I do try and keep up with some of what the ID community is saying and publishing. But not because I think they’re trying to overthrow science. I just want to know how they see the situation.

    There’s a good reason why the Royal Flush is the highest hand possible in poker, and that’s why I never play 5 card stud. Too boring. Give me some cumulative selection any day! :-)

  29. To Zachriel (at TSZ):

    You say:

    In an evolutionary algorithm, Hamlet would represent all the multidimensional mountains and valleys that make up the fitness landscape. And yes, a map contains a lot of specified information.

    It’s an old issue that we have certainly discussed many times. But, just to have the pleasure of debating at least a little with you again, could you please explain how your concept of a multidimensional fitness landscape can help find the right functional sequence for a new protein domain, with a new protein structure and a new biochemical function, for instance a new enzymatic activity, from unrelated DNA strings? What kind of multidimensionality will help you to explain that? Just to understand…

    I know: as your friends say, “evolution has no target”. I agree. That’s why it finds no targets, indeed!

    And yet we have the little problem of those thousands of functional targets that are the existing functional proteins, so different from non targeted amino acid strings that are good for nothing.

  30. To All:

    Dr Dembski has a new post up at Evolution News and Views:

    http://www.evolutionnews.org/2.....64871.html

    Intelligent design, as the study of patterns in nature that are best explained as the product of intelligence (such patterns exhibit specified complexity), subsumes many special sciences, including archeology, forensics, and the search for extraterrestrial intelligence. None of these sciences concludes — full stop — with a designer. Rather, once design is inferred, a host of new questions arise. Given an archeological artifact, for instance, what is its function, what group of people was responsible, and what technologies did they have available? Given a death by unnatural causes, who was the perpetrator, how did he do it, and what might have been his motive? Given an intelligently produced radio signal from outer space, where are these aliens, what are they trying to communicate, and are we ever likely to encounter them directly?

    More generally, once the possibility of design detection is raised, the following questions readily present themselves:

    Detectability problem — How is design detected? (answer: specified complexity)
    Functionality problem — What is a designed object’s function?
    Transmission problem — How does an object’s design trace back historically? (the search for narrative)
    Construction problem — How was a designed object constructed?
    Reverse-engineering problem — How could a designed object have been constructed?
    Perturbation problem — How has the original design been modified and what factors have modified it?
    Variability problem — What degree of perturbation allows continued functioning?
    Restoration problem — Once perturbed, how can the original design be recovered?
    Constraints problem — What are the constraints within which a designed object functions well and outside of which it breaks down?
    Optimality problem — In what way is the design optimal?

    And a lot more.

  31. Yup, mostly garbage, just like you’d expect. Are you sure those are really random though? I was thinking there’d be more variety. But randomness can be clumpy.

    Those are very much random.

    If you have python, here is a very simple script you can play around with:

    import random

    def bin_str_to_ascii(bin_str):
        # Decode a binary string into characters, 8 bits at a time.
        c_ascii = ''
        l_ascii = ''
        count = 0
        for c in bin_str:
            count += 1
            c_ascii += c
            if count % 8 == 0:
                l_ascii += chr(int(c_ascii, 2))
                c_ascii = ''
                count = 0
        return l_ascii

    def randomize(s):
        # Shuffle the characters of s into a random order.
        l = list(s)
        random.shuffle(l)
        return ''.join(l)

    start = ''.join(random.choice(('0', '1')) for _ in range(504))
    for x in range(1, 100):
        print(bin_str_to_ascii(randomize(start)) + '\n')

    Here are a few dozen more (this time with the delimiter ========= to indicate the start of each sequence):

    =========âý\Uº’ã2Ã]&¿éÿî>Hùá$Ioá«ñ©½>ç‹Pþífn`MË
    ›+¯®W”G[“Œ0Š+;m

    =========9€qcrg¹ðØÿJnâÄPUm^žŸªn÷òÑqîuòi,ß‚6)s.õ·ÜéöQ³…(Rìºá¾R|¯yÖ6gF

    =========þµ&ÊÈ6
    ŠcÙøŽ6ûií°5hq¨ûý®ÍoE^98öá¼$ŸÜô=ÕµHҴ؇þµjjn–·^ÊÐ

    =========] ]øw5³•†iHý>®âÎúNÉå◾Šήí óë®rÒ>Qì‹Åãk´ñ4mª
    ´˜íâ–rûùKÙ

    =========²0GÉê¾ZZæçõwƒ°kôô.ò¸ñ´(¯2`£–øTd¿õçw¥Ÿí³.ÊÓîƒv£]ÏÉ€Ê1yáù>¡

    =========?lýî¾?ôjÛÔVf8h²‡ÑèR{6n›¬Þ¯èo¼YLt4OôŽ^¨yæ`õ5yð>HŒLÅ#OGŠ

    =========²Ý«&ÌZù¶
    =========î:Ÿo?™g’xí%ZŸù$ñ:Š¶qKŸ½Ñ|+^ϐ凁ß%ýEÎ!Æ^Ù«!ŠUô

    =========BHíÛ9Jëß^îå l˶vôPCð%O™ŸÁ¬

    =========ñ8Jw=w‘±>»¡›*!øþš©»’éó®ÞAÄ[9IDÄéÿéÍ;c~·ãNSªs£3k#6 Áþ£þPû

    =========ºâB­=¬
    =========~_‚Ñ÷sj‹â¿âjÏdJÖit•Z$ç}0À¿õ\Ù¯rfWÀrìpïRv—Q&T­íZÚ¹þõÍ"íWcâ„¢$y”W۔L——Ä‹»KŸ…íñÆÌBVã-îp¿ÆÃÓ/õ

    =========&Øš
    û
    =========‹®yóÙ7&}í§~Õ›N#9š¾aô×îÖ«8dº<Õ¸–Ò¾d=MÂ¥°ìßŲÍ`Š,,u]Ì-©{°‘ìŸ~s

    =========£}<huA]ɝ?Aoê´°PJy“‹ÊžžÙMÇ„ÏïÿãIÁ7ŽÞZé ŸËZ¦%T€ðsìÀL2Aå§îsÍÅy½õ>}+¶î•

    =========…ÏN·£’’Æý3×ϱË‹‹ïœ¯!Å Ô漤øsÙ·‚ž+8:q°ˆkUÆŒ@ö›ÿÛ•Ÿ÷v-Ìüõ´“
    +¥èë{

    =========7ÛƒáfQ~‰ó¨ÁMžÑã ¤™bóýþiä
    \ÄÏïrK^¨±÷<úv°ÍlÑÚXm+½…ªÿæ¤j’ãšP\4_¹

    =========1ŒÈ#Pð–×Qs`GÁ¼£…ˆ»…Z³~6‹ø3š¶ºs ËÜgñÓÓ÷½iWƒ4Vb÷4ö/”i6ñc¿pW

    =========›Yû³wmµø¦ÂÆz§†ÙÿïÀòæ¡Užèl½ö2RO9ÜoïÙÝ ²#iÁÌ´YÈa
    Ë¥Ô±÷$sçž`WC

    =========ù—-
    Zºp 4<é…Ù’b÷øß)·×¯ïŒçx}&‰Tâ…Ó«ýÒíiccü€Ÿ 70½–t%³™ƒ<¡ð

    =========Cèsr;·^5£wmó^áwX*; –p7늓°Â|n=W±©¿œÿc)

    =========Ø¿¹>ãyÙ
    ËPïk\ü=0„GPŸg¡Üޝã_B{»§‰èLj1ûßý¯¨ÔMn@Ö³zoKÎdŸ1
    Bb_

    =========°®¶æÚp§¼‹Cž³ÖrøÐsv3Ãð“—Ëœ”W`ýÿI§»’—¦øöãWŠòµO ²ãÊÖŠèš±´ß¨&

    =========”–즐^ýtL¡´~º.1Mí4£µ¸¦˜ãA`ÂéJP÷âwÊ´G¿¿Èb³xBþeuûäª,n~‹Óf

    =========$
    =========¯ÂøØ’%z%Е·‚Ò„¹_‰èÃïñd¯7ôk8ì…ËWG8ïùj/Ž?
    }cGs›-;¢ßóT;W³ë

    =========zc× 6?‚˜ÏúåëfÞ•deö¯òÉ9DRŸ‡âÍQà¦ýÛ-ù÷@ÍAÀ³O¦÷’õW¦y§@ôW§

    =========aø[jšEžG*Ôõƒë¾g¹ÇÃÒeºpåß×·yÔº‡-¸Ã_3Ãn*ŒÝi©ôY„ÞîqL/ó Ƚ–Zm

    =========š¦Œ.fyüVßâÕv=7ÃSLLc™ ÿEç¯^mt[ÙG¯Tn{É¢tèÈÕµl⿤Mk³ùmÿ

    =========6\ŽýC1Dý
    =========úfØ–<vÅ“@½ªºôñÛ¢Áœ¼TœÞ_ÖÏSÚtpš¿-× ’t­}y3Íl’gaþ—ŽC5Ÿe¿B{£*³øŽ˜

    =========OlÎÑ
    °$›¤·“=Ý­ò¹žsOUýǧßç ƒ’Â[;ÅMnÓýÓWq,É ·ÏNH©ÁῬWßY$.|:[

    =========qq¾ñâð@ùø³kxŒšÖÃÒÿ{¾C„‹=ÿðû²3¤FŠO´±
    øòþûµ!G©cƒvï;VtÅ69P†

    =========ª¤íôшû›VOA•Åõxõßj¤¯ã®K9X?K"·Å~Z¾f…×°£o{Ľ°¶™©Žf€%3ñÃÔJ

    =========¦ÇnNZ5»ö¡Qtþ½c®»Æ+ké4ÍÑÃ>6ftñýð›œO,BŠX“.l?)ß½¢ÒÅSK†ùh¯d1$Z]–

    ========= ±RïGŠqMÞsS©Wž‰ÖÝŽõH›åc5[¬•¯¾¾mæΏ³hœ¾ ©·‹œuì-PËŽýÑ›_

    =========ÿƒ¡IX#ñµ¡ýM"|ù{êíË¿Úa(‘D®³“(æÃ6ïºÕrÈtCä±ööÑîû}H3¯ÛÝòÇ N>)

    =========a>K%>BÅ
    =========D^6,.
    $PßÁ½½èjÿØ{õ³¹F'L‘×zj_ÙÕ3›)vdèL_ »Ç3Oîü÷×ې¹¦+Úæä

    =========±Ëí­]ݘtÂjãþéæ?†_°ˆ­åÒùP+¤k·J&ÒªØ4:·ú×wã1)¶FK²žlXòžW–ÑÒ—

    =========§p]÷[~Tï×€¾ÿ™{øä"­};Å“¬—‰,©ÿ‡×4Ã…b„xø°ë ²IŠÏ

    =========÷ŠÒw&ä±¥ÔòÊäÏÀî„!Dû”n½A|EŸÏOÞû×ùÎkÝþí^$!äQ#ºß°ä,‡ð²

    =========aë[ïðk«šÄ»®Ö3¹Ûœ;§Ä·ïNhq&øÒ,“2%;ƦkI„*ñ̏ªúû0ë¶Í®‡®ªØ PÿÛº&QqÝÞ•óÒ}XµïôÖñ=}Öâ‡h©®ð<¡‚¨

    =========«ÌhØÁWæRÝaZI—ÅÈd뫾`ω…Ê_ú„SÄû¿¼k5F•œ¾Ü/åÖ-|Š

    =========_žä¿s¶¢„Ä<Õ d~8²NÂ¥HnϯÚmN
    û×:ñ·E|¡DYÃœ)a¼ûô¼Ñ
    ã±z„?%­ØËW¯Ì2³_°äp¯†’ِ>žïšÞ™*ó¾sÐé1.éL|®9¬ëÃÿ*ÚGˆhq.®

    =========òpõÄg@kf?lrT[ÈÝ'0Q/ú1NÛ¼AzßÞÏuØJ﩯Òwé6n ùœQ&͆ùÒpNójbµú]

    =========%ñÓ“ù¿ô©“Ùx–; ,¤uÀ;/Œ1dŽ,­ƒ½Úߏn\Ý^å_Ô”ëèr¢C¾÷ÕŒ×;tüy

    =========įiõR’åïÒŸDÒ Kß3Šè§Þz¢]þnûò%î9îN¶NlŽFžÚQžÉ^Ô8µý¼JCuÂTCÄF

    =========£qày딞’Ÿýý*Ít¯kx–f^@‹#¿½m*Ã(àá–ÌÙm¿ÄaÙÚÞ˜¯“Þ_

    =========UiAÝÍâ`ç¿‹ý6y»ºuAu½ßø&Z@ŸÛÅã3î¢$!û5‘ɲìe‚tW~hʱœì¸Gy*

    =========OÑé¬_KŸMˆ‰g³ág~¾YůöZ¯Ýk¨á¯’ ¼`E×êW÷
    ݺ¦žx©G-nzÞYäŠÏ¯×DÊJ“I

    =========³Ài[÷æmÅøÑ×Ñ/lBªþû½;,ÿáñ_9 :ÜQC…9&v¶[ÐÿUø?„­,?Ã͐U Œ7r5ƒ

    =========Ð(­ËjÅsÎæ:m{[G'ªX+ŸÏ½xOþ¼´Þ3,×xŸà’êqÓb,—‡Xgˆ ¥œnI$_7sððüãÒ*

    =========;;RDþ(=ú,É™¶ö¿1Ï¢‹ÿ—çÿ´ìÏ(™²õûCVáVeÁÀ]¤­’;7NÑÊ3ç˜ÒÆ–ó

    =========;ÿÈìêé{¥îTŠ«¾ÞÖòâö|¼,† ²˜/ŸÓÖ‚gó0Õpªú+$(üÚÛÍ(²hi€eOrLØã\Üó?_

    =========wO ”å©ñT»Ï­Ï¯4&½‹Ðíë©Î±å”#£5Ü
    UTzÙ]{Ïjê`_ýŽB–|tÌôúc}½Vþ

    =========”SfôFvÈÛÇÚ+áyª…ß/+—YŠW çN¾x}ú ?÷ó{‹ÔOd+ccö(ÿq švOšP&MÏ®ÕË

    =========ÿç tA:!7±¦ñ»z£2öµý1äv¶pÆú‰•å`úX_
    ™%§rà1|rûÙfûÖCÎ/îp³kqºáçG

    =========t½Í¤çm(Sê^ÏR#ƒ¬Ieß(µÒö¯—¾ê)±ŒNeù¡}%àŒÛçÑãìfø;±îÁ¦(árïÈËuaÖ

    =========çx½ðà#Ÿ?[É€°þì¸Àž?lXìäáÜKõwŸ…ô5¯¹¹8Ù@8Ÿ>oWbFf·VŒÚNdš!ʺÖÇŸ¦F#

    =========_§Ií¾·ÒŸ®#FU·ûjO+`ß,7èlÝøÁFÕaŽÈôgHÛ¿Tg°c©¼}Î

    =========1ë1i¦þ€p q ~¿oŽÁãO­PÀnÂãŸòYø½¦ú j}iCî÷á6iß`/Sìž¡ÆVô¥“Ø®U´

    =========¶f%^Äð-ÊÀÝÔÕCÒñU“Gžha°_jÂô“ Úå—}÷zT;÷|;`<Ñt.ùÇa*ê¿.ÿ/xQغEùÒ

    =========6¹úô¿Ò¾º(Þ†µÎœî °7°¦p8íñÐ|àRÍù“\n)^Œ8½­4:š4´È}.Év»ÿ“CYjÊü‰

    =========6áóP_Çý’ÊrëfwþzæðtS”è›(œ©fïãÕð—;Ò^È–aÈst¬AvFÁoaDãó¦Þ~3î_+ž

    =========qdþ·*ïäx9׎›’…ý‰çŸ<.îsXÊØ×ËÓÇyŸ90”~{
    ³…¡-E^ÕQè/gú?§Ïu9P$©,

    =========¼‘½¨v@ž{œzé´­kn²®Ëyœªc
    ®'Co÷‚±*m¹h^5ÿî€"¾ÛëGO☂íRÐ}#wKZ

    =========ÔûI¶r_ÕáÛ±~°…FïÄAß!NAz-¼)Ã¥AS6u«Ó0£r^ÇÝ Gõô_df?ýÏfzߏ” b»

    =========xS§$Þ— Fˆg*ZP·C²»¿ûßäì³ÿYbˆzbtˆüT»«VEÁ oåéJÿfyY

    þ9SÛªr´!ÿ4

    =========Cyr 5Œ‹+q¾ó—-„Àí󽻄U¿czÊ“Ý_jóÅTD'î„&º}‰«xIÛÓõùA tfüõ

    =========þh¨Sº¹ßQ3ÇO³&W“¤Êåê¬÷€qß

  32. To Allan Miller (at TSZ):

    Ah, good old civility! People are being dishonest, now?

    Some certainly are. And to call things for what they are is civility.

    I don’t think, anyway, that you are in that lot. I appreciate your comments, as I have already said.

    Just to clarify this, as I can see it being misrepresented, more conventionally, these would be termed beneficial and deleterious.

    I appreciate your clarification, and I don’t think I have any problems with those terms. Anyway, you may have realized that I am not speaking of population genetics here, but just dealing with the causal logic of the neo-Darwinian explanation.

    In the language of my argument, therefore, the relevant concepts are:

    a) If a starting gene (such as the duplicated, inactivated one in my example) gains some biochemical function that can improve reproductive fitness, I call that a positive functional mutation, one that is positively selectable by NS. If the selection happens, the result is that the new gene is expanded in the original population. The concept is simple: in the beginning, the new mutation is necessarily present in a single individual. But, thanks to the reproductive advantage of that individual and its progeny, after some time the new gene is represented in the whole population, or in a good part of it, partially or totally eliminating the original form. That’s what I call “expansion”. Thus expansion is very important, because it is the real factor that lowers probabilistic barriers: if a mutated gene expands from one to, say, 10^9 members of the population, its probabilistic resources to accept a new favourable mutation are increased 10^9-fold.

    b) At the same time, mutations that lower a function, or eliminate it, can be negatively selected, that is eliminated. That is a very important element too, because it makes functional genes tend to remain functional. In general, it works against evolution.

    c) It is true, however, that if a first mutation is selected, because functional, from that moment it is also preserved, to a degree, by the same principle of negative selection. That helps the effect described in a), because it means that the “work” already done will probably not be lost.

    That’s the most I can say in favour of NS. From that perspective, I calculated that the theoretical power of a single perfect selectable intermediate is very strong, although certainly not omnipotent. Its practical power is certainly much lower than what I showed.
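The “expansion” effect of point a) can be put in numbers; here is a minimal sketch, where the per-replication probability of the specific further mutation is purely hypothetical:

```python
# Chance that at least one carrier receives one specific further
# mutation, before and after a gene "expands" through the population.
p = 1e-10  # hypothetical probability of the specific mutation per replication

def at_least_one(p, carriers):
    # Complement of "no carrier gets the mutation".
    return 1 - (1 - p) ** carriers

print(at_least_one(p, 1))       # a single carrier: ~1e-10
print(at_least_one(p, 10**9))   # after expansion to 10^9 carriers: ~0.095
```

The jump from ~1e-10 to roughly 0.1 is exactly the 10^9-fold increase in probabilistic resources described above.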

    But the problem is always there. Even for one selectable intermediate, a lot of logical problems remain:

    1) What is the function of the intermediate?
    It cannot be the function of the target, because otherwise the intermediate would be the target itself. A member of a protein family is not an intermediate to the family: it is part of it.
    It is not the function of the starting sequence: indeed, if the starting sequence retains its function, its structure will be preserved by negative selection, and evolution towards some distant target will be impossible.
    It could be some other function, but then why should it be a step to the target function?

    2) If the gene is inactivated, and can therefore mutate freely, it is by definition non functional, and obeys pure laws of random variation. How can it, then, find new islands of function?

    3) And, even if it finds a new island of function, how does the organism “understand” that a target has been reached? Indeed, transcription and translation of the new gene should be reactivated, if they were inactivated, before any effect on reproduction may manifest itself, and NS may enter the game.

    4) How is the new protein function, found by RV, immediately integrated into what already exists and works, so that it may improve reproductive fitness enough to be expanded? Most protein functions are highly integrated in complex molecular machines, as Behe showed long ago. Even a single new protein, biochemically functional as it may be, has scarce probability of achieving immediate integration and success.

    These are only some thoughts, just to go on a little more with a discussion that, I believe, no longer has great chances of reproductive success in the combined environment of our two blogs.

  33. computerist (32):

    Those are very much random.

    If you have python, here is a very simple script you can play around with:

    Well, that’s about as good as you can get with a packaged random function.

    And again, mostly garbage. Some runs ‘look’ short which I guess means lots of unprintable characters, control codes, blanks, etc.

    Anyway, nothing to see here!

  34. computerist:

    A discussion of some of the issues with using random number functions in code:

    http://support.sas.com/documen.....281561.htm

    Mostly you get pseudo-random numbers. Some are really bad in fact.
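The “pseudo” part is easy to demonstrate; a minimal sketch with Python’s standard random module:

```python
import random

# A pseudo-random generator is deterministic: re-seeding with the
# same value replays the identical "random" sequence.
random.seed(42)
first = [random.random() for _ in range(5)]
random.seed(42)
second = [random.random() for _ in range(5)]

print(first == second)  # True: same seed, same "random" numbers
```

That determinism is fine for simulations like the coin-toss script above, but it is why the bad generators discussed at the link can show detectable patterns.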

  35. Mung:

    Gee which definition should I pick? :-)

    From Wikipedia:

    Mung may refer to:

    Mung (computer term), the act of making several incremental changes to an item that combine to destroy it. Defined as “Mash until no good,” or recursively as “mung until no good.”

    Mung bean, a bean native to Bangladesh, India, and Pakistan

    A type of animal territory in which females of a certain species gather to demonstrate their prowess before or during mating season: the counterpart of Lek (mating arena)

    A fouling material (a disgusting substance)

    The common name of the brown algae Pylaiella

    The god “Lord of all Deaths” in Lord Dunsany’s seminal fantasy work The Gods of Pegāna

    A transliteration of the Korean word 멍멍 (pronounced [mʌŋmʌŋ]), an onomatopoeia for a dog’s bark

    Mung Daal, a character in the cartoon series Chowder

    Rafael Cabrera Airport (ICAO code MUNG)

    A slang term for leukorrhea, (or a clear, white, or yellow vaginal discharge.)

    MUNG, acronym of Military University Nueva Granada

    Not the first one surely!! hahahahahahahahahhaah

  36. Jerad,

    Global warming is good. A cold planet is bad. And besides, it appears the rising temps have been due to a solar maximum, which unfortunately means the earth will be cooling off.

  37. kairosfocus:

    Mung: Go for the full 128 character set, do please.

    Also, if you can set the coin length, that would help, a manual push the button mode (and icon) would be great if you want to go that far.

    : coin length

    The user should be able to specify the length of the string (i.e., the number of coins in the toss).

    : manual mode

    The user should be able to specify the number of rows to output (how many strings will be generated and output) in a given iteration.

    Does that represent what you have in mind?

    Probably no icon, unless I turn it into a web page. At first it would just be a console-based app. You may actually be able to run it by entering it here:

    tryruby.org/

    Once I write it, that is ;)
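Until the Ruby version exists, the spec above can be sketched in Python to match the script earlier in the thread; the function name and the run at the bottom are just placeholders:

```python
import random

def coin_toss_strings(length, rows):
    # `length` = number of coin flips per string ("coin length");
    # `rows` = how many strings to generate in one run.
    # Decode 7 bits at a time to cover the full 128-character ASCII set.
    for _ in range(rows):
        bits = ''.join(random.choice('01') for _ in range(length))
        yield ''.join(chr(int(bits[i:i + 7], 2))
                      for i in range(0, length - length % 7, 7))

# 504 flips -> 72 seven-bit characters per row.
for s in coin_toss_strings(504, 3):
    print(repr(s))
```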

  38. Thanks, that’s about what I was thinking. KF

  39. Allan Miller@TSZ

    I do wonder what GP has in mind when he says “new protein domain” or “new biochemical function”?

    http://www.thefreedictionary.com/new

    One that does not exist in any prokaryote but that exists in all eukaryotes. Would that qualify, in your mind, as a ‘new’ protein domain or biochemical function?

    I don’t see what is so difficult to understand here.

    Protein domains along with biochemical functions, presumably, arose through the same evolutionary process as eyes, and wings, and tails, etc. All currently identified protein domains did not exist in the LUCA, did they? Do prokaryotes have rhodopsin?(For all I know they may, lol.)

    “In a bright light rhodopsin breaks down into retinal and opsin; in the dark the process is reversed.” I assume there is some biochemical function involved in that process.

    The only proteins we need to concern ourselves with are the ones that exist, and their accessibility from other points in the space that (on the evolutionary assumption) were occupied by ancestral sequences with similar or other functions.

    Isn’t that pretty much what gpuccio said? He grants the families. Where is the evidence for the ancestors and the functional intermediates?

    The essence of proteins is modularity.

    You mean they contain instances of protein domains?

    Isn’t that what gpuccio is saying? And then you go off to talk about proteins when gpuccio is trying to talk protein domains. Who cares if they get swapped around? Where did they come from?

  40. Zachriel@TSZ

    The algorithm also allows exploration of how evolutionary algorithms can evolve entirely new sequences by mixing and matching existing components.

    The components are instances of some protein domain. Where did the components come from?

    Keep in mind that proteins are far more flexible than words.

    That’s debatable, and probably irrelevant.

    http://www.merriam-webster.com/dictionary/run

    The 424 Definitions of the Word “Set”: by Lee Andrew Henderson

    Many folds are made up of just a few amino acids, with the rest of the protein just a scaffold. Multiple folds can often perform the same function. Many proteins have multiple domains and multiple functions, and rearranging these domains can create novel structures.

    Who cares if they can be re-arranged? Sounds like modular design to me.

    Where did the domains come from?

  41. Why do I get the feeling that Jerad is MathGrrl?

  42. Why do I get the feeling that Jerad is MathGrrl?

    Nah. He’s not.

  43. gpuccio:

    3) And, even if it finds a new island of function, how does the organism “understand” that a target has been reached? Indeed, transcription and translation of the new gene should be reactivated, if they were inactivated, before any effect on reproduction may manifest itself, and NS may enter the game.

    Yes. Isn’t it just magical how yet another fortuitous mutation came along at just the right time, the time when if we take this sequence and turn it into a string of amino acids (polypeptide) it just happens to have a selectable function?

    Maybe cells are intelligent?

  44. Juartus & Mung (43, 44):

    Why do I get the feeling that Jerad is MathGrrl?

    Nah. He’s not.

    I’m much better looking. In low light. After 3 drinks.

  45. lol

  46. Joe (37):

    Global warming is good. A cold planet is bad.

    Kind of depends on where you live.

    And besides it appears the rising temps have been due to a solar maxima, whuich unfortunately means the earth will be cooling off.

    Probably not actually:

    http://www.skepticalscience.co.....-basic.htm

  47. Generating CSI with Faulty Logic

    I take a fair coin and toss it 5 times: HTTHT

    That’s one of the set of 2^5 possible sequences (probability 1/32).

    Now I copy that sequence, and modify one position at random.

    Say the first position: TTTHT

    Is that sequence a member of the original set of 2^5 sequences?

    Why or why not?

    What is the relevance, if any, for calculating CSI?

  48. Mung (49):

    Generating CSI with Faulty Logic

    I take a fair coin and toss it 5 times: HTTHT

    That’s one of the set of 2^5 possible sequences (probability 1/32).

    Now I copy that sequence, and modify one position at random.

    Say the first position: TTTHT

    Is that sequence a member of the original set of 2^5 sequences?

    Obviously since there are only 32 possible sequences. List them all before you start. Modifying a given sequence just gives you another one of the 32.

    Why or why not?

    There are only 32 possible sequences of 5 Hs and Ts.

    What is the relevance, if any, for calculating CSI?

    You tell me. If a randomly generated sequence is randomly modified and then arrives at a target or functional/meaningful sequence . . .
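The space is small enough to enumerate exhaustively; a quick sketch:

```python
from itertools import product

# All 2^5 = 32 possible sequences of five coin flips.
space = {''.join(seq) for seq in product('HT', repeat=5)}

print(len(space))        # 32
print('HTTHT' in space)  # True: the original toss
print('TTTHT' in space)  # True: the modified copy is still in the same set
```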

  49. Joe (50):

    A History of Solar Activity over Millennia

    I prefer my multiply-sourced, peer-reviewed science, I think. You should try reading more than one person’s opinion.

  50. I want to make several points after browsing through the blog entries for the parent topic of this thread:

    http://www.uncommondescent.com.....ent-434996

    It seems that the TSZ objector to design, AF, insists on the long since corrected canard that design is a “default” inference.

    1. I was impressed by several of the entries in this blog, in particular those authored by gpuccio, and especially those at #856 and #909.

    2. I would like to suggest that certain entries in certain blogs (like those 2 above) deserve to be saved or collected in a kind of structured (more or less) compendium of ID thoughts, principles and/or essays.

    3. I would say that the two entries above (#856 and #909), and maybe others, would make very good sense in a scientific paper on an ID topic. Gpuccio has a special gift for structuring clear ideas and principles and for obtaining and sharing with us significant insights into the core topics discussed. I think that many of his and others’ ideas expressed in these blogs will trigger interesting thoughts and associations in the minds of the readers and maybe plant the seeds that later germinate in other relevant blog entries or, who knows, papers or books. This is like a genuine on-line group brainstorming.

    4. Related to this, I was wondering if WordPress has the ability to “grade” a blog entry (with “Likes” or “Dislikes” or any other way). That may help authors get feedback on the value of their contributions as perceived by others and also, later, help newcomers go to the “high mark” entries in very long blogs – like that under discussion.

    5. What about giving the author who proposes a topic for discussion the ability to choose to start a moderated blog? The moderator may be the topic initiator/author, or an Editor, or another “principal” selected by the author or the editors. Again, I don’t know whether WordPress provides moderated blogging (or facilities for that), or whether it is rather a manner of operating for the editors. The MAIN ROLE OF THE MODERATOR in my view would be to:

    a. Maintain the dialog aligned with the main topic of discussion or to relevant sub-topics – as the moderator decides.

    b. The moderator should be able to warn the author of a blog entry that he/she is not providing an answer to a pending question/issue or challenge, and to remove (or ignore) the entry as irrelevant, either immediately or after a number of warnings.

    c. The moderator should direct the blog exchanges and dialog toward achieving concrete, specific goals or objectives for the issue at hand, and filter out the “noise” produced by perturbers, i.e. participants who are not well-intentioned and honest.

    d. I think too many times the energy and goodwill of ID bloggers are wasted on dialogs and exchanges that are useless and bring no benefit to the authors and readers, but only to the “enemies” disguised as posters.

    6. A topic author may propose another type of blog: collective collaboration for discussing and producing a collective essay on a particular ID subject. An example of such a Goal-Oriented topic might be the elaboration of a systematic hierarchy/list of topics in Intelligent Design. Or creating a Cell Model or a Replicator Model to be used later as the logical foundation to investigate evolution claims or cell biology research or cell biology research prognosis. Again, a good example for structuring a model or principles in investigating evolution claims are those entries at # 856 and #909 authored by gpuccio.

  51. To Allan Miller (at TSZ):

    You make a few points that should be answered.

    I do wonder what GP has in mind when he says “new protein domain” or “new biochemical function”? Does he have one that he considers ‘new’, and definitively inaccessible by the probabilistic resources available to any ancestors – something that can be investigated, rather than his personal, very general assumptions about the structure of protein space and its distribution of function?

    According to SCOP, there are about 2000 (1962 at present, but the number is constantly growing) protein superfamilies in the known proteome. Proteins included in different superfamilies are unrelated both at sequence and structure level, and share no obvious evolutionary relationship.

    New superfamilies keep emerging throughout the whole history of life, as can be seen here:

    http://www.plosone.org/article.....ne.0008378

    (Thank you, Zachriel, for providing me a quick reference to one of my favourite papers. I am afraid, however, that you have not really understood what it means. It is not about “evolution” of the domains, but rather about their emergence in natural history. I quote:

    “Notwithstanding, these data suggest that a large proportion of protein domains were invented in the root or after the separation of the three major superkingdoms but before the further differentiation of each lineage. When tracing outward along the tree from the root, the number of novel domains invented at each node decreases”. Emphasis mine.)

  52. To Allan Miller (at TSZ):

    Behe’s CCC calculation suffers from a lack of consideration of recombination too, incidentally. It’s a very important, sometimes underappreciated evolutionary force

    What you say about recombination makes sense, and I can agree. But I don’t think it can solve the fundamental problems about completely new information. Anyway, it can be reasonable to try to evaluate the real powers of recombination on some empirical basis, but it is certainly true, as you say, that it is a “sometimes underappreciated evolutionary force”. Underappreciated and not much supported by evidence, although often generically invoked.

    I understand that darwinists need to keep some faith in something, and as NS does not seem to help much, recombination can be of comfort.

    It is of some interest that my much cherished “rugged landscape” paper, after concluding about the powerlessness of NS to fully recover a previously existing function, even in extremely favourable lab conditions:

    “The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical” (emphasis mine)

    goes on with some wishful thinking, completely unsupported by any data in the paper, about a possible way out:

    “and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”

    As can be seen here:

    http://www.plosone.org/article.....ne.0000096

    However, Behe in TEOE makes an empirical evaluation of the powers of evolution from real-life examples. It can be correct or not, but it is not “a calculation”. In real life, recombination can certainly act, even if not calculated.

  53. Jerad:

    I prefer my multiply sourced, peer-reviewed science I think. You should try reading more than one person’s opinion.

    My link was not an opinion and was science. Perhaps you should try learning the difference between opinion and science.

  54. Speaking of recombination- how was it determined that recombination is a blind and undirected chemical process?

  55. Jerad,

    That paper by Ilya G. Usoskin of the Sodankyla Geophysical Observatory at the University of Oulu, Finland was published in Living Reviews of Solar Physics.

    IOW it IS peer-reviewed science.

    Now what do you have to say?

  56. Jerad:

    Are you and/or anyone of your acquaintance taking me up on the offer of an up to 6,000 word essay for UD presenting your view and empirically warranted grounding of the blind watchmaker thesis style account of origins?

    I think about a week has now passed since I made the offer.

    KF

  57. And Zachriel with the equivocation:

    And, for the purposes of this discussion, it shows how and why evolutionary processes can be so effective at finding functional proteins.

    Intelligently designed evolutionary processes or blind watchmaker evolutionary processes? Or is cowardly equivocation still the best you and your ilk can provide?

  58. And what about the proteins whose folding requires a chaperone or chaperones?

    Which came first, according to the modern synthesis- the chaperones required for folding long polypeptides or the long polypeptides that wouldn’t fold until a chaperone came along to aid in that process?

  59. KF (60):

    Are you and/or anyone of your acquaintance taking me up on the offer of an up to 6,000 word essay for UD presenting your view and empirically warranted grounding of the blind watchmaker thesis style account of origins?

    I think about a week has now passed since I made the offer.

    Well, I’m not. And I told you I wasn’t going to at the time very soon after you made the offer. I can’t speak for anyone else as I’m not in contact with any of them so I’ll leave it up to them to decide.

    I’ll just reiterate that I can’t possibly hope to match things written by those much better versed in evolutionary theory than I am. Nor do I have anything new or interesting to say. I’ve been trying to stick to the math stuff since then.

  60. Joe (59):

    That paper by Ilya G. Usoskin of the Sodankyla Geophysical Observatory at the University of Oulu, Finland was published in Living Reviews of Solar Physics.

    IOW it IS peer-reviewed science.

    Now what do you have to say?

    I had a look at the journal and it does look fairly legit, I must say. It sounds more like a review than research, but that’s quibbling. I don’t know how others in the field viewed this work, but I do know that a great many specialists in that field have looked at the data and come to a different conclusion.

    So, even a peer-reviewed journal article can still represent a minority viewpoint, which is why I suggested you read some other opinions. I haven’t got the background to look at the big picture on this issue, so I look to the consensus of many others who do have the background.

    I try not to make up my mind based on ideology or one scientific paper, I wait for some kind of group agreement to arise.

  61. InVivoVeritas:

    Thank you for the kind words.

    Your comments about brainstorming and about trying to build a collective set of detailed arguments are very interesting. Maybe some organized project could be proposed.

    I must say that I am extremely grateful to our “adversaries”, especially the best of them, because they truly stimulate and inspire the discussions about ID.

  62. Intelligently designed evolutionary processes or blind watchmaker evolutionary processes?

    Zachriel:

    In this case, we’re concerned with typical evolutionary algorithms based on random mutation and recombination.

    Biased towards a goal, which means you are talking about Intelligently designed evolutionary processes. And recombination is an intelligently designed evolutionary process- see Dr Spetner’s “Not By Chance”, which means you are definitely talking about intelligent design evolution.

    Thank you for clearing that up. Carry on…

  63. In response to kairosfocus challenge, Zachriel posts:

    Try Darwin’s Origin of Species (1859). It’s a bit dated and longer than 6,000 words, (the 6th edition is 190,000 words), but Darwin considered it just a long abstract, and it still makes for a powerful argument.

    But the evidence gathered since then has not borne out his “powerful argument”, which makes it impotent. There is still nothing that supports the claim of natural selection being a designer mimic.

    That is what kairosfocus is looking for – you could start with telling us how to test the premise that any bacterial flagellum evolved via accumulations of random mutations. My prediction is that you won’t.

  64. There is considerable evidence, much of it in Origin of Species, that shows that natural selection can be a mechanism of adaptation.

    Nope- ns is always just assumed, never demonstrated.

    Artificial selection shows that there are selectable intermediaries between quite different forms.

    Artificial selection is NOT natural selection- AS has actual selecting taking place whereas NS is just a result of 3 processes.

    Peter and Rosemary Grant’s work on finches in the Galápagos Islands shows how this works in nature.

    1- NS requires that the change be random/ due to chance and no one has demonstrated that wrt finches

    2- Still no designer mimic

    3- baraminology is OK with adaptations


  65. Biased towards a goal, which means you are talking about Intelligently designed evolutionary processes. And recombination is an intelligently designed evolutionary process- see Dr Spetner’s “Not By Chance”, which means you are definitely talking about intelligent design evolution.

    Or navigate a fitness landscape (which may or may not be dynamic), which is sufficient to understand certain basics of the process, such as the ability of recombination to create novelty.

    Please define your use of “fitness” and also how it is you determined that recombination is a blind watchmaker process.

  66. Artificial selection is NOT natural selection- AS has actual selecting taking place whereas NS is just a result of 3 processes.

    No, it’s not. However, it does show that selectable intermediaries exist, and that selection for a simple trait, such as size, will result in multiple genetic and physiological changes.

    Nature doesn’t select and natural selection could never produce a toy poodle even given selectable intermediates.


    NS requires that the change be random/ due to chance and no one has demonstrated that wrt finches

    Natural selection works on existing variations.

    Natural selection is a result of heritable random variation that leads to differential reproduction:

    “Natural selection is the result of differences in survival and reproduction among individuals of a population that vary in one or more heritable traits.” Page 11 “Biology: Concepts and Applications” Starr fifth edition

    “Natural selection is the simple result of variation, differential reproduction, and heredity—it is mindless and mechanistic.” UC Berkeley

    So, yes there needs to be variation and it needs to be random/ ie a chance event.

    The change isn’t random, but due to changes in the environment.

    The variation is the change, and according to the modern synthesis it is entirely by chance. Changes due to the environment would be directed changes a la Dr Spetner’s “built-in responses to environmental cues” – IOW more evolution by design. Thanks.

    Also with natural selection whatever is good enough survives to reproduce. And with cooperation even the not good enough can make it.


    Please define your use of “fitness” and also how it is you determined that recombination is a blind watchmaker process.

    In an evolutionary algorithm, the fitness landscape is explicitly defined, and recombination is random.

    It is duly noted that you refuse to define your terms and can just declare what needs to be explained.

    BTW- Fitness, wrt biology, refers to reproductive success, an after-the-fact assessment.
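
    The contrast being drawn here can be made concrete. In an evolutionary algorithm the fitness function is a piece of code the programmer writes in advance; the sketch below is a minimal illustrative example using the standard “OneMax” toy problem (my choice of example, not anything from any program discussed in the thread):

```python
# Minimal sketch: in an evolutionary algorithm, "fitness" is an
# explicitly defined function chosen in advance by the programmer.
# The OneMax toy problem here is an illustrative assumption, not
# taken from any program discussed in the thread.

def fitness(genome):
    """Score a binary genome by counting its 1-bits (OneMax)."""
    return sum(genome)

population = [[0, 1, 1], [1, 1, 1], [0, 0, 0]]
best = max(population, key=fitness)
print(best)  # [1, 1, 1] scores highest
```

    Nothing in this sketch is assessed after the fact: the scoring rule exists before any genome does, which is the distinction the comment is pointing at.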

  67. gpuccio:

    What you say about recombination as sense, and I can agree. But I don’t think it can solve the fundamental problems about completely new information.

    Where did the protein domains come from that are required for recombination? :)

  68. onlooker:

    The genomes are modeled as replicators.

    And the ability to replicate is the very thing that needs to be explained.

    Well, in most GAs reproduction is stochastic, just as in real environments.

    Wrong. In GAs and the environment reproduction proceeds by design. So there, I match your bald declaration.

    The model shows that the mechanisms of the modern synthesis are quite capable of generating functional complexity in excess of that required by your dFSCI.

    Nope- 1- they are NOT the mechanisms of the modern synthesis 2- no functional complexity was constructed 3- she starts with the very thing that needs explaining.

    It is quite telling that onlooker so uncritically accepts what Lizzie sez and acts like a belligerent little child when someone proposes a semiotic theory for ID.


  69. NS requires that the change be random/ due to chance

    Jon F

    WTF?? No it doesn’t. NS could operate perfectly well on variations introduced by a designer and biased in any way.

    Can’t call it “natural” selection if the mutations are directed. Darwin’s whole point with natural selection is design WITHOUT a designer. WTF, indeed…

  70. InVivoVeritas:

    I second the words of gpuccio @65. Thank your for your comments.

    I must say that I am extremely grateful to our “adversaries”, especially the best of them, because they truly stimulate and inspire the discussions about ID.

    Some of them. :)

    I wonder if more of them would be willing to post here if a thread was moderated, or even if they could moderate a thread.

    kf does have his 6k word essay challenge up.

  71. Zach:

    There are selectable intermediaries between wolves and toy poodles, and selection for very general traits (size, curly hair, docility) can result in the evolution of complex genetic and physiological changes.

    Evolution by design.

    There has to be variation for natural selection to work, but it doesn’t have to be the result of a chance event.

    It cannot be planned/ directed and still be natural selection. A designer mimic that uses designed processes is a contradiction.

    It may already exist in the population as part of the inherited variation.

    Well of course it already exists- it is ONE of the INPUTS.

    It could be inserted into the genome by magic, and natural selection would still work.

    That is incorrect, unless nature performs magic. However if your position is correct nature does indeed perform magic so you may have a point.

    Is that what you are saying? Because at face value natural selection cannot have any sort of designer input at all.

    You seem to be confusing the sources of variation with natural selection

    No, YOU are confused because YOU do not understand natural selection. OTOH I posted that variation is one of the inputs with natural selection being the output. And I supported that claim with two references.

    such as when you said “NS requires that the change be random/ due to chance”.

    Yes, because those random variations are a required input for natural selection. You appear to have difficulties understanding natural selection, as evidenced by:

    Natural selection acts on existing variations,

    1- Natural selection is a result and doesn’t act on anything:

    The Origin of Theoretical Population Genetics (University of Chicago Press, 1971), reissued in 2001 by William Provine:

    Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing….Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now. Creationists have discovered our empty “natural selection” language, and the “actions” of natural selection make huge, vulnerable targets. (pp. 199-200)

    Thanks for the honesty Will.

    …from whatever source.

    2- Natural selection cannot be a designer mimic and have designed inputs. The whole point of NS is design WITHOUT a designer

  72. Allan Miller is confused:

    If you are right about GAs, it is one hell of a coincidence that a method of exploring certain kinds of digital space using only the biological observables of differentials in birth and death, mutation and (optionally) recombination should have such power that they are popular tools in engineering and maths, as well as biological applications unrelated to modelling evolutionary mechanism (eg phylogeny) … and yet you think the algorithm has NO power in the very realm that inspired it – biology?

    Strange that I have been positing for YEARS that GAs are what are running living organisms- nature doesn’t create GAs, Allan. Algorithms are a thing born in minds. GAs are what control Dr Spetner’s “built-in responses to environmental cues”- they are software, whereas nature can only possibly account for hardware.

    Front-load starting populations with GAs, provide some initial resources along with mechanisms of recycling, set it and forget it.

  73. I disagree, Allan is not at all confused.

    The question is, from whence do GA’s derive their “power.”

    They say it’s from the mechanism alone. I say it’s from the mechanism + design.

    Now if they want to admit that there is design in nature, that certainly explains the “power” that exists in nature.

    If they claim design has nothing to do with it, then they must show that the mechanism alone is sufficient.

    This they cannot do, at least not using GA’s, lol.

    … and yet you think the algorithm has NO power in the very realm that inspired it – biology?

    Power to do what?

  74. Nevertheless, it demonstrates the existence of selectable intermediaries.

    Which design can get to and nature cannot. IOW it demonstrates the severe limits of natural selection.

    The source of variation doesn’t have to be random for natural selection to still occur.

    If the source of variation is planned then it cannot be natural selection, by definition.

    Darwin posited a non-random theory of Pangenesis, for instance.

    Hey look, Halley’s Comet! Darwin always referred to variation by chance. Mayr, in “What Evolution Is”, says teleology is not allowed. He also says:

    The first step in selection, the production of genetic variation, is almost exclusively a chance phenomenon except that the nature of the changes at a given locus is strongly constrained.

    And again I will add, so you can continue to ignore, natural selection was proposed as a designer mimic, ie design WITHOUT a designer.

    You’re confusing two different processes, the sources of variation and selection.

    Nope. How can I be when I told you exactly what each is wrt each other?

  75. To Allan Miller (at TSZ):

    I had written a long answer to you that was completely erased from the form because of some wrong typing before I could post it (one of the least intelligent forms of RV!).

    Now I am tired and frustrated. I hope I can find the goodwill to write it again tomorrow…

  76. I’ve changed my mind. Allan is confused.

    If you are right about GAs, it is one hell of a coincidence that a method of exploring certain kinds of digital space using only the biological observables of differentials in birth and death, mutation and (optionally) recombination should have such power…

    GA’s do not use only differentials in birth and death, mutation and (optionally) recombination.

    If they did, it would indeed be miraculous to find them at all useful in fields such as math and engineering.

    Where do you people come up with this stuff. Seriously.

    I like how this guy isn’t afraid to make it explicit:

    The chromosome should in some way contain information about solution which it represents.

    http://www.obitko.com/tutorial.....rators.php

    Since populations implicitly contain much more information than simply the individual fitness scores, GAs combine the good information hidden in a solution with good information from another solution to produce new solutions with good information inherited from both parents, inevitably (hopefully) leading towards optimality.

    http://www.doc.ic.ac.uk/~nd/su.....ementation
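
    The quoted description of combining information from two parent solutions corresponds to the standard crossover operator. A minimal sketch (my own illustrative example, not code from the linked tutorial):

```python
import random

# Single-point crossover: the child takes a prefix from one parent
# and the suffix from the other, so information from both solutions
# is combined in one offspring. Illustrative sketch only.

def crossover(parent_a, parent_b, rng):
    point = rng.randrange(1, len(parent_a))  # cut somewhere inside
    return parent_a[:point] + parent_b[point:]

rng = random.Random(0)
child = crossover([1, 1, 1, 1], [0, 0, 0, 0], rng)
# child is all 1s up to the cut point, all 0s after it
```

    Wherever the cut lands, the child genome carries material from both parents, which is the “good information inherited from both parents” the quote describes.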

  78. And today’s Junk for Brains winner is, onlooker!

  79. Allan@TSZ:

    If you are right about GAs, it is one hell of a coincidence that a method of exploring certain kinds of digital space using only the biological observables of differentials in birth and death, mutation and (optionally) recombination should have such power…

    Mung:

    GA’s do not use only differentials in birth and death, mutation and (optionally) recombination.

    Zachriel@TSZ:

    Differential refers to differences due to relative fitness, usually defined by a fitness function or map.

    I know what differential refers to.

    Let me rephrase:

    There is more to a GA exploring certain kinds of digital space than differences due to relative fitness (usually defined by a fitness function or map), mutation and (optionally) recombination.

    For example, potential solutions must be encoded into a “chromosome.”

    Encoding potential solutions into a chromosome implies there is a problem to be solved.

    Information about which potential solutions are more likely to solve the problem must be implemented.

    There’s more to a GA than just the three things Allan listed and claimed to be the only things used to explore the “digital space.”

    If I wasn’t clear before, I hope that helps.

  80. Zachriel:

    Just so we’re clear, you agree that there are selectable intermediaries between wolves and toy poodles?

    Just so we are clear: nature does NOT select, meaning the intermediaries are artificially selectable only.

    If the source of variation is planned then it cannot be natural selection, by definition.

    That is false.

    What I said is true. You are nobody to say otherwise and you sure as heck cannot produce a reference to support your claim.

    Again, you are confusing two different processes;

    That is false.

    the sources of variation and natural selection.

    Nope. I have made it clear which is which and I have supported my claim with references. So stuff it Zach.

    For instance, if a genetically modified organism escapes into the natural environment, it will be subject to natural selection just like any other phenotype.

    It will be subject to something but again natural selection is a result of three processes. You don’t have any idea what natural selection is.

    Darwin always referred to variation by chance.

    That is also false. Darwin proposed a non-random source of variation called Pangenesis, a speculative theory which included Lamarckian inheritance of acquired traits.

    Page number of “On the Origin of Species…” in which he states that the variation for natural selection is/can be non-random?

    What part of being a designer MIMIC allows for an actual designer?

    And nice of you to cowardly avoid my Mayr reference.

    So to recap I provide references to support what I claim and Zacho just repeats his refuted nonsense. And we are the people who don’t understand the theory of evolution. :roll:

    So how about it Zach? Do you have the sack to actually ante up some references to support your claims?

  81. Also natural selection is supposed to be blind and mindless. And that cannot be with directed mutations.

    IOW only if one totally redefines natural selection can one say that natural selection allows for artificial inputs

  82. Mung World

    ok, so I created my own version of Lizzie’s program.

    took less than 10 seconds
    1522 generations

    What’s the big deal?

  83. keiths@TSZ

    Mung is under the impression that a GA has to be seeded with potential solutions

    It’s not an “impression” that I am under. It’s a fact.

    No, Mung, potential solutions do not have to be encoded into a “chromosome”. That is optional. You can start with a purely random “genome”, as Lizzie does in her program.

    Lizzie has encoded a potential solution to her problem in each member of her starting population. It matters not that they were randomly generated.

    1.) If we change her encoding, her program will not work. It depends upon sequences of 0′s and 1′s. Just try changing that and see what happens.

    2.) There is at least the possibility that a solution will be found among the first 100 randomly generated genomes, though she doesn’t actually check to see if that is the case.

    Each chromosome can be thought of as a point in the search space of candidate solutions. The fitness of a chromosome depends on how well that chromosome solves the problem at hand. For that to happen the potential solution must be encoded in the chromosome.

    Maybe someone over there at TSZ will be kind to you before you put your foot in it any more than you already have.
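
    The second point above can be sketched directly: a random starting population already consists of encoded candidate solutions, so it can in principle be checked for a winner before any evolution runs. Population size, genome length, and the success test below are all illustrative assumptions of mine:

```python
import random

# A randomly generated starting population is already a set of
# encoded candidate solutions, so it can be checked for a solution
# before any mutation or selection takes place. All parameters here
# are illustrative assumptions.

def random_population(rng, size=100, length=16):
    return [[rng.randint(0, 1) for _ in range(length)] for _ in range(size)]

def solves(genome):
    """Toy success criterion: every bit set."""
    return all(b == 1 for b in genome)

rng = random.Random(42)
population = random_population(rng)
already_solved = any(solves(g) for g in population)
```
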

  84. Zachriel@TSZ

    Some novel protein domains are available to completely random processes.

    Novel. Would that be like, new?

    Maybe you and Allan can talk:

    http://theskepticalzone.com/wp.....ment-16208

  85. chromosome encoding

    Each problem solver is a chromosome. A position, or set of positions in a chromosome is called a gene. The possible values (from a fixed set of symbols) of a gene are known as alleles. In most genetic algorithm implementations the set of symbols is {0, 1} and chromosome lengths are fixed. Most implementations also use fixed population sizes.

    The most critical problem in applying a genetic algorithm is in finding a suitable encoding of the examples in the problem domain to a chromosome.

    http://www.cse.unsw.edu.au/~bi...../05ga.html

    In genetic algorithms, a chromosome (also sometimes called a genome) is a set of parameters which define a proposed solution to the problem that the genetic algorithm is trying to solve.

    The design of the chromosome and its parameters is by necessity specific to the problem to be solved.

    http://en.wikipedia.org/wiki/C.....gorithm%29
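
    The problem-specific nature of the encoding can be illustrated with a toy decode step (my own example, not from the pages quoted above): a chromosome is just a bitstring until a decoding convention ties it to the problem being solved.

```python
# Illustrative sketch: a chromosome means nothing until a
# problem-specific decoding gives it meaning. The convention here
# (an assumption for this example) reads the bits as a binary
# fraction in [0, 1).

def decode(chromosome):
    value = int("".join(str(b) for b in chromosome), 2)
    return value / 2 ** len(chromosome)

print(decode([1, 0, 0, 0]))  # 1000 in binary is 8; 8/16 = 0.5
```

    A different problem would demand a different decode, which is why the quoted sources call the encoding the most critical design decision in applying a GA.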

  86. To Allan Miller (at TSZ):

    So, back to the task.

    I certainly agree that recombination happens. Exon shuffling and domain shuffling can have a role in biological reality. And I do believe that we must try to understand what that role is. Unfortunately, not much is known about the causes and mechanisms of recombination. The problem is that we, in ID, like to test the explanatory power of any proposed model, instead of accepting some explanation just because someone likes it.

    If recombination is truly random, and if, as you say, it can occur at any point of the genome, then we should evaluate its probabilistic power by sizing the space of all possible recombinations, and of the functional ones. That is probably a very difficult task. That’s why I usually stick to the model of single-domain proteins (basic protein domains) in my argument. It is simpler, and more tractable.

    For recombination, we should carefully consider the emerging role of finalistic adaptation mechanisms. It is possible that certain recombinations are favoured versus others by the structure itself of the genome, for instance whole gene or whole exon recombinations could be favoured versus purely random ones. There can be genomic sites that make recombination more likely. All that could increment the power of recombination, but should be explained as an adaptive mechanism already present in the existing genome.

    Finally, I doubt that recombination can have any logical role in the explanation of basic protein domains, because they are functional units that cannot be deconstructed into parts that would yield selectable biological functions.

    You say:

    Behe’s CCC argument relies on serial mutation 1 then 2 or 2 then 1, with no benefit till both occur in the same individual. Calculations show it to be of low (though not vanishing) probability. But since 1 and 2 must necessarily be at different positions in the gene, recombination can occur between them, increasing the chance substantially, even though recombination will cause occasional loss of 1-2 links.

    I think you are wrong here. First of all, I would like to restate, for clarity, that Behe derives his conclusions from observation of empirical data, and then he argues that those observations are in line with his calculations. But that is not the real point.

    The real point is that, while your discourse about recombination can make some sense for the recombination of functional elements, it is of no importance in the case of individual mutations that have no function until they combine into a more complex output. The important point is: a recombination can certainly join two mutations, but it can join any set of two mutations with the same probability, unless we can show that some mutations, and in particular those that are necessary for the future function, recombine more frequently than others. IOWs, recombination in this case does not alter the probabilistic scenario.

    This is an error often made by many darwinists. A random effect does not change the probabilities of a specific output, unless we can demonstrate some explicit connection between the effect and the output. That’s exactly the reason why the so often invoked neutral mutations and drift have no relevance in the computation of dFSI. They are random effects, and they favour no specific result.
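
    The two-mutation scenario being argued over can at least be stated as a toy model. The sketch below simulates two sites that mutate independently, with an optional recombination step that can join single mutants; every rate and population size is an illustrative assumption of mine, and the sketch takes no position on the argument itself:

```python
import random

# Toy two-locus model of the scenario under discussion: mutations
# arise independently at two sites, and an optional recombination
# step can join a site-1 mutant with a site-2 mutant. All rates and
# sizes are illustrative assumptions.

def generations_until_double_mutant(rng, pop=200, mu=0.01,
                                    recombine=True, max_gen=10_000):
    population = [[0, 0] for _ in range(pop)]
    for gen in range(1, max_gen + 1):
        for genome in population:
            for site in (0, 1):
                if rng.random() < mu:
                    genome[site] = 1
        if recombine:
            # swap the second site between two random genomes
            a, b = rng.sample(range(pop), 2)
            population[a][1], population[b][1] = (population[b][1],
                                                  population[a][1])
        if any(genome == [1, 1] for genome in population):
            return gen
    return None  # no double mutant within max_gen

rng = random.Random(1)
waited = generations_until_double_mutant(rng)
```

    Running it with `recombine=True` and `recombine=False` and comparing waiting times is one way to probe, under these toy assumptions, whether the recombination step changes the probabilistic picture.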

    So, again, I see no relevance of the recombination mechanism for the emergence of a new protein domain.

    Other mechanisms clearly occur that cause fragments to be moved greater distances in the genome. The existence of long areas of sequence identity (or close enough to be revealed by statistical test) in different genes – the very thing that enables us to declare homology of a ‘domain’ – is regarded as evidence of the within-genome common descent of that sequence by duplication, which necessarily involves a recombination event.

    I have absolutely no problem with these concepts. I have absolutely no problem with common descent.

    Other explanations for that homology are pretty ad hoc – one could infer that it was moved (or tooled in situ) by a Designer – but what would distinguish identity from such a source from that caused by known mechanisms of recombination?

    I have no intention of proposing any ad hoc explanation for those data. I will not invoke any Designer for them, unless and until I can show that dFSCI is implicit in them. At present, I have no reason to believe such a thing for the existence of homologous genes.

    Finally, you say:

    If you are right about GAs, it is one hell of a coincidence that a method of exploring certain kinds of digital space using only the biological observables of differentials in birth and death, mutation and (optionally) recombination should have such power that they are popular tools in engineering and maths, as well as biological applications unrelated to modelling evolutionary mechanism (eg phylogeny) … and yet you think the algorithm has NO power in the very realm that inspired it – biology? And despite working fine in other statistical fields, according to some anti-common-descenters in ID their use in tree-building leads inevitably to false phylogenies …! Every time they are applied to the biology that inspired them, they apparently fall to bits. One hell of a coincidence.

    I am not sure I understand your point.

    I have clearly shown that Lizzie’s GA is an implementation of IS, and not a model of NS. That GA has nothing to do with
    “the biological observables of differentials in birth and death, mutation and (optionally) recombination”. It has, instead, everything to do with the much more general logical and mathematical concepts of random generation and variation of strings, and intelligent selection according to a mathematically defined target. I suppose that many other more popular GAs have similar characteristics. But I am neither an expert nor a fan of GAs.

    I am not saying that GAs are useless. Lizzie’s GA finds its solution (although it could be found much more easily by top-down reasoning). Even the infamous Weasel GA finds a solution: the Weasel phrase itself, which it already knew. But other GAs can certainly be more useful than that.
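    The point about the Weasel can be made concrete in a few lines of Python. This is a minimal sketch of a Weasel-style program (my own illustration, not Dawkins’ original code): note that the fitness function contains the complete target phrase, so selection is measuring distance to an answer the program already has.

```python
import random
import string

random.seed(1)  # fixed seed, for a reproducible illustration

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # The "fitness function" already knows the full target phrase:
    # it simply counts matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    # Selection keeps the best of 100 mutated copies, measured
    # against the known target: intelligent selection.
    parent = max((mutate(parent) for _ in range(100)), key=fitness)
    generations += 1
```

    Run it and it converges quickly; the “search” succeeds because the selector is handed the solution in advance, which is exactly the sense in which such a GA models IS rather than NS.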

    If I believed that algorithms are useless, I would not use a computer. Algorithms can do very remarkable things. And GAs are simply algorithms that use, at some point, some random variation. So, they can certainly be very useful, for specific problems. But that does not mean that they are giving us any useful information about biological systems, or NS.

    If a GA really did model in some way, even grossly, the effects we expect from NS, it could probably show, very trivially, how some microevolutionary events take place, like antibiotic resistance and other events where minimal variation is functional in a certain environment. And nothing more. But we already know that from the observation of spontaneous biological systems.

    What any GA cannot do is generate truly new dFSCI for a truly new function, about which the original algorithm has no direct or indirect information.

    Finally, I have no objections to using GAs in phylogeny. If they work well, I am perfectly fine with that. As phylogeny is inferred from homology and other explicit mechanisms, there is no problem in using some GA that correctly models those mechanisms. The results can be more or less correct, but they are certainly potentially valid and interesting.

    As I have already said, I have absolutely no problem with common descent. Indeed, I believe that common descent is the best scientific explanation for what we observe, and that it is an invaluable component of a credible ID scenario.

  87. To onlooker (at TSZ):

    Any fitness function in any GA is intelligent selection, and in no way it models NS.

    Please, do not consider that statement any more. Keiths is right; it was a wrong generalization. I have given a very generic example of how a fitness function could be built that, while being essentially useless, at least would not be an implementation of IS, and could resemble generically what we expect from NS (IOWs, it would add nothing to what we already know).

    The correct concept is as follows:

    It is completely wrong to model NS using IS, because they have different form and power.

    As I said, you help me to refine my concepts, and I appreciate that.

    Before someone states that I am changing arguments, I would suggest that you read again my original definitions of IS and NS, from which this statement can very clearly be derived:

    “d) NS is different from IS (intelligent selection), but only in one sense, and in power:

    d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the basis of the measured function. Intelligent selection is very powerful and flexible (whatever Petruska may think). It can select for any measurable function, and develop it in relatively short times.

    d2) NS is selection based only on fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak).

    Those are the differences. And believe me, they are big differences indeed.”
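    The asymmetry between d1 and d2 can be caricatured in a few lines of code. This is my own toy illustration, not a population-genetics model; the drift threshold of roughly 1/(2N) is the standard rule of thumb for when a selection coefficient is large enough to overcome genetic drift.

```python
import random

def intelligent_selection(population, measure):
    # d1: the designer measures the desired function with maximum
    # sensitivity and keeps the best candidate, however small its edge.
    return max(population, key=measure)

def natural_selection(population, fitness, pop_size=10_000):
    # d2 (toy version): a variant's advantage is "visible" to NS only
    # if its selection coefficient s exceeds the drift scale ~1/(2N);
    # otherwise the outcome is effectively a random draw.
    s = max(map(fitness, population)) - min(map(fitness, population))
    if s > 1.0 / (2 * pop_size):
        return max(population, key=fitness)
    return random.choice(population)

# A tiny edge (s = 1e-6) is always caught by IS, but for NS it is
# below the drift threshold, so the result is a coin toss:
pop = [0.0, 1e-6]
ident = lambda x: x
```

    The sketch only dramatizes the claimed difference in sensitivity; it says nothing about how often such small advantages arise.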

    As everyone can see, in these definitions there is all the logic of my detailed argument about Lizzie’s GA, where I show that it is simply an implementation of IS, and not a model of NS.

    Do you agree that a model must resemble the logical form and power of what it is modeling, to be valid?

    I’m curious, how would you measure functional complexity in such an environment? Would it simply be the length in bits of the digital organisms? If an organism with sufficient functional complexity to meet your dFSCI threshold were to appear, would you consider it to have dFSCI or would the fact that it arose through evolutionary mechanisms, which might even be tracked mutation by mutation, mean that the dFSCI medal could never be earned?

    It’s easy. I would proceed like Lenski. I would “freeze” (copy) the virus periodically to examine its code. If and when any functional string of code expresses a new function that helps the virus to reproduce, and therefore partially or totally replaces the simpler version, then it will be easy enough to evaluate the functional complexity of that new string of code, with the usual methods detailed at the beginning of your thread at TSZ.

  88. To onlooker (at TSZ):

    A distinction without a difference. The model shows that the mechanisms of the modern synthesis are quite capable of generating functional complexity in excess of that required by your dFSCI.

    This is exactly the type of wrong statement that has prompted me to analyze in detail this issue. Have you read my post #910 in the old thread? Please, refer to it for any following discussion on this.

  89. To Zachriel (at TSZ):

    Some novel protein domains are available to completely random processes. However, the natural history is not well-documented.

    What do you mean? To what are you referring here?

  90. To Zachriel (at TSZ):

    Keep in mind that your “don’t think” encompasses all evolutionary algorithms. Evolutionary algorithms, such as Word Mutagenation, can show you how and why recombination is such a powerful force for novelty.

    Can you give us the code? Can we discuss the oracles in it?

  91. To Allan Miller (at TSZ):

    You keep linking us to that paper. I have already recognized that it is an interesting paper, and I have also given some brief comments. But, as you go on linking it as though it were the answer to all questions, I have to remind readers of what you already acknowledged from the start, but many may have missed. From the paper:

    “As an initial step toward achieving this goal, we probed the ability of a collection of >10^6 de novo designed proteins to provide biological functions necessary to sustain cell growth.”

    With all its limits, that paper is a good demonstration of how powerful human top down design can be in protein engineering.

    Petrushka, are you listening? :)

  92. To Allan Miller (at TSZ):

    I maintain that your reasoning is wrong.

    Please consider that a system tests a limited number of new states through random variation. Let’s say it tests 10^9 new states in a certain time.

    So, the probability of A and B being both present in the same tested state depends on the probability of that particular state versus all possible states that can be tested, and on the probabilistic resources of the system (in this case, 10^9 attempts). It does not depend on what kind of random variation we use to get to new states. Unless, as I said, you can show that some method of variation favours that particular state.

    That does not seem to be the case for a specific set of two mutations where each mutation, by itself, is in no way specific and is completely non-functional.

    Therefore, your reasoning is wrong.
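    The arithmetic behind this argument is straightforward. A sketch with hypothetical numbers (the 35-residue space is my own illustrative choice, not a figure from the discussion): if each tested state is drawn uniformly from a space of W states, the chance of ever hitting one specific target state in N attempts is about N/W, whatever mechanism (point mutation, recombination, etc.) produced the states, so long as it does not bias the sampling toward the target.

```python
# Hypothetical numbers, for illustration only.
W = 20 ** 35   # states: all sequences of 35 residues over 20 amino acids
N = 10 ** 9    # probabilistic resources: states actually tested

# The exact value is p = 1 - (1 - 1/W)**N, but 1/W underflows double
# precision here, so we use the standard approximation p ≈ N/W (N << W).
p = N / W
print(f"p ≈ {p:.3e}")
```

    The number itself matters less than the structure of the formula: the mechanism of variation appears nowhere in it, only the sampling distribution does.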

  93. To Allan Miller (at TSZ):

    And domains can be deconstructed. Four amino acids will make a turn of a helix.

    Everything can be deconstructed. What I said is that protein domains cannot be deconstructed into smaller, functional, naturally selectable elements. Can you please explain how a turn of a helix can give reproductive advantage?

    Duplicate that ‘proto-domain’ a few times and you have an extended helix, 50, 100 bases long …

    And why should RV duplicate that “proto-domain”, and not any other possible sequence of four amino acids? You are making here exactly the same logical error that I have discussed in my previous post.

    and the ID-er comes along and declares that the domain is irreducible complex – for, if you remove it from the modern protein, or even chop it back to 4 bases, it ceases to work!

    Which is simply true.

  94. To Allan Miller (at TSZ):

    Note that I am asking GP what he considers ‘new’, not denying that anything in biology can ever be considered such.

    And I have already answered: I consider new each new protein domain of a new protein superfamily or family emerging throughout natural history, with no sequence and structure similarity with what existed before. IOWs, for example, each of the 3464 domains listed in this paper (which indeed uses the family level).

    There appears to be no significant mechanism to introduce new DNA sequence other than through template copying and fragment shifting,

    And so? The problem is not how new DNA sequence is introduced, but the cause of the introduction: was it RV or design that caused the variation?

  95. The paper was the usual one:

    http://www.plosone.org/article.....ne.0008378

  96. F/N: I have now put up the essay challenge as a full post. KF


  97. Also, natural selection is supposed to be blind and mindless. And that cannot be the case with directed mutations.

    Allan Miller:

    God, Joe – Variation and Selection are two different things!

    I know that Allan. And I also know that you cannot have natural selection without variation and it cannot be NATURAL selection if the mutations/ variations are directed.

    Explicitly chosen mutations can still be filtered by the blind and mindless process (which could not be otherwise, unless there is also an Intelligent Selector with a population-wide overview) of one type leaving more or fewer offspring than another.

    LoL! The processes of natural selection are: variation, heredity and fecundity, meaning natural selection is the RESULT of those three processes, Allan. How long have YOU been discussing evolution and still don’t know that? It means, Allan, that if one of the inputs is NOT blind and mindless, then NS is NOT blind and mindless.

    But thanks for proving that you cannot even connect the dots.

  98. To the TSZ ilk-

    GAs are a DESIGN mechanism, period. And NOTHING you can say will ever change that fact.

    So please, keep hanging your position on a known design mechanism. It not only exposes your ignorance but also exposes your desperation.

  99. Zachriel:

    Darwin proposed a non-random source of variation called Pangenesis, a speculative theory which included Lamarckian inheritance of acquired traits.

    In what way is Lamarckian inheritance non-random? For example, if a man loses his arm in an accident, an acquired trait, that would be random.

  100. Zachriel:

    That isn’t necessary to show that recombination is a powerful mechanism for generating novelty.

    It is necessary to show that recombination is a non-telic process. And that is something that you cannot do.

    Zachriel:

    Natural selection is based on the reproductive fitness of the replicator.

    1- Fitness = reproductive success

    2- Natural selection requires the fitness be due to heritable random variation(s)

  101. In what way is Lamarckian inheritance non-random? For example, if a man loses his arm in an accident, an acquired trait, that would be random.

    A mouse losing a tail is not a heritable trait. (Weismann, 1899).

    I never said it was. What I said is a man losing an arm is an ACQUIRED trait. Also I noticed that you avoided answering the question.

    GAs are a DESIGN mechanism, period.

    So are weather simulations and calculations of planetary orbits.

    And the weather is the result of a designed planet and planetary orbits are the result of a designed universe.

    Natural selection requires the fitness be due to heritable random variation(s)

    We already pointed to a simple counterexample.

    Of what?

    If a genetically modified organism enters the natural environment, it would be subject to natural selection. For that matter, so would a domestic dog entering the wild, à la The Call of the Wild.

    It would be subject to a result? What does that even mean? Yes it will now be subject to natural environmental pressures, but that does nothing to any claim I have made.
    And I noticed that you still refuse to provide any references even though I requested that you do so.

    Telic Thoughts has you pegged- you are an insipid troll.

  102. Zachriel can’t seem to follow what he(?) sez from one day to the next-

    Zachriel:

    Darwin proposed a non-random source of variation called Pangenesis, a speculative theory which included Lamarckian inheritance of acquired traits.

    In what way is Lamarckian inheritance non-random? For example, if a man loses his arm in an accident, an acquired trait, that would be random.

    Zachriel:

    A mouse losing a tail is not a heritable trait. (Weismann, 1899).

    Right, it is an ACQUIRED trait, which both Lamarck and Darwin (pangenesis) thought was also heritable. YOU brought it up, remember?


  103. Natural selection requires the fitness be due to heritable random variation(s)

    Allan Miller:

    Natural selection doesn’t give a damn how the variations were generated, nor what people variously mean when they stick ‘random’ in a sentence.

    And it sure as hell doesn’t care about YOUR misrepresentations. Again I have provided references to support my claims and I can provide more.

    It’s also debatable whether the variation needs strictly to be heritable,

    Who debates that? I have heard debates about whether or not the variation has to be genetic, allowing for behavioural traits that get passed down to be part of NS as they too aid in the survival and reproduction processes.

  104. Must use preview window BEFORE posting-

    Natural selection requires the fitness be due to heritable random variation(s)

    Allan Miller:

    Natural selection doesn’t give a damn how the variations were generated, nor what people variously mean when they stick ‘random’ in a sentence.

    And it sure as hell doesn’t care about YOUR misrepresentations. Again I have provided references to support my claims and uses of the word random and I can provide more.

    It’s also debatable whether the variation needs strictly to be heritable,

    Who debates that? I have heard debates about whether or not the variation has to be genetic, allowing for behavioural traits that get passed down to be part of NS as they too aid in the survival and reproduction processes.

  105. Joe,

    Natural selection requires the fitness be due to heritable random variation

    Two separate things: variation and natural selection.

    Until we learned to manipulate DNA, natural selection and directed selection (breeding) both worked on a basis of variation in the source population. Much of the variation is due to random mutations. Directed selection didn’t require directed variation, and natural selection doesn’t ‘require’ natural variation. Selection of any kind kicks in after variation of any kind is produced.

    But selection and variation are separate processes.

    Look up natural selection in a dictionary.

  106. OK wait, I think I have found something- bear with me:

    Mark Frank has a new post over on TSZ that pertains to Wm Dembski and Robert Marks.

    So if we take Mark Frank and Robert Marks, subject them to crossover, we (can) get Mark Marks.

    Are you still with me? Good

    Mark Marks is the sound that a cleft lip dog makes.

    A cleft lip is caused by a mutation. Mutations are one source of variation. Evolution requires variation.

    Therefor the existence of Mark Marks, which crossover proves can exist, is evidence for evolution!

    Perhaps this should be posted in kairosfocus’s challenge thread… ;)


  107. Natural selection requires the fitness be due to heritable random variation

    Two separate things: variation and natural selection.

    They are not that separate as you cannot have natural selection without the variation. That would make them rather connected.

    BTW I provided definitions of natural selection, one from UC Berkeley and another from a college biology text.

    see posts 70, 75 and 78, then get back to me. Thanks.

  108. Joe,

    They are not that separate as you cannot have natural selection without the variation. That would make them rather connected.

    BTW I provided definitions of natural selection, one from UC Berkeley and another from a college biology text.

    see posts 70, 75 and 78, then get back to me. Thanks.

    I agree any kind of selection requires variation. Natural or artificial selection requires variation in the population to select from.

    One of your definitions mentions heritable variation but it doesn’t say anything about the variation being random.

    You kept saying the variation had to be random.

    I agree that non-random variation makes the whole process non-undirected. But, strictly speaking, selection is separate from the variation. Natural selection is selection by natural, non-directed processes/pressures/effects.

  109. Comment 78:

    Mayr, in “What Evolution Is” says teleology is not allowed. He also says:

    The first step in selection, the production of genetic variation, is almost exclusively a chance phenomenon, except that the nature of the changes at a given locus is strongly constrained.

    Then there is the fact that NS was proposed as a designer mimic which means no directed variation as directed variation is what a designer uses.

    And natural selection isn’t selection of any kind. It is wrongly named so that Darwin could try to fool people. NS is a result of 3 processes.

  110. Joe,

    I agree if the whole process is going to be undirected then no teleology is allowed. Clearly. Even Mayr says ‘almost exclusively a chance phenomenon’, i.e. mostly random.

    But the selection part is separate from the variation part.

    Natural selection is the culling process imposed by the environment. It’s what ‘selects’ some individuals over others. I would have said environmental cull but we use what is traditional.

    And I wouldn’t have called NS a design mimic. Are breeders design mimics? Maybe they are . . . I just wouldn’t have used the term.

  111. Joe,

    I can’t imagine how it would arise, but you could have artificial variation, i.e., introduced by a designer, coupled with natural selection, undirected environmental culling.

    In fact, I thought that was partly your view!

  112. To Allan Miller (at TSZ):

    You have not exactly answered my points. Instead, you add some strange considerations:

    This is simply contradictory. Shuffling bits and pieces of protein is an adaptive mechanism because it increases the power of module shuffling, which is a disruptive mechanism and has limited power of evolutionary exploration? Make your mind up!

    ??? I have said nothing about recombination being disruptive; that was more your discourse. I have only said that it is difficult to evaluate its probabilistic powers, and that anyway it probably has no use for single protein domains, which cannot be deconstructed into selectable functional units. Beyond that, I have explicitly recognized the potential of recombination, and suggested that it could also be an adaptive mechanism. Where is the contradiction?

    The bottom line point to bear in mind is that recombination (distinct from exon shuffling) is blind to gene expression. Totally. So it has nothing to ‘go on’ to establish what would be a legitimate swap and what would not. It is variable across genome length, for sure, for many reasons both ‘active’ and ‘passive’, but it is not attracted by regions that could do with a bit of a shake-up so much as repelled by those which would be better without.

    OK. It is blind. To gene expression. But, if adaptive mechanisms can favour some recombinations that are more likely to produce some type of outcome, where is the problem?

    Would you say that the genetic recombination that produces the basic antibody repertoire is completely blind to gene expression? I would definitely say the opposite. It is obviously a very adaptive mechanism, and a very complex one, already embedded in the genome.

    There are many different kinds of recombination, and I don’t know how much benefit there is in lumping them all together as ‘adaptive’

    I have never said that all recombinations are adaptive. Why do you believe that I said such a thing?

    Recombination due to viruses, transposons, damage misrepair, ectopic misalignment in meiosis – these are no more obviously adaptive in themselves than point mutation.

    I would definitely object regarding transposons.

    But, nonetheless, all recombinations, whether adaptive or not, still promote much wider exploration of protein space than you started off allowing for

    No. I allow for any possible exploration of protein space, but I try to evaluate its probability and credibility. My model, as said many times, is the emergence of basic protein domains, and I maintain that I can’t see how recombination would be helpful for that.

    but this is not always a good thing.

    Obviously.

    Such exploration is not to the benefit of any individual organism, or most genes. It’s just something that happens, and organisms adapt if that-which-happens throws up a beneficial combination – one more source of the spectrum-of-variation on which NS works both positively and negatively.

    In my language, that would simply be NS, not adaptation. Adaptation would imply some active help by some mechanism already embedded in the genome, which goes beyond simple RV, exploiting some active algorithmic information.

  113. To Zachriel (at TSZ):

    Random sequences can form active proteins (Keefe & Szostak, 2001).

    Already been there. I don’t know if you have ever read my long analysis of Szostak’s paper, some time ago. He used RV + intelligent selection by measurement of ATP binding to get to an essentially useless protein. But he never analyzed the original random sequences, which were selected for a mere very weak ability to bind ATP, and then intelligently engineered into the final protein. No good at all. Wrong premises, wrong conclusions, wrong methodology.

    The origin of the original protein domains is still largely conjectural.

    Indeed. You are very clever with words. What an elegant way of saying “We have no idea!” That’s one of the many reasons why I admire you.

    By the way, if you want to find a needle in a haystack, try sitting on it.

    :)

    The algorithm is very simple. The landscape is the dictionary of valid words.

    Oh, yes. The algorithm is very simple. And it has a whole dictionary as an oracle! Simple indeed.

    If they form a word, they enter the population.

    And how does the algorithm know that a word was formed? Ah, I forgot! The dictionary.

    If they do not form a word, they do not enter the population.

    Why am I not surprised?

    A couple of insights: It is possible to evolve long words much faster than random assembly. Recombination is essential to this process.

    That’s fine with me. And I suppose that the dictionary is essential to appreciate the successes of recombination.

    Believe me, I have nothing against your personal algorithms. They are elegant and brilliant, and I like them. In principle, they are not different from Dawkins’ Weasel, but what a difference in class!
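    For readers following along, the kind of algorithm under discussion can be sketched in a few lines. This is my own toy reconstruction from the description in this thread (mutation only, no recombination, to keep it short), not Zachriel’s actual Word Mutagenation code; the hard-coded word set plays the role of the dictionary oracle.

```python
import string

# A tiny stand-in for the dictionary that serves as the oracle.
DICTIONARY = {"AT", "CAT", "COT", "COAT", "GOAT", "MAT", "MOAT", "OAT"}

def variants(word):
    # All single-letter substitutions and insertions of a word.
    for i in range(len(word)):
        for c in string.ascii_uppercase:
            yield word[:i] + c + word[i + 1:]   # substitution
    for i in range(len(word) + 1):
        for c in string.ascii_uppercase:
            yield word[:i] + c + word[i:]       # insertion

def evolve(population, generations=5):
    for _ in range(generations):
        newcomers = set()
        for word in population:
            for child in variants(word):
                # The oracle at work: a variant enters the population
                # only if the dictionary says it is a word.
                if child in DICTIONARY:
                    newcomers.add(child)
        population = population | newcomers
    return population

result = evolve({"AT"})
```

    Starting from “AT”, longer words such as “GOAT” are reached within a couple of generations; every step of that success is adjudicated by the dictionary, which is the point being made above about the oracle.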

  114. To Zachriel (at TSZ):

    Similarly, in protein-space, simple motifs are often repeated, and recombination between sequences that exhibit such motifs are much more likely to generate workable proteins.

    What a pity that there is no dictionary there to select those not-naturally-selectable but much-more-likely-to-generate-workable-proteins motifs.

    If you recombine workable protein sequences, you are much more likely to find a new workable protein sequence than random assembly alone.

    That has nothing to do with my answer, which was dealing with Allan Miller’s discourse about two single neutral mutations. Two single neutral mutations are not, I believe, “workable protein sequences”.

    See my post #90 for context, the paragraph before the one you quoted, which says:

    “The real point is that, while your discourse about recombination can make some sense in the recombination of functional elements, it is of no importance in the case of individual mutations that have no function until they conflate in a more complex output.”

    Zachriel, I have full esteem for your intelligence. At least from you, I would expect criticism of what I really say.

    Natural selection is based on the reproductive fitness of the replicator. There can be many functions that accomplish this aim, so if longer legs provide an advantage, then it can be subject to natural selection.

    True.

    In the abstract, this is done with a fitness landscape,

    Like a dictionary? That’s where all problems start.

    but more detailed simulations are possible. As for the size of the advantage, that is also easily simulated.

    I am ready to consider any algorithm that simulates NS in a credible way, both formally correct and probabilistically appropriate to existing data. I am convinced that such an algorithm would be completely trivial, and would add nothing to what we already know.

  115. Zachriel equivocates:

    Hence, a general example is sufficient for you to see why recombination is an essential evolutionary mechanism.

    Yes, an Intelligently Designed evolutionary mechanism.

  116. Jerad:

    I can’t imagine how it would arise but you could have artificial variation, i.e, introduced by a designer, coupled with natural selection, undirected environmental culling.

    Ok, you don’t understand the definitions of natural selection I provided.

  117. Joe (120)

    Ok, you don’t understand the definitions of natural selection I provided.

    Well, one of us is wrong.

    Evolutionary theory’s main processes are random mutation and natural selection, agreed?

    (There are other sources of variation and mutation covers a wide variety of events. Also there are other selection filters in operation but NS is the biggie.)

    Both RM and NS are undirected. One is unpredictable, random. The other is more predictable and deterministic.

    They do not influence each other . . . mostly. It may be that mutation rates are selected for but that’s not for sure yet.

    Any kind of selection, ‘Natural’, ‘Artificial’, etc, needs a varied population to work with. (Otherwise, what is there to select?) The underlying causes of the varieties must be heritable or they won’t get ‘fixed’ in the population. But selection works no matter what the source of the variation is. It can’t ‘see’ the source of the variation. Selection can only cull from the varieties it’s presented with. It might ‘keep’ all varieties. It might kill them all. Depends on the environment at the time. The dinosaurs had a bad stroke of luck.

    Let’s say a designer was tweaking the mutations in a population but otherwise leaving nature to get on with things. The environmental pressures would still be naturally selecting who lived to reproduce and who died. The designer was not affecting the selection process. That would still be natural selection with a feed of guided mutations.

  118. Jerad:

    Evolutionary theory’s main processes are random mutation and natural selection, agreed?

    No. For one natural selection INCLUDES random mutations:

    Differential reproduction due to heritable random variation (mutation)= natural selection

    So the main mechanisms would be natural selection (if you bundle those 3 processes together as one mechanism) and genetic drift with a little neutral theory sprinkled about.

    Then there is sexual selection and cooperation to consider.

    Both RM and NS are undirected. One is unpredictable, random. The other is more predictable and deterministic.

    Yet Dennett said “there is no way to predict what will be selected for at any point in time.” And whatever is good enough survives and has a chance at reproduction.

    You want NS to be a nice tight bullet when in fact it is more like bird-shot from a sawed-off shotgun.

    Let’s say a designer was tweaking the mutations in a population but otherwise leaving nature to get on with things. The environmental pressures would still be naturally selecting who lived to reproduce and who died. The designer was not affecting the selection process. That would still be natural selection with a feed of guided mutations.

    In that scenario NS would no longer be a designer mimic. Ya see the WHOLE PURPOSE of NS was that it is a designer mimic- design without a designer. Now you want it to be a designer helper of sorts.

    As I said you are totally redefining the term to suit your needs.

  119. Differential reproduction due to heritable random variation (mutation)= natural selection

    Zachriel, still searching for a clue:

    The “random” is extraneous.

    Only to equivocators, like yourself.

    Nor is mutation the only source of variation.

    I never said nor implied that it was. The reason mutation was in () is because I was responding to Jerad who used the word mutation. I didn’t want to put random variation and have him come back with “I said random MUTATION”- I know how ya’ll operate. And here you are, full of your bloviations and confusions.

    Natural selection can occur when there are existing variations in a population,

    Yes, it can. But it doesn’t have to.

    regardless of whether there is a source for novel variations.

    If there isn’t a source for novel variation, then how did the variation get into the population? Ya see if you start out with one genotype then the first time there is a mutation it would be a novel variation.

    Nice job, cupcake

  120. Today’s Junk for Brains winner is Zachriel, who chides ID’ers for “assuming that evolutionary processes are no better than random assembly” while appealing to random assembly by a random process such as recombination.

  121. And the comedy show at TSZ continues.

    Mung:

    There is at least the possibility that a solution will be found among the first 100 randomly generated genomes [in Lizzie's program], though she doesn’t actually check to see if that is the case.

    keiths:

    Suppose Lizzie’s program initialized the genomes to all 0′s. Then there would be no potential solutions among the initial genomes…

    I said randomly generated genomes. I’d say the chance that Lizzie’s program generated 100 strings of all 0′s at random is about the same as her generating CSI.

    Mung: There is at least the possibility that a solution will be found among the first 100 randomly generated genomes, though she doesn’t actually check to see if that is the case.

    zachriel:

    Yes, that’s the nature of randomness and its fit to a landscape.

    iow, I’m right. you know it. But you don’t have the guts to tell keiths.

    Could you slap some sense into keiths whilst the two of you go about getting on the same page?

    Zachriel:

    Yeah, he’s pretty confused on that point.

    right. I’m the one that’s confused.

    keiths:

    Mung’s statement is incorrect. It is not mandatory to encode potential solutions into the genome.

    Every chromosome generated by the GA is a potential solution. Else what is the point of generating them?

    keiths:

    Well, duh. Of course you have to have an encoding.

    And the light goes on. Maybe.

    Allan Miller:

    In a GA you could start with a string of length zero

    What would a string of length zero consist of?

    How would you test the fitness of such a string?

    What would crossover look like?

    Allan Miller:

    So you couldn’t not start with a ‘potential solution’ (‘solution’ in evolution being a fitter genome for now).

    Instead of repeating me, why not just admit I’m right?

  122. What would a string of length zero consist of?

    Strings and perhaps a few branes…

  123. Natural Selection:

    From http://www.biology-online.org/....._selection

    A process in nature in which organisms possessing certain genotypic characteristics that make them better adjusted to an environment tend to survive, reproduce, increase in number or frequency, and therefore, are able to transmit and perpetuate their essential genotypic qualities to succeeding generations.

    From http://www.thefreedictionary.com/natural+selection

    The process in nature by which, according to Darwin’s theory of evolution, only the organisms best adapted to their environment tend to survive and transmit their genetic characteristics in increasing numbers to succeeding generations while those less adapted tend to be eliminated.

    and

    a process resulting in the survival of those individuals from a population of animals or plants that are best adapted to the prevailing environmental conditions. The survivors tend to produce more offspring than those less well adapted, so that the characteristics of the population change over time, thus accounting for the process of evolution

    and

    The process by which organisms that are better suited to their environment than others produce more offspring. As a result of natural selection, the proportion of organisms in a species with characteristics that are adaptive to a given environment increases with each generation. Therefore, natural selection modifies the originally random variation of genetic traits in a species so that alleles that are beneficial for survival predominate, while alleles that are not beneficial decrease. Originally proposed by Charles Darwin, natural selection forms the basis of the process of evolution.

    From http://dictionary.reference.co.....+selection

    the process by which forms of life having traits that better enable them to adapt to specific environmental pressures, as predators, changes in climate, or competition for food or mates, will tend to survive and reproduce in greater numbers than others of their kind, thus ensuring the perpetuation of those favorable traits in succeeding generations.

    From http://www.merriam-webster.com.....0selection

    a natural process that results in the survival and reproductive success of individuals or groups best adjusted to their environment and that leads to the perpetuation of genetic qualities best suited to that particular environment


    and

    Process that results in adaptation of an organism to its environment by means of selectively reproducing changes in its genotype. Variations that increase an organism’s chances of survival and procreation are preserved and multiplied from generation to generation at the expense of less advantageous variations. As proposed by Charles Darwin, natural selection is the mechanism by which evolution occurs. It may arise from differences in survival, fertility, rate of development, mating success, or any other aspect of the life cycle. Mutation, gene flow, and genetic drift, all of which are random processes, also alter gene abundance. Natural selection moderates the effects of these processes because it multiplies the incidence of beneficial mutations over generations and eliminates harmful ones, since the organisms that carry them leave few or no descendants.

    From http://oxforddictionaries.com/.....Bselection

    the process whereby organisms better adapted to their environment tend to survive and produce more offspring. The theory of its action was first fully expounded by Charles Darwin, and it is now regarded as the main process that brings about evolution.

    From http://www.answers.com/topic/natural-selection

    the principle that the best competitors in any given population of organisms have the best chance of breeding success and thus of transmitting their characteristics to subsequent generations. The members of any population show individual differences — anatomical, physiological, or metabolic — that affect their functional efficiency in a given environment. The less efficient members tend to die out or produce fewer offspring than the more efficient members, which are better adapted to compete for food or other resources, and so produce relatively more offspring. The principle is fundamental to modern concepts of evolution, and was first articulated, independently, by the British naturalists Alfred Russel Wallace (in 1858) and Charles Darwin (in 1859).

    and

    the gradual, non-random process by which biological traits become either more or less common in a population as a function of differential reproduction of their bearers. It is a key mechanism of evolution. The term “natural selection” was popularized by Charles Darwin who intended it to be compared with artificial selection, what we now call selective breeding.
    Variation exists within all populations of organisms. This occurs partly because random mutations cause changes in the genome of an individual organism, and these mutations can be passed to offspring. Throughout the individuals’ lives, their genomes interact with their environments to cause variations in traits. (The environment of a genome includes the molecular biology in the cell, other cells, other individuals, populations, species, as well as the abiotic environment.) Individuals with certain variants of the trait may survive and reproduce more than individuals with other variants. Therefore the population evolves. Factors that affect reproductive success are also important, an issue that Charles Darwin developed in his ideas on sexual selection, for example. Natural selection acts on the phenotype, or the observable characteristics of an organism, but the genetic (heritable) basis of any phenotype that gives a reproductive advantage will become more common in a population (see allele frequency). Over time, this process can result in populations that specialize for particular ecological niches and may eventually result in the emergence of new species. In other words, natural selection is an important process (though not the only process) by which evolution takes place within a population of organisms. As opposed to artificial selection, in which humans favour specific traits, in natural selection the environment acts as a sieve through which only certain variations can pass.

    From http://www.macroevolution.net/.....G0xxbTA7Sk

    A definition of natural selection with regard to individuals (the definition given under conventional theory):

    A natural process taking place within a population. During this process individuals with certain heritable traits produce more offspring than individuals lacking those traits. Since the traits are heritable, individuals with those traits become more common in the population.

    A definition of natural selection with regard to forms of life (the definition given under stabilization theory):

    A natural process taking place within successive similarity sets. A similarity set is a set of forms all of which can hybridize with at least one other form in the set. Some forms within such a set will hybridize to produce new types of organisms (or produce them by any other stabilization process) at greater rates than other forms in the set. The heritable traits of those forms producing more offspring forms will, obviously, be present in a higher proportion of offspring forms in subsequent similarity sets, than will the traits of those forms that do not succeed in producing so many offspring forms. With the passage of time, a higher proportion of forms in the similarity set will possess those traits that have assisted in the production of new offspring types

    From Page 11 “Biology: Concepts and Applications” Starr fifth edition

    Natural selection is the result of differences in survival and reproduction among individuals of a population that vary in one or more heritable traits.

    From http://evolution.berkeley.edu/.....ndom.shtml

    Natural selection is the simple result of variation, differential reproduction, and heredity—it is mindless and mechanistic. It has no goals; it’s not striving to produce “progress” or a balanced ecosystem.

    From http://www.answersingenesis.or.....-evolution

    From a creationist perspective natural selection is a process whereby organisms possessing specific characteristics (reflective of their genetic makeup) survive better than others in a given environment or under a given selective pressure (i.e., antibiotic resistance in bacteria). Those with certain characteristics live, and those without them diminish in number or die.

    From http://creationwiki.org/Natural_selection

    Natural selection does not create new traits in organisms: it only favors the spreading of advantageous pre-existing traits, and disfavors the spreading of disadvantageous pre-existing traits. In other words, selection is the inbreeding of favored genes, which reduces the diversity of genetic information in a population, and (in the absence of some other source for genetic diversity to outpace selection) produces a purebreed or genetic homozygote for the trait in question. The result is that organisms become highly tailored to their environment over time, and harmful mutations are kept from spreading throughout the population.

    and

    The fact that natural selection happens is acknowledged by both evolutionists and creationists. Organisms have been repeatedly observed adapting to their environment, and the role of natural selection in this process is observable and beyond reasonable dispute. The point of dispute is over the origin of genetic information and the cellular mechanisms responsible for maintaining and manufacturing genetic diversity

    Joe

    Natural selection requires that fitness be due to heritable random variation(s)

  124. keiths:

    Suppose Lizzie’s program initialized the genomes to all 0′s. Then there would be no potential solutions among the initial genomes

    That is false. Again, you demonstrate that you don’t understand what is being discussed. It would still be a potential solution. Just not a good solution. Just not an actual solution. A string of 500 0′s is still in the search space.

    But I was reminded of a challenge I had issued. That challenge consisted in setting all strings to the same initial value, rather than having them randomly generated.

    So please, have Lizzie initialize all her starting population of strings to all 0′s. By all means. Let’s see how well it performs then.

  125. onlooker, choosing willful ignorance, sez:

    Heritable variation with differential reproductive success does, demonstrably, generate large amounts of functional complexity, according to your own definition.

    As has been explained to you, ad nauseam, heritable variation with differential reproductive success is the dFSCI that needs to be explained in the first place.

    And it has yet to be demonstrated that it can generate any dFSCI wrt living organisms.

  126. Real world Darwinian evolution can also solve problems (and generate dFSCI) without any information from the environment other than “better” (you survived and produced lots of viable offspring) and “worse” (you died early or failed to reproduce for some other reason).

    So you say, yet cannot support. Not only that, but real-world Darwinian evolution STARTS WITH the very dFSCI that requires an explanation in the first place.

  127. Mike Elzinga:

    We see the same misconceptions regarding ID/creationist confusion about genetic algorithms. They want every generation in the program to be completely scrambled.

    No, we just don’t like people calling each member in a new generation an outcome of 500 coin tosses when it isn’t.

  128. To Mike Elzinga-

    We see the same misconceptions regarding evolutionists’ confusion about genetic algorithms. They want to use a known design mechanism as an example of a blind watchmaker mechanism.

    And Mike, you need to explain how a new generation can arise- IOW GAs start with the very thing that your position needs to explain in the first place.

  129. keiths@TSZ:

    Yes. That’s why IDers are so fearful of actual studies of actual fitness landscapes.

    They have already conceded that Darwinian evolution can work, given the right fitness landscape. Their only hope is to show that real fitness landscapes don’t have the necessary characteristics.

    lol.

    Darwinian evolution does not need “the right fitness landscape” to work. (What would a “wrong” fitness landscape look like?)

    Your problem, keiths (and apparently the problem of a few others over there at TSZ), is that you don’t know what a fitness landscape represents.

    You think fitness landscapes lead to eyes, and elbows, and asses. And if they did, one would have to infer design was behind it. IDists have no reason to fear fitness landscapes.

    Joe Felsenstein, is there some reason you don’t speak up? Explain fitness landscapes to poor keiths.

  130. chromosome encoding (cont.)

    : Genetic Algorithms in Search, Optimization, and Machine Learning

    Genetic algorithms are different from more normal optimization and search procedures in four ways:

    1. GAs work with a coding of the parameter set, not the parameters themselves.
    2. …
    3. …
    4. …

    Genetic algorithms require the natural parameter set of the optimization problem to be coded as a finite length string over some finite alphabet.

    Some of the codings introduced later will not be so obvious, but at this juncture we acknowledge that genetic algorithms use codings.

    To use a genetic algorithm we must first code the decision variables of our problem as some finite length string.
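    Goldberg’s point, that a GA works on a coding of the decision variables rather than on the variables themselves, can be sketched in Ruby (the 10-bit length and the [0, 1) range here are my own arbitrary choices, not anything from the book):

```ruby
# Sketch: a real-valued decision variable coded as a fixed-length
# bit string over the finite alphabet {0, 1}.
BITS = 10  # hypothetical string length

# Encode a value in [0, 1) as a BITS-long binary string.
def encode(x)
  (x * (2**BITS)).to_i.to_s(2).rjust(BITS, '0')
end

# Decode a binary string back into a value in [0, 1).
def decode(bits)
  bits.to_i(2) / (2**BITS).to_f
end

puts encode(0.5)  # => "1000000000"
```

    The GA then mutates and recombines the bit strings; only the fitness evaluation ever decodes them back into the natural parameters.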

  131. genome-length

    Return the number of bits required to encode a candidate solution in the context of a particular problem.

    http://www.softwarematters.org/ga-engine.html

    HT: “MathGrrl”

    LAWL

  132. Well, I think I finally got a grasp on Lizzie’s problem over at TSZ. She thinks a description of a function is a description of an observed pattern.

    Not sure how that works, but it really does appear that she believes that.

    IOW, if I define a function that simulates 10 coin tosses and weight it such that the chance of a heads is greater than the chance of a tails, as might happen with a weighted coin, I can simply describe the function as “generates more heads than tails.”

    So if I have a sequence of heads and tails, for example, ttHtHHHtHH, I can simply describe this as more heads than tails and claim I have CSI. Or some such malarky.
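    A minimal sketch of the weighted-coin function described above (the 0.7 bias is my own assumption; no number is given in the comment):

```ruby
# Sketch: n tosses of a coin biased toward heads with probability p_heads.
def weighted_tosses(n, p_heads = 0.7, rng = Random.new)
  Array.new(n) { rng.rand < p_heads ? 'H' : 't' }.join
end

puts weighted_tosses(10)  # e.g. "HtHHHtHHtH"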

  133. “Well, I think I finally got a grasp on Lizzie’s problem over at TSZ. She thinks a description of a function is a description of an observed pattern.”

    But aren’t these the same people arguing, against the recent ENCODE findings, that the vast majority of the genome is still junk because ENCODE’s definition of functionality was too loose??? By Lizzie’s even looser definition than ENCODE’s I guess we can now declare that neo-Darwinists hold the genome to be virtually 100% functional :)

  134. To TSZ:

    No, keiths, I wasn’t trying to change the subject, and your response gives fair evidence that I wasn’t.

    Mung:

    For example, potential solutions must be encoded into a “chromosome.”

    keiths: “You’re Wrong!”

    Well, no, I’m not. Your attempts at logical refutation are no substitute for the facts.

    I have quoted numerous sources now, including posters right there at TSZ, that agree with what I wrote. Don’t blame me if they don’t think enough of you to set you straight.

    Mung:

    GAs work with a coding of the parameter set, not the parameters themselves.

    Zachriel:

    Well, yes. That’s the genetic part of genetic algorithms, which are a subset of evolutionary algorithms. So?

    So. Tell that to keiths.

    Zachriel:

    ..perhaps you would like to respond to our actual point.

    Which was:

    Z: That’s the error of IDists. They assume that evolutionary processes are no better than searching completely randomized sequences…

    Well, I think that’s a straw-man. That’s my response.

    Of course, if you can point out where I’ve actually made such an assumption then it might not in fact be a straw-man. But until then…

    Perhaps you are referring to the nature of the landscape.

    I am referring to the need for an encoding. The nature of the landscape is not known until it’s known (but that’s another topic).

    keiths disputes that there is any encoding involved.

    Mung:

    Every chromosome generated by the GA is a potential solution.

    keiths: “You’re Wrong!”

    Zachriel:

    Of course they are *potential* solutions…

    Please explain it to keiths.

    I’m saying things, you’re agreeing with me, and keiths continues to just plow ahead.

    keiths:

    A string of 500 0′s cannot be a solution. Something that cannot be a solution is not a “potential solution.”

    Sheesh. You don’t know that it’s not an actual solution until you measure its “fitness.” That’s why it’s a “potential” solution.

    keiths:

    Obvioulsy [heh], and if that’s all you meant by “potential solution” then I would have no objection. However, you clearly think that information has to be smuggled into the initial genomes

    What the heck did you think I meant? And where did I use the word “smuggled”?

    Change the 0′s and 1′s in Lizzie’s “genomes” to T’s and H’s, change her mutation function to flip T’s and H’s instead of 0′s and 1′s and then see how well her fitness function works when it can’t find any contiguous 1′s to count or any ’0′ to separate them.

    So yeah, there’s information in the chromosome. It’s not smuggled in, it’s encoded. man oh man.

    Zachriel:

    Landscapes that are amenable to evolutionary algorithms usually exhibit local structure. Indeed, some IDers argue that protein fitness landscapes are too rugged for evolution to be effective.

    We’ll need to clarify what is meant by “landscape.” To me a fitness landscape isn’t something that is there waiting to be discovered (or climbed, ala Mount Improbable), it’s something that is created as populations evolve.

    Now I think it may be true that there are heights that creationists and ID theorists think cannot be reached by a given population in a given scenario because they just can’t produce the required rate of reproduction.

    But we may be talking past one another and it’s best to clarify terms.

  135. Allan and Zachriel:

    The way I understand Zachriel’s argument, he is appealing to a bag or assortment of pre-existing components (aka protein domains) that can be used in proteins, and to the idea that their availability for use somehow lends less of a random character to the process (making a functional protein more likely), even though the main proposed mechanism for this shuffling is recombination, itself a random process, and the protein domains themselves also arose largely as a result of a random process (perhaps “guided” by “natural selection”).

    Anyways, rather than attack a straw man I’ll pause here and give you all a chance to respond.

    Personally I have no conflict with regular repeated processes going on inside living organisms because to me that smacks of teleology. :)

    So if recombination helps organisms adapt, I am totally cool with that (as long as it’s an accurate description).

    I was just trying to make sure you all weren’t insisting on talking about what can be constructed from the parts while gpuccio is trying to talk about how the parts were constructed.

  136. Mung World

    To my admirers at TSZ.

    I tossed my program together in a short evening. I am actually rather pleased with it, I even managed to make it object-oriented (for the most part).

    However I freely admit it is not an exact duplicate of Lizzie’s program just written in another language, I rather attempted to capture the “spirit” of what she built.

    It’s a bit rough around some of the edges, but I would like suggestions on how it can be improved.

    I call my digital organisms LiddleLizzards, in honor of Elizabeth.

    Here’s my LiddleLizzard class. I think the first thing that can use improvement is the mutate method, it’s pretty rough. ;).

    # mutates this chromosome
    def mutate
      chromosome[rand(500) - 1] = '1'
    end

    All I do here is set one position in the chromosome to a ’1′. If it’s a zero it gets changed, if it’s a ’1′ it’s like a neutral mutation. I don’t know what that cashes out to in terms of a mutation rate, if someone wants to tell me.

    Some potential modifications:

    1. Set the chosen locus to either a zero or a one, that would not be too difficult to code.

    2. Explicitly set the mutation rate.

    3. Create a Mutation object that is passed in when the digital organism is created that encapsulates its mutation parameters.

    4. Pass in the length of the string to generate rather than hard-coding it in a constant.
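    A rough sketch of suggestions 1, 2, and 4 above (my own code, not Lizzie’s program and not the class as actually posted):

```ruby
# Sketch of a revised LiddleLizzard: the locus mutates to a random
# '0' or '1' (suggestion 1), the mutation rate is explicit
# (suggestion 2), and the length is passed in (suggestion 4).
class LiddleLizzard
  attr_reader :chromosome

  def initialize(length = 500)
    @chromosome = '0' * length
  end

  def mutate(rate = 1.0)
    return if rand >= rate  # explicit per-call mutation probability
    @chromosome[rand(@chromosome.length)] = ['0', '1'].sample
  end
end
```

    Note that mutating to either symbol removes the “latching” behavior of the original one-liner.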

    Honest evaluation, criticism, and suggestions for improvement are welcomed. You can leave comments at that link as well.

  137. keiths@TSZ

    I see you’re also confused about fitness landscapes. Here’s a thought: Wouldn’t it make sense to learn about evolution and GAs before condescending to people who actually understand them?

    It wasn’t entirely clear what sort of landscape(s) he was talking about, so I decided to wait and find out. You, otoh, plow ahead unabated.

    Are you saying you understand evolution and GA’s, or are you hoping someone else is going to save you again?

  138. After you answer, go ahead and take your version of Lizzie’s program, set the initial genomes to all 0′s, and let us know what happens.

    Using my own program for that test would be a little silly, since its function is to maximize the number of contiguous ’1′s. Sorry to disappoint. (not really)

  139. Allan Miller@TSZ:

    While the ‘replication’ function of biological replicators is a vital part of the string, that role is taken by the copy method in a GA, so the strings themselves don’t actually need to consist of anything at the start. The point of bringing them up is to point out that such strings are not likely to be ‘solutions’ to any worthwhile GA, so you aren’t necessarily ‘pre-seeding’ the population with anything.

    So consider the zero-length digital organism as the absolute minimal replicator common to all GAs. As long as a method exists that occasionally adds random bits to a string, something will soon emerge, and variations between these ‘non-null’ bit-strings can be evaluated by the selection module. A set of strings of length zero evidently cannot vary, but they can still ‘compete’ via drift. You can still replicate and remove strings of length zero from a population.

    Hi Allan,

    : Introduction to Evolutionary Computing

    The choice of representation forms an important distinguishing feature between different streams of evolutionary computing. From this perspective GAs and ES can be distinguished from (historical) EP and GP according to the data structure used to represent individuals. In the first group this data structure is linear, and its length is fixed, that is, it does not change during a run of the algorithm.

  140. Mung,

    You would have better luck trying to sweep sand from a beach than to try to reason with the TSZ ilk. They really believe Lizzie created CSI using natural selection. And that means they are hopelessly ignorant of both CSI and natural selection.

    And it is very telling that they cannot provide any real world examples of natural selection actually doing something, let alone producing CSI.

  141. Lizzie’s program is not a real world example? And what about mine? Doesn’t it generate CSI?

  142. A Question About CSI

    I have a program that generates a random string of ASCII characters.

    “y\x1EB.\x01UF\x1CLy)V(PHP\x04\rp=v~ i/pG\e_@\x0E\x06kf-FH\x00VBM]\ttLyu&eN\x1D>gA9*=o{cGV\x182ORh\x12uH56″

    I can simply describe this string as “a program generated string.” Why doesn’t that demonstrate that I have “generated CSI”?
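    A generator of that sort can be sketched in a few lines (this is my own illustration, not the program that produced the string above):

```ruby
# Sketch: a string of `length` characters drawn uniformly from the
# 128-symbol ASCII set, control characters included.
def random_ascii(length, rng = Random.new)
  Array.new(length) { rng.rand(128).chr }.join
end

puts random_ascii(72).inspect
```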

  143. gpuccio, I hope you’re reading this:

    http://www.evolutionnews.org/2.....65001.html

    HT: BA77

    In other words, if two phylogenetic trees aren’t congruent, the problem isn’t that common descent is wrong, but rather the conflict is simply evidence of HGT.

    Remember when the strong evidence for evolution was the vaunted “twin nested hierarchy”?

  144. Mung @147:

    Ah, yes, HGT. That convenient get-out-of-jail-free card when problems come up with the traditional evolutionary storyline. HGT is a real phenomenon, to be sure. Just doesn’t have the power that adherents claim it does.

    HGT, “I [laugh] in your general direction.”*

    * Apologies to Monty Python.

  145. Allan Miller@TSZ

    What Mung has there is a 735-bit individual from Lizzie’s digital binary menagerie (assuming his program generates all possibilities from the ASCII set, including control characters).

    Hi Allan,

    I apologize for the confusion. If it exists it’s my fault not yours.

    I do believe that the string you are referring to in your post is the one here.

    That is from a completely different program.

    I first wrote a command line program that allows a user to start the program, passing in two values. One value to set how many binary characters should be in the string (the length) and the other value to set the number of strings to generate.

    Having generated the strings the program then converts each one into an ASCII string based upon the underlying bit stream.

    Here is the link to that program:

    https://gist.github.com/3816082

    It’s not a GA and has no fitness function. It’s a demonstration of how unlikely it is to get meaningful text. So if the program were to produce a string of 72 characters that match precisely the first 72 characters of this post, it would be reasonable to make a design inference.
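    The arithmetic behind that inference is easy to check: each of 128 equiprobable ASCII characters carries 7 bits, so one specific 72-character match costs 72 × 7 = 504 bits, just past the 500-bit threshold discussed earlier in this thread:

```ruby
# Sketch: bits of information in one specific 72-character draw
# from a uniform 128-symbol ASCII alphabet.
chars = 72
bits_per_char = Math.log2(128)  # 7.0
puts chars * bits_per_char      # => 504.0
```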

    I do appreciate your observations.

  146. Corrected link:

    Allan Miller@TSZ

    But the real point of that exercise was to ask, is that CSI? If not, why not? What makes my string and its description any different from what Elizabeth has done?

    oh look! Randomly generated string. That’s a pithy description of the pattern. Therefore, CSI.

    I’m questioning her interpretation of what Dembski means by a specification.

    1. There must be a pattern.

    2. It must be simply describable.

    3. The description must describe the pattern, not the function used to help produce the pattern.

  147. yikes!

    ok, i have to stop using that @ sign.

  148. Zachriel: How’d you guess our PIN?

    that made me laugh :)

  149. Toronto:

    You don’t understand what Elizabeth is trying to do, do you?

    Yes, I understand exactly what Elizabeth is trying to do. Where were you before she launched TSZ?

    Instead of coding right away, why don’t you just describe, in English, what you think you are attempting to do with your exercise.

    Well-written code is as descriptive as English text.

    If you can’t understand the code speak up, it means I have not communicated well.

  150. lol. So I posted code and asked for comments, some of them are even funny.

    JonF

    The code is indeed atrocious and unlike anything that’s been discussed before.

    I guess by atrocious JonF means it’s unreadable and doesn’t even perform the intended function. He doesn’t say why he thinks it’s atrocious. He certainly doesn’t say anything constructive.

    I think it’s readable. Others there at TSZ seem to have no problem reading it. And it certainly did what I asked of it, so …

    keiths:

    I had a look at the code for Mung’s LiddleLizzard class. It’s atrocious, and it bears no resemblance to Lizzie’s program.

    Is this supposed to be a criticism?

    My code is written in Ruby, hers is written in MATLAB.

    My code is object-oriented, hers is barely even procedural.

    I’ve only posted a single class.

    Her code is atrocious if you ask me.

    Any constructive criticism?

    keiths:

    His ‘mutate’ method sets a random bit in the chromosome to 1. It never sets bits to 0. That’s right — Mung’s program latches! KF will be apoplectic.

    So? You think the results will be different if I mutate to either 0 or 1? Will that prevent me from generating CSI, like Lizzie?

    News Flash! Latching prevents algorithmic CSI generation!

    I would think it encourages it, but what do I know. =P

    keiths:

    Mung’s fitness function looks for the longest sequence of consecutive 1′s in the chromosome. The length of that longest sequence is the fitness value. That’s it. No kidding.

    So? Will that prevent me from generating CSI, like Lizzie?

    Do I have a single pre-specified target?

    Why can’t fitness be judged according to “the longest sequence of contiguous 1′s”? You don’t say.

    Why is Lizzie’s choice of fitness function better than mine? You don’t say.
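    The fitness function keiths describes, the length of the longest run of contiguous 1′s, takes one line of Ruby (again a sketch, not the code as posted):

```ruby
# Sketch: fitness = length of the longest run of consecutive '1's.
def fitness(chromosome)
  chromosome.scan(/1+/).map(&:length).max || 0
end

puts fitness('0011101111')  # => 4
puts fitness('0000')        # => 0
```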

    Either Mung has absolutely no idea what Lizzie’s program does, or he doesn’t know how to code.

    And the evidence that you know what Lizzie’s code does is?

    I just created a very clean (though admittedly minimalist) class in Ruby. It shows that I do know how to code. Maybe you just don’t understand object-oriented programming.

    My background:

    Basic
    awk
    C
    C++
    Java
    Ruby

    I’ve actually sold a program I developed.

    Other programs I’ve developed have produced significant income for me.

    Now I just give them away for free.

    I just happen to love Ruby. So shoot me.

    At least I’ll die happy!

    :)

    For you real coders out there, try Ruby!

  151. Fitness Landscapes

    DrBot on October 5, 2012 at 12:32 pm said:

    Mung:

    Think about what a fitness landscape for a 64 bit encryption key would look like – you have 18,446,744,073,709,551,616 possible key values and only one of them is right.

    I’m sorry, but that question doesn’t even make sense to me.

    64 bit encryption keys don’t exhibit differential reproduction, do they? In what sense is one 64 bit encryption key more fit than another 64 bit encryption key?

    So do I perhaps have a point about context?

  152. Mung:

    I have a program that generates a random string of ASCII characters.

    “y\x1EB.\x01UF\x1CLy)V(PHP\x04\rp=v~ i/pG\e_@\x0E\x06kf-FH\x00VBM]\ttLyu&eN\x1D>gA9*=o{cGV\x182ORh\x12uH56?

    I can simply describe this string as “a program generated string.” Why doesn’t that demonstrate that I have “generated CSI”?

    The bolded indicates that any string of like length would do, so it is not specific. The default on the S remains at 0. Chi_500 = – 500.
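    As an aside, KF’s Chi_500 verdict above can be sketched in code. This is my reconstruction and an assumption, not KF’s own code: Chi_500 = I·S − 500, where I is the information measure in bits and S is 1 for a specified pattern, 0 otherwise.

```ruby
# Assumed form of KF's Chi_500 metric: chi = I*S - 500,
# where I is information in bits and S is the specificity
# flag (1 = specified, 0 = not specified).
def chi_500(info_bits, s)
  info_bits * s - 500
end

chi_500(500, 0) # S defaults to 0, so the verdict is -500
```

    With S at its default of 0, the result is −500 no matter how long the random string is, which matches the verdict above.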

    Next prob.

    KF

  153. DrBot:

    Mung – here is a fitness landscape for a 10 bit key (1024 possible values) – visualised in two dimensions.

    What is the most effective method of navigating this fitness landscape?
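    DrBot’s landscape is easy to sketch. The key value below is hypothetical; the point is that every wrong key scores the same, so the landscape gives a search nothing to climb:

```ruby
# A needle-in-a-haystack "fitness" landscape for a 10-bit key:
# exactly one of the 1024 candidate values scores 1, all the
# rest score 0. (SECRET_KEY is a made-up example value.)
SECRET_KEY = 0b1011010010

def key_fitness(candidate)
  candidate == SECRET_KEY ? 1 : 0
end

(0...1024).map { |k| key_fitness(k) }.sum # => 1
```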

    Why are you calling it a fitness landscape? Fitness is a measure of differential reproduction. So, context.

  154. olegt on October 5, 2012 at 1:15 pm said:

    Mung, you do need to modify the mutation function to switch a bit randomly in either direction.

    olegt, thank you for your response.

    If I were attempting to simulate biological evolution, I think you might have a valid point, though there seems to be some debate over there at TSZ on this point.

    But the purpose of my “GA” is not to simulate biological evolution, but rather to simply illustrate how easy it is to generate CSI. Lizzie, imo, made her program unnecessarily complex. But she perhaps had a different goal in mind than I did.

    If you think Lizzie’s program generates CSI, I would like to know why you think mine doesn’t.

    Does the fact that my mutate function “latches” make a difference that makes a difference?

    Assume that it also generates a ’0′. At first, the chance that this makes a difference is 50/50. In a randomly generated sequence of 0′s and 1′s, trying to “replace” one position in that sequence with a 0 is as likely as not to have no effect.

    So I suppose that the argument here is that my mutation function is not “non random wrt fitness.” Given my “fitness” function, I would grant that objection. If the goal is to simulate Darwinian processes, that would be relevant. But the goal is to generate CSI.

    If that is in fact the objection, I would level the same objection against Lizzie’s program.

    That said, I may yet modify my “GA” to incorporate your suggestion, with the proviso that the change is not relevant to the argument, but rather to remove it as a point of contention.

    I did recognize this as a potential objection before it was raised.

    Mung:

    1. Set the chosen locus to either a zero or a one, that would not be too difficult to code.
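    A minimal sketch of that change, assuming chromosome is a String of “0”/“1” characters (this is olegt’s suggestion coded up, not Mung’s posted method):

```ruby
# Unlatched mutate: pick a random locus and write a random bit
# there, so mutation can set a position to "0" as well as "1".
def mutate(chromosome)
  locus = rand(chromosome.length)
  chromosome[locus] = ["0", "1"].sample
  chromosome
end
```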

    Let me again express my thanks for your post and your tone.

  155. olegt on October 5, 2012 at 1:15 pm said:

    I am also not sure how your fitness is defined. Not knowing Ruby, I can’t parse this piece of code:

    ok, this is partially Ruby and partially a regular expression, which is not Ruby specific.

    def fitness

    Begins the definition of a method named “fitness” that may be called on this particular organism to determine its fitness. A message can be sent to any organism that can respond to this message, asking it to provide its fitness.

    If you understand object-oriented programming this should be clear. If not, just ask and I will try to explain and clarify my code. This is not intended as any sort of an insult. Good code is readable and understandable.

    score = 0

    The default fitness is 0. Some “objectors” at TSZ appeared to criticize this; they probably just didn’t understand the code.

    chromosome.scan(/1*/).each do |str|
    score = str.length if str.length > score
    end

    This is probably what you were asking about. chromosome is a String. The scan method iterates over the string looking for patterns (this is where regular expressions come in).

    A regular expression (regex or regexp for short) is a special text string for describing a search pattern.

    http://www.regular-expressions.info/

    We are looking for patterns in the “chromosome.”

    Each pattern in the chromosome that matches the regular expression (a run of zero or more consecutive ones) is assigned to the variable str. (chromosome.scan(/1*/).each do |str|)

    So then we are interested in how many contiguous “1″s are in the string that matched the pattern.

    score = str.length if str.length > score

    If we find a run of 1s that has more 1s than the previous longest run, we assign that length to the score.

    So “fitness” is decided based upon the number of contiguous 1′s that are found in the sequence.

    If you have questions about this, please ask. I will try to answer/explain.
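    For convenience, here is the whole method assembled in one place (the same logic as the fragments above, assuming chromosome is a String of “0”/“1” characters):

```ruby
# Fitness = length of the longest run of consecutive "1"s.
# /1*/ matches a run of zero or more "1"s at each position;
# the empty matches have length 0 and never raise the score.
def fitness(chromosome)
  score = 0
  chromosome.scan(/1*/).each do |str|
    score = str.length if str.length > score
  end
  score
end

fitness("0110111010") # longest run is "111", so fitness is 3
```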

    But of greater interest to me is, why does my GA not model the same thing as Lizzie’s GA? Did I not generate CSI? Why not?

  156. So, posters at TSZ question whether I understand Lizzie’s “GA.”

    I question whether they understand Lizzie’s “GA.”

  157. Mung,

    Perhaps if your string won a prize it would be CSI. The prize would be a target and some specified string would be the winner. And if that string is 500 bits or more, then it is CSI because it specified a winning combination.

  158. Zachriel on October 5, 2012 at 1:17 pm said:

    A fitness landscape represents relative reproductive fitness.

    ok, good, we are on the same page.

    keiths is an idiot.

    Joe Felsenstein abdicates his position.

    It can represent a real-life situation, such as protein function, or be an abstraction.

    Oh, so a fitness landscape is a representation?

    It represents relative reproductive fitness?

    Is protein function synonymous with reproductive rate?

    If not, you’re spouting Bee period Ess.

  159. olegt:

    If I understand this correctly, your fitness function is the length of the longest string of ones in a given binary sequence. That does not match Lizzy’s fitness function.

    So?

    I never asserted that my fitness function matched Lizzy’s fitness function in every detail. What’s wrong with my fitness function?

    Why does Lizzy’s fitness function lead to the generation of CSI while mine does not?

    Lizzie claims her fitness function maximizes something. So does mine.

  160. olegt:
    Your program has not run yet. It did not produce digital organisms from a suitably small target space.

    My program did run.

    Your claim that my program did not produce digital organisms from a suitably small target space is not consistent with your claim that my program did not run.

    Your claim that my program did not produce digital organisms from a suitably small target space is simply false.

  161. Toronto on October 6, 2012 at 4:43 am said:

    Why does that matter?

    Because ignorance is no excuse.

  162. Toronto:

    Object oriented code was being written before C++ or SmallTalk or any other language that explicitly defined objects existed simply because experienced programmers started to use structures in that manner.

    Object oriented code was being written before the concept of objects even existed. What a moron.

  163. Toronto:

    Any “runnable” code is not going to be easy for everyone to figure out if they are not familiar with the syntax and operators of that language.

    If you use pseudocode instead, everyone no matter what their language of choice is can understand it.

    No explanation of why “pseudo code” is more understandable is offered.

    No explanation of why “pseudo code” is any different from “runnable” code is offered.

  164. keiths:

    It’s a real pleasure to disagree with an intelligent person who’s open to reasoned argumentation and actually tries to understand what I’m saying, and why.

    ok, so who at TSZ has had the balls to disagree with the garbage you’ve asserted?

  165. keiths:

    The corrected fitness function should work like this:

    1. If the genome is all 0s, return a fitness of 0.

    Demonstrating, for all who care to see, that you are a complete moron.

  166. Toronto:
    It’s even better to simply say in English what you want to do before coding and get feedback before you commit to anything.

    Working code trumps whatever fantasy world you’re in.

  167. keiths:

    And there should be a mutation rate parameter that is independently applied to each bit. Currently, every call to mutate() sets exactly one bit in a genome, never more or less.

    So?

  168. Toronto at TSZ claims to understand:

  169. olegt:

    If I understand this correctly, your fitness function is the length of the longest string of ones in a given binary sequence. That does not match Lizzy’s fitness function.

    So?

    My fitness function does not generate CSI and hers does?

  170. Toronto:
    Let me guess; you generate a “specification” for your code after the “functionality” has been observed.

    So?

  171. Mung:

    Why does Lizzy’s fitness function lead to the generation of CSI while mine does not?

    If it makes you feel any better Lizzie’s did not lead to the generation of CSI and Lizzie still hasn’t demonstrated any understanding of what CSI is. For that matter no one over on TSZ appears to understand what CSI is.

  172. keiths sez:

    I wonder what Dembski and Marks make of quantum randomness. Radiation is a quantum phenomenon, and radiation can cause mutations. Thus mutations can be the result of an inherently random process. Yet they can also change the course of evolution. Do Dembski and Marks think that God (or Satan, or angels, or some other non-material intelligence) is the source of every random quantum event? If not, then random quantum events are a second source of information besides non-material intelligences (a category which includes humans, according to Dembski).

    No, keiths, that is NOT a SOURCE of information. It is a source for changing existing information.

    For the record- ID does NOT state that random effects never happen in a designed universe. ID does NOT state that random mutations never occur.

    But please keep humping your strawman arguments. It is entertaining.

  173. How’d you guess our PIN?

    Add PIN to the list of things Zachriel doesn’t understand.

  174. olegt:

    Sorry, Mung, but from this it is safe to conclude that you don’t understand what Lizzie did.

    Lizzie wrote a program that allegedly simulates natural selection producing CSI. Unfortunately for Lizzie it does neither. But that won’t prevent you from continuing to claim otherwise. And we wouldn’t expect anything less…

  175. keiths:

    We’re still waiting for you to modify your program to address your own challenge, and to explain why you think the challenge is relevant.

    Well if it takes specified complexity to get replication, and it does, then if you start out without any specified complexity then you can’t even get started.

    And that is why Lizzie fails to produce CSI- she smuggles in specified complexity by just granting reproduction with variation.

  176. Joe, I think perhaps you are correct.

    I didn’t try to make a game of it.

  177. To observers @ TSZ:

    There seems to be some misunderstanding about my program. Its fundamental purpose is to show how easy it is to generate CSI using a simple algorithm. Isn’t that what Lizzie’s program is designed to show as well?

    IMO, Lizzie’s program is needlessly complex and takes too long to achieve the desired result.

    The fundamental question that needs to be asked and answered is, why does her program generate CSI while mine does not?

    I’ll be incorporating some of the suggestions I’ve seen there at TSZ, but first I want to code up a way to track historical information so I can observe the effects of changes made to the program.

    I hope to work on that today.

    I’ll also post a link to my LizzardPopulation code.

  178. KeithS proves that he doesn’t know what CSI is:

    I don’t think you’ve told us explicitly what the target of your program is. If olegt’s surmise is correct and your target is a sequence of all 1s, then your program does generate CSI: it finds a single specified target pattern out of a search space containing 2^500 patterns.

    But he does prove my point in comment 161.

    What is it with monkeys and shiny prizes? :razz:

  179. KeithS proves that he doesn’t know what CSI is:

    I don’t think you’ve told us explicitly what the target of your program is. If olegt’s surmise is correct and your target is a sequence of all 1s, then your program does generate CSI: it finds a single specified target pattern out of a search space containing 2^500 patterns.

    According to Dembski, the most up-to-date definition of CSI is:

    Χ = –log2[10^120 · φ_S(T) · P(T|H)]

    Have you calculated Χ and determined that it’s less than 1? How did you choose H, and how did you determine φ_S(T)?

    I can go through the calculation given one way of interpreting Dembski, and it comes out to easily have CSI. But I’d like to see your calculation first.
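    For what it’s worth, the quoted measure is mechanical to compute once φ_S(T) and P(T|H) are chosen; choosing them is the whole argument. A sketch on my reading of the formula:

```ruby
# Dembski's measure as quoted above:
#   Chi = -log2(10^120 * phi_S(T) * P(T|H))
# The caller supplies phi_s and p; those choices are where all
# the disagreement in this thread lives.
def chi(phi_s, p)
  -Math.log2(1e120 * phi_s * p)
end

# e.g. one 500-bit target under a uniform chance hypothesis:
chi(1, 2.0**-500) # 500 - 120*log2(10), roughly 101
```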

  180. R0bb,

    In order to qualify as CSI it cannot be algorithmically compressible. And a sequence of all 1s is algorithmically compressible.

    As I said you guys don’t understand anything ID.

  181. In order to qualify as CSI it cannot be algorithmically compressible.

    Dembski says the opposite. According to his definition of CSI, the more compressible it is, the more CSI it has. Which of Dembski’s works have you read?

    As I said you guys don’t understand anything ID.

    Since Dembski disagrees with you, apparently he’s one of the guys who don’t understand anything ID.

  182. R0bb,

    Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.

    A protein sequence is not compressible- CSI.

    So please reference Dembski and I will find Meyer’s quote.

  183. “Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements.”– Stephen C. Meyer in Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

  184. “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.”–CJYman

  185. Mark Frank is confused. Above I was discussing COMPLEX SPECIFIED INFORMATION and he links to a paper about SPECIFICATION ONLY, in an attempt to refute what I said.

    SPECIFICATION is the S in CSI, Mark.

  186. To Mark Frank- once again I refer you to my examples in 186- none of which are compressible and all of which exhibit CSI.

    How do YOU deal with those facts, Mark? By ignoring them, as usual…

  187. To Mark Frank-

    The “on/off” of a pulsar is specified. However it is NOT complex. Therefore we do not infer design.

    The pattern of a snowflake is also specified. However it too lacks complexity and therefore we do not infer design.

    Page 164 of “The Design of Life”:

    For instance, a sequence of randomly arranged Scrabble pieces is complex without being specified. Conversely, a sequence that keeps repeating the same short word is specified without being complex. In neither of these cases is an intelligence required to explain these sequences.

  188. R0bb,

    I examined Elizabeth’s program closely and I don’t see where she even attempts to calculate CSI in it. So how do you suppose she knows she generated CSI?

    If you can help me turn that into code I’d be more than happy to include it in my program. Maybe we can all learn something.

    Wouldn’t that make a simple marvelous fitness function? The more CSI in a string the more fit it is. Or, the more specified it is, the more fit it is.

    Have you calculated Χ and determined that it’s less than 1?

    Does “less than one” indicate “less” CSI? How much less?

    Dembski says the opposite. According to his definition of CSI, the more compressible it is, the more CSI it has.

    The “it” to which you are referring is the pattern T?

    Given a 500 bit string of 0′s and 1′s and the chance hypothesis the probability of T|H is 1 in 2^500?

    What do you make of Dembski’s absolute specificity?

    –log2 P(T|H)

    Now in my (initial) program, the pattern that all my “winning” strings had in common was a minimum of 450 contiguous 1′s. That’s certainly a restricted subset of all possible 500-character strings.

    But the actual description of a specific pattern is going to basically boil down to the same algorithm, isn’t it?

    n.times { print '1' }

    So in what sense is any one of them more or less compressible? Do they all then have the exact same CSI?
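    The point can be made concrete. The generating description below stays a handful of characters no matter how large n gets, which is the sense in which every all-1s string compresses equally well:

```ruby
# A short description that rebuilds a string of n ones; its own
# length barely depends on n, while the output grows linearly.
def ones(n)
  "1" * n
end

ones(500).length # => 500
```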

    regards

  189. To Mark Frank-

    The most likely explanation is that YOUR interpretation of Dembski is wrong.

    And Mark proves he is clueless:


    To Mark Frank- once again I refer you to my examples in 186- none of which are compressible and all of which exhibit CSI.

    How do YOU deal with those facts, Mark? By ignoring them, as usual…

    Leaving aside the fact I only just joined the debate and he has never referred me to anything,

    Mark, you quoted my comment 186 in your OP. Are you really that slow?

  190. To Mark Frank-

    Yes I have read the paper. Nice of YOU to ignore everything I have said.

    Do you really think ignoring what I say refutes it?

    Pathetic…

  191. Mark Frank:

    Can you then explain what Dembski does mean by specified if he does not mean compressible?

    SPECIFICATION IS NOT CSI. Specification is only one part of CSI- i.e., the S.

    As I said encyclopedias are CSI, Mark. And guess what? Not compressible.

    I accept that you and many other ID proponents define “specified” in such a way that it is incompressible.

    Spoken like a true dolt. You are confused, Mark, as I have never defined “specified” that way. What I do understand is that there is a HUGE difference between specified and CSI.

  192. And Allan Miller chimes in:

    Mung reckons a string of all 1s has high CSI; Joe reckons completely random sequences do (given that they are the least compressible).

    Nope, completely random sequences do not have CSI. As I said you guys ignore what I write and make stuff up.

    Losers…

  193. Yeah, that’s a bad case of misrepresentation.

    The text of Shakespeare is not random. But it is specified.

  194. And I see Allan misrepresent me as well.

    High CSI? What’s that? Low CSI? What’s that?

    How much CSI makes for high CSI and how little CSI makes for low CSI? Lizzie makes the same mistake. No wonder it just gets repeated over there.

  195. Joe:

    Mark Frank is confused. Above I was discussing COMPLEX SPECIFIED INFORMATION and he links to a paper about SPECIFICATION ONLY, in an attempt to refute what I said.

    You might want to inform Dembski that his paper is about specification only. He thinks it’s about CSI:

    For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence” at http://www.designinference.com.

  196. The design inference is CONTEXT specific. For example, 500 1s- if that occurred by someone rolling a die 500 times, recording the result of each roll, then yes I would infer specified complexity existed and therefore design.

    CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI. (if not I am sure kairosfocus, mung, PaV, gpuccio will correct me) and both indicate design

  197. R0bb,

    1- the paper does not exist in isolation

    2- the ENTIRE paper, not just the part Mark/you can quote mine

    Now I have given examples of CSI that obviously counter what you think Dembski is saying. That should tell you something but obviously you no speaky the language…

  198. Joe:

    So please reference Dembski and I will find Meyer’s quote.

    I’m aware of Meyer’s position, and thank you for sharing CJYman’s take, although I’m not sure why I should put stock in it.

    As for a reference, why would you need one when you have already read Dembski’s work? For example, you say that you’ve read the “Specification” paper, so you can easily answer the following questions:

    1) Is specified complexity directly or inversely related to φ_S(T)?

    2) Is φ_S(T) directly or inversely related to compressibility?

    Or you might want to reread the section Specifications via Compressibility.

    Or you can think back to Dembski’s poster child of specified complexity in both The Design Inference and No Free Lunch, namely the Caputo sequence. Is it compressible or not?

    We’ve been over this before, Joe.

  199. R0bb,

    Perhaps you missed my comment:

    The design inference is CONTEXT specific. For example, 500 1s- if that occurred by someone rolling a die 500 times, recording the result of each roll, then yes I would infer specified complexity existed and therefore design.

    CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI. (if not I am sure kairosfocus, mung, PaV, gpuccio will correct me) and both indicate design

    That takes care of the Caputo sequence…

  200. Now I have given examples of CSI that obviously counter what you think Dembski is saying. That should tell you something but obviously you no speaky the language…

    Another prediction fulfilled…

  201. Mark’s confusion continues:

    We were discussing different concepts of specification

    I never was, Mark. I have been specifically talking about CSI, not just specification.

    I never said, thought nor implied specification was not compressible. IOW you really need to seek help…

  202. R0bb:

    I’m aware of Meyer’s position, and thank you for sharing CJYman’s take, although I’m not sure why I should put stock in it.

    When it comes to Intelligent Design cjyman has forgotten more than you know- and he doesn’t forget. :razz:

  203. It looks like Mark Frank has given up on trying to force his misconceptions unto us:

    You are right. I feel a fool for trying.

    No Mark, you are just a fool…

  204. And another clown chimes in:

    I think I can understand the problem here.

    Yeah, Mark messed up. He is conflating mere specification with CSI.

    What Joe and Dembski are both doing is looking at the object in question and deciding whether it was Designed. If the answer is yes, then it must have high CSI, otherwise the CSI must be substantially lower.

    Nope, only evos think that way, and here we have Flint.

    Joe realizes that high compressibility can’t be the measure of specification,

    Joe was talking about CSI, not mere specification, wrt compressibility.

    One moron lights the torch and another jumps in to take it from there.

  205. Joe:

    CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI.

    Where did you get that idea?

    Here’s Meyer, in his famous paper:

    Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity.

    You have even quoted the above yourself.

    Here is Dembski and Wells’ definition of “complex specified information” in the glossary of The Design of Life:

    complex specified information  Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.

    So, again, where did you get the idea that the terms are not synonymous?

  206. R0bb:

    So, again, where did you get the idea that the terms are not synonymous?

    I checked a thesaurus. ;)

  207. R0bb,

    CSI and SC are different manifestations of the same thing.

    And if you read what I said I never said they were not synonymous…

  208. Mung: “And I see Allan misrepresent me as well.”

    And on it goes:

    Mung reckons a string of all 1s has high CSI

    Really? How much CSI is “high” CSI?

    You tell us, sunshine! It’s your (ID’s) bloody concept! You claimed to generate it by making a string of 1s.

    More CSI and less CSI are the sorts of things you folks come up with:

    Which has more CSI, an onion, a cheetah, a blueberry, a mushroom, a human, a lobster, a lichen, a crinoid, or a halibut? And which has more CSI, a western fence lizard or a garter snake?

    Allan:

    You tell us, sunshine! It’s your (ID’s) bloody concept! You claimed to generate it by making a string of 1s.

    And Lizzie claims to have generated CSI. Where were you then?

    I don’t believe I said a string of ’1′s has “high” CSI. I’m not sure I said a string of 1′s has any CSI. I want to know why, if her strings have CSI, mine don’t.

    Allan:

    Rendering CSI a handy term for ‘it’s-a-string’?

    My, that’s even more simply describable than the one I came up with! So yeah, I guess any string of sufficient complexity has CSI.

    In case you jokers haven’t caught on yet, I am mocking Lizzie’s effort to generate CSI and the uncritical acceptance of such by her fan club over there at TSZ.

    She doesn’t calculate the CSI for any of her strings. She doesn’t explain which ones have more or less CSI or why.

    At least I asked R0bb if he was able to assist me in incorporating such a measure into my program. That may be more than Lizzie ever attempted to do.

    http://www.uncommondescent.com.....ent-435866

  209. More Flint:

    The very idea of CSI lacks any real-world referent.

    Computer programs, encyclopedia articles, text books, assembly instructions, genomes- all real world referents of CSI

  210. And Zachriel continues to amuse:

    That means Dembski is treating specified complexity as a quantity, not a boolean.

    You have to see if the quantity is there to qualify as CSI. Once you pass the threshold the rest is irrelevant to the design inference.

  211. Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI.

    Actually, English text is highly compressible as shown empirically, and even more compressible in theory.

    If a compression engine is optimized for N-bit English text, it can theoretically achieve a compression ratio of 1 – (log2 V)/N, where V is the number of N-bit sequences that are valid English text. For non-small N, the vast majority of N-bit sequences are not valid English, which means that (log2 V)/N is very small and the compression ratio is very high.

    The same principle applies to any kind of specification. Whatever various ID proponents understand by the term “specification”, I think we can all agree that the number of “specified” outcomes must be very small in comparison to the whole sample space in order for a specified outcome to be considered special and evidence of design. This means that a compression engine that is optimized for specified outcomes can achieve very high compression ratios.
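    The ratio in that argument is simple to compute. A sketch, with hypothetical numbers just to show the shape of the claim:

```ruby
# Theoretical compression ratio for N-bit text when only V of the
# 2**N possible sequences are valid: 1 - log2(V)/N.
def compression_ratio(n, v)
  1.0 - Math.log2(v) / n
end

# If only 2**16 of the 2**64 possible 64-bit strings were valid,
# a coder matched to that set could shave off three quarters:
compression_ratio(64, 2**16) # => 0.75
```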

  212. Joe:

    R0bb,

    1- the paper does not exist in isolation

    2- the ENTIRE paper, not just the part mark/ you can quote mine

    We can’t quote the entire paper, much less all of Dembski’s work. If you think that Mark or I is guilty of quote-mining, please show us the quote mine and show us something in the context that contradicts our interpretation.

    Now I have given expamples of CSI that obvioulsy counter what you think Dembski is saying. That should tell you something but obvioulsy you no speaky the language…

    I am saying that compressible sequences can be CSI. To counter that, you would have to show that compressible sequences cannot be CSI. Where have you done that?

  213. Joe:

    And if you read what I said I never said they were not synonymous…

    You said that “all CSI is SC but not all SC = CSI” and “CSI and SC are different manifestations of the same thing.” These indicate that the terms are not synonymous. Agreed?

  214. Joe, a summary of your position on various items:

    - This paper is about specification (the S part of CSI) only. Never mind that it talks extensively about complexity (the C part of CSI). And never mind that Dembski says that this paper is about CSI.

    - Compressible sequences do not qualify as CSI. Rolling 500 1s in a row qualifies as specified complexity, but not CSI. But that is not to be construed as saying that “specified complexity” and “CSI” are not synonymous.

    - The works of Shakespeare and encyclopedias are incompressible. Never mind the fact that they are compressed often, and English text is known to be highly compressible.

    Do you disagree with any of that?


  215. Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI.

    Actually, English text is highly compressible as shown empirically, and even more compressible in theory.

    Then why didn’t you do as requested?

    I am saying that compressible sequences can be CSI.

    SC- and it all depends on the CONTEXT just as I said.

    You said that “all CSI is SC but not all SC = CSI” and “CSI and SC are different manifestations of the same thing.” These indicate that the terms are not synonymous. Agreed?

    Disagree.

  216. Compressible sequences do not qualify as CSI. Rolling 500 1s in a row qualifies as specified complexity, but not CSI.

    What is the information in a string of ones? What does it tell me?

    The works of Shakespeare and encyclopedias are incompressible. Never mind the fact that they are compressed often, and English text is known to be highly compressible.

    ALGORITHMICALLY compressible- and I still notice you haven’t done so, just said it.

  217. - This paper is about specification (the S part of CSI) only. Never mind that it talks extensively about complexity (the C part of CSI). And never mind that Dembski says that this paper is about CSI.

    You cannot quote the part that refers to “specification” and have it apply to CSI/SC. It’s that simple.

  218. Zacho:

    Joe is apparently defining SC as a quantity, but CSI as a Boolean.

    Nope.


    If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.”–CJYman

    Dembski’s definition doesn’t involve function.

    CSI does.

  219. If the presence of specified complexity in an object means it was designed, and the presence of complex specified information means it was designed, that would mean they mean the same thing, i.e., they are synonymous.

  220. So close!

    keiths:

    Joe isn’t saying that CSI is a boolean. He’s saying that SC and CSI are commensurable quantities. An SC value below a certain threshold is not CSI, while a value of SC above that threshold is CSI.

    An SI value below a certain threshold is not CSI.

    To me SC is for objects- like Behe’s mousetrap- several components that come together in such a way as to convey some function that is separate from the components themselves.

    And CSI would be for something like the message in “Contact”

  221. If an object exhibits specified complexity then it is also a given that it took CSI to create it, meaning it contains that CSI.

    Encyclopedias exhibit complex specified information, which means they also have specified complexity.

  222. Dembski:

    It is a combinatorial fact that the vast majority of sequences of 0s and 1s have as their shortest description just the sequence itself. In other words, most sequences are random in the sense of being algorithmically incompressible. It follows that the collection of nonrandom [algorithmically compressible] sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.

    Well Joe, I’m wondering if some of the folks over at TSZ aren’t just as skilled at word games as we are. ;)

    It seems to me the point of compressibility is the same point as it’s always been with Dembski and here at UD.

    Dembski:

    To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

    Patterns of small probability. Algorithmically compressible sequences are just one example of such a pattern.

    Do you get the sense that the folks at TSZ think algorithmically compressible sequences are the only small probability sequences?

  223. Note he never says therefore “design”. That is because law/ regularity/ necessity can also produce algorithmically compressible sequences.

    His point is chance cannot produce algorithmically compressible sequences.

    It still all depends on the context. If you have an algorithmically compressible sequence, you have ruled out chance; by default, chance is out.

  224. Earth to Zachriel- when cjyman said:

    “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information”, it does not mean that every instance of CSI has to be like that. He is saying if that is what you have then you have CSI.

    Is English not your first language?

  225. Moar Dembski:

    The basic intuition I am trying to formalize is that specifications are patterns delineating events of small probability whose occurrence cannot reasonably be attributed to chance.

    Not limited to algorithmically compressible sequences.

    With such patterns, we saw that the order in which pattern and event are identified can be important. True, we did see some clear instances of patterns being identified after the occurrence of events and yet being convincingly used to preclude chance in the explanation of those events (cf. the rejection regions induced by probability density functions as well as classes of highly compressible bit strings — see sections 3 and 4 respectively). Even so, for such after-the-event patterns, some additional restrictions needed to be placed on the patterns to ensure that they would convincingly eliminate chance.

    Are we and TSZ just talking past one another? Is there really some point of fundamental disagreement here that I am just not grasping?

    I see two possible ways to interpret Dembski here.

    1. These patterns are specifications. In which case, we need specification plus something else.

    2. These patterns do not yet qualify as a specification.

    WmD.

    specifications, as I indicated in section 1, are supposed to be patterns that nail down design and therefore that inherently lie beyond the reach of chance.

    Perhaps there’s another interpretation that makes sense. Maybe that’s the one that Frank et al. are working from. I guess I’ll shut up now and see what they have to say.

  226. Zachriel:

    Shorter Mung: Compressible, might be CSI.
    Shorter Joe: Non-compressible, required for CSI.

    :)

    If you’ve known me for any length of time, you know that I don’t have any problem disagreeing with other people here at UD. Heck, I even disagreed with Meyer. If I don’t like what Dembski has written I’ll disagree with him, lol.

    Am I correctly interpreting Dembski?

    If a sequence is algorithmically compressible that does not auto-magically make it a specification.

    If Joe says he disagrees with me, so be it. People learn through disagreement. I don’t see it as a horrible bad thing.

  227. p.s. mine’s shorter. though i’m not sure i should be bragging about that on the internet.

  228. Zachriel,

    I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.

  229. Patrick, aka MathGrrl, whines:

    Asking nicely for a definition and some example calculations worked so well the last time it was tried, after all.

    More revisionist history.

    “MathGrrl” appealed to ev. Tom Schneider, creator of ev, claims to have used it to generate CSI. Patrick had nothing to say about that.

    Elizabeth Liddle posted that she was writing a program to generate CSI. Did Patrick ask her what definition she was using and an example calculation?

    Now if Tom Schneider and Elizabeth Liddle understood the definition of CSI and how to calculate it well enough to write programs to demonstrate it could be generated, “MathGrrl’s” complaints ring hollow and Patrick is just whining.

  230. Zachriel:

    This is easy to resolve, though. Just provide a clear and unambiguous definition of CSI.

    What definition was Lizzie using, and where were you in that thread?

    http://theskepticalzone.com/wp/?p=576

  231. Mung at 234,

    You have beat them over the head so many times with their own use of the words and concepts they claim not to understand, one would think they’d eventually become embarrassed about saying they don’t understand them.

    But then again, it’s Patrick, so there’s an explanation in itself. He has a family to protect from those lying Christians. Demonstrating pseudo-intellectual irrationality is hardly too much to ask in comparison.

  232. Thanks mate!

  233. petrushka on October 9, 2012 at 3:36 pm said:

    The simple fact is you can’t do a useful probability calculation without assuming something about the history and context of the sequence.

    Do probability calculations enter into the determination of “Shannon information”?

    IOW, to calculate the “amount of information” in Shannon terms, what must be either known or assumed?

  234. Mike Elzinga:

    Does anyone here think it would do any good to show him the formula?

    wow. just wow. Assume I have access to the formula.

    Shannon’s paper is, after all, available online. And “the formula” has been reprinted and discussed in many books since then.

    Do you think Shannon information can just be read off any old sequence? How much “Shannon information” is in the following sequence: 00101

    In order to calculate the “amount of information” in that sequence in Shannon terms, what did you either know or assume?
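    The question answers itself once a source model is written down. A minimal sketch (my illustration, not anything from Shannon's paper; it assumes a memoryless source with equiprobable symbols, which is exactly the assumption the question is probing):

```ruby
# Self-information of a sequence, ASSUMING every symbol is independent
# and equiprobable over an alphabet of the given size. A different
# assumed source model gives a different number.
def self_information_bits(seq, alphabet_size)
  seq.length * Math.log2(alphabet_size)
end

puts self_information_bits("00101", 2) # 5.0 bits under that assumption
```

    Change the assumed alphabet or probabilities and the "amount of information" in the same five characters changes with them.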

  235. So tell us, Mike,

    If someone tells you that in a sequence of 500 0′s and 1′s there are 500 bits of Shannon Information, would you believe them, and why?

    petrushka?

  236. Allan Miller on March 16, 2012 at 4:44 pm said:

    Ido:

    The individuals with highest fitness have 44 HHHT 59 HHHHT and 5 HHHHHT. That seems to be a local maximum where the population gets stuck.

    Allan:

    Crying out for inversion, gene conversion, an alignment mutation and/or a transposon!

    More Intelligent Design please! Fine Tuning anyone?

  237. Upright BiPed,

    You’ll love this.

    Patrick in the Creating CSI with NS thread at TSZ:

    http://theskepticalzone.com/wp.....mment-8289

    http://theskepticalzone.com/wp.....mment-8290

    I need to go back and review, lol. No telling what I’ll find. I sure hope he’s not trying to generate CSI. =P

    And then there’s R0b:

    http://theskepticalzone.com/wp.....mment-8363

    Sorry, got to run and see if his code is still there before he can delete it. I want to see that CSI calculation!

  238. Joe:

    Compressible sequences do not qualify as CSI. Rolling 500 1s in a row qualifies as specified complexity, but not CSI.

    What is the information in a string of ones? What does it tell me?

    Prior to the rolls, you didn’t know what the result would be. After the rolls, you knew that the result was a sequence of 500 1s. That’s the information you learned. Your uncertainty was reduced by about 1300 bits.
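    The arithmetic behind that "about 1300 bits" (a quick check I'm adding; it assumes 500 independent rolls of a fair six-sided die):

```ruby
# 500 rolls of a fair six-sided die have 6**500 equally likely outcomes,
# so learning the exact sequence resolves 500 * log2(6) bits.
bits = 500 * Math.log2(6)
puts bits.round(2) # 1292.48, i.e. "about 1300 bits"
```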

    The works of Shakespeare and encyclopedias are incompressible. Never mind the fact that they are compressed often, and English text is known to be highly compressible.

    ALGORITHMICALLY compressible, and I still notice you haven’t done so, just said it.

    Yes, algorithmic compression. What kind of compression did you think I was talking about? Hydraulic?

    Are you seriously doubting the compressibility of Shakespeare and encyclopedias? Okay, I downloaded the works of Shakespeare and an encyclopedia, and I compressed them with PAQ on level 8. I got 80% compression for Shakespeare and 79% for the encyclopedia. You’re welcome to reproduce these results.
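    R0bb's PAQ experiment can be approximated at home with Ruby's built-in DEFLATE (a far weaker compressor than PAQ, so the ratios are conservative; the three inputs below are my stand-ins, not R0bb's files):

```ruby
require 'zlib'

ordered = "1" * 500                                          # highly ordered
english = "To be, or not to be, that is the question. " * 12 # repetitive text
random  = Random.new(42).bytes(500)                          # patternless bytes

[ordered, english, random].each do |s|
  z = Zlib::Deflate.deflate(s, Zlib::BEST_COMPRESSION)
  puts format("%4d bytes -> %4d bytes", s.bytesize, z.bytesize)
end
```

    The ordered and repetitive inputs shrink dramatically; the random bytes barely shrink at all (DEFLATE may even add a few bytes of overhead).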

  239. R0bb:

    I would just like to clarify my views about compressibility, which I have already expressed in the thread.

    First of all, I am aware that Dembski considers compressibility as a form of specification. He may be right, but very simply I have never considered it as a form of functional specification in my discussions about biology. In particular, compressibility is not a function we can observe in any special way in the biological world. The functional specification for proteins and other biological molecules derives from what they can do, not from the fact that they can be compressed (indeed, biological molecules are not especially compressible at all).

    So, maybe compressibility can be considered as a form of specification, but that is not relevant for biological discussions.

    But there is another aspect of compressibility that is of relevance to any discussion about CSI or dFSCI. If the observed string is compressible, we must always consider the possibility that it came into existence in the system we are considering in an indirect way. IOWs, we have two “chance” explanations to consider:

    1) The string was generated by RV directly

    2) A simpler system was generated by RV directly, and then generated the observed string by a necessity mechanism.

    The second scenario is the one where an algorithm that can compute the solution is generated by RV. I have discussed that scenario in detail about Lizzie’s algorithm.

    As the second scenario would still be a generation of the solution by RV, even if indirectly, it must be considered, and its complexity evaluated. But, in the second scenario, the complexity to be evaluated is the complexity of the algorithm (in the case of Lizzie’s example, the complexity of the simplest executable string that can output the solution). If the complexity of the algorithm is lower than the complexity of the observed string, that will be the dFSI of the string. Otherwise, the dFSI of the string remains the complexity of the string itself.

    So, if you have a string of 500 1s, its direct complexity is 500 bits, its indirect complexity will be the simplest executable program that, in the system we are considering, will output 500 1s. If you can write an executable string that does that, and is less than 500 bits, the complexity of that string becomes the dFSI of the original string, because the new string is a compression of the original string, and still can generate it in the system.

    So, if we want to apply that to the works of Shakespeare, you can reduce the functional complexity of the original observed string (the works of S themselves) by calculating the total complexity of:

    a) The compressed string that you obtained

    +

    b) The software that can expand it into the original observed string.

    In the end, I believe we can safely affirm dFSCI for the works of Shakespeare anyway.

    But, if you can find a way to generate Hamlet in a system through a functional complexity of less than 500 bits, please let me know.
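    The 500-1s case lends itself to a concrete toy check (my sketch; the units are deliberately loose, the point being only that the generating expression is far shorter than the string it generates):

```ruby
# The observed string: 500 1s, a 500-character "direct" description.
target = "1" * 500

# An "indirect" description: a 9-character Ruby expression that
# regenerates the string when executed.
program = '"1" * 500'

puts target.bytesize         # 500
puts program.bytesize        # 9
puts eval(program) == target # true: the program outputs the string
```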


  240. What is the information in a string of ones? What does it tell me?

    Prior to the rolls, you didn’t know what the result would be.

    So 500 1s alone do NOT tell me anything- I need to have information about the entire process. Got it.

    BTW can I see those alleged algorithms?

  241. R0bb:

    A few more comments, just to make it more clear:

    a) Compressibility as specification.

    Indeed, compressibility can be used to specify, just like any other property.

    Specification is not a narrow concept. Anything that objectively qualifies a subset of a search space is a specification. So, if my search space is made of 1000 objects, and 10 of them are red, being red is a specification that objectively qualifies a subset of the search space. The complexity of the specified subset will be, as usual, 10/1000, that is 10^-2.

    Highly compressible strings are, as Dembski says, a small subset of all possible strings. By defining the length of a string, and the degree of compressibility, we can probably calculate the maximum specific complexity of some specific subset of compressible strings.
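    gpuccio's red-objects example, worked through in a couple of lines (my arithmetic; expressing the same ratio in bits is an addition of mine):

```ruby
# 10 "red" objects in a search space of 1000: the specified subset has
# probability 10/1000 = 10^-2, equivalently about 6.64 bits.
target_space = 10
search_space = 1000

ratio = target_space.fdiv(search_space)
bits  = -Math.log2(ratio)

puts ratio         # 0.01
puts bits.round(2) # 6.64
```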

    b) Compressibility as a possible result of necessity mechanisms.

    Now, let’s say that an observed string is specified, for instance because it is functional, or even because it is compressible. As I have stated many times, it is not enough to compute the maximum functional information in that string (the ratio of the target space to the search space). We also have to consider whether any known necessity mechanism can explain what we observe, completely or in part.

    Now, in many cases, it will be clear that no known mechanism is available. That is rather obvious for most human complex dFSI, such as Hamlet or a complex software.

    But highly compressible strings are different: they can often be generated by necessity mechanisms, so we should be extremely cautious when evaluating the dFSI of such a string.

    For example, 500 heads looks like a specified (because highly compressible) string, and its maximum complexity is 500 bits. But such a string can easily be generated by the tossing of an unfair coin that can only give heads as a result. That would be a necessity mechanism that can completely explain the string. If such a mechanism is possible in the system we are considering, then the dFSI of the string becomes zero.

    Another example. A DNA sequence of 500 thymidines could appear specified (because highly compressible). But it can easily be generated in a system where only thymidine, and no other nucleotide, is available.

    That’s why compressibility, while being a possible way to specify a subset, should be considered with extreme caution when we try to evaluate dFSI for that subset. Compressibility is often a sign of a simple necessity explanation.

  242. Zachriel:

    We only gave the thread a cursory view, but from what we did read, it was presumably Dembski’s definition; a long sequence which has a simple description (the function), but is unlikely due to chance alone (a uniform probability distribution).

    Please reference Dembski stating that, because I have provided CSI that is not that.


    gpuccio: As I have stated many times, it is not enough to compute the maximum functional information in that string (the ratio of the target space to the search space). We also have to consider whether any known necessity mechanism can explain what we observe, completely or in part.

    Zachriel:

    So your answer to Dembski’s rhetorical question, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” is no.

    Nope, that does NOT follow from what he said.

  243. Allan Miller:

    So there can never be translated strings that do not have dFSCI, whatever they contain, since they go through the ribosome/mRNA/tRNA/aaRS system!

    If the translated strings do not function then I would say they do not have dFSCI- no “F”. And we see that with genetic engineering- some, or even most, times the transplanted gene gets translated but the protein does not form. And all you have is an unfolded, functionless polypeptide.

  244. Joe:

    I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.

    Yes, you said:

    In order to qualify as CSI it cannot be algorithmically compressible.

    But then you said:

    “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information”, it does not mean that every instance of CSI has to be like that. He is saying if that is what you have then you have CSI.

    So you’re denying that CJYman’s criteria, including incompressibility, are requirements for something to be CSI.

    So which is it? Is incompressibility a requirement for something to be CSI, or is it not?

  245. gpuccio!

    welcome back.

  246. To Allan Miller (at TSZ):

    He makes a good point about the relevance of compressibility to biological strings, though then rather blatantly ‘smuggles in’ a function relating to the existence of the transcription/translation system.

    Are you referring to me here? Where did I “smuggle in” that?

    So there can never be translated strings that do not have dFSCI, whatever they contain, since they go through the ribosome/mRNA/tRNA/aaRS system!

    Just to be clear: if we are considering a System where the transcription and translation apparatus already exist (that is, if our scenario is the emergence of new proteins after OOL and LUCA), then we will not consider the complexity of those things (they are already part of the System, and they are available). We will only analyze the functional complexity of the new protein, given the transcription and translation apparatus, and all other functionalities already present in the cells where the new protein originates.

    But, if we are debating OOL, then the whole complexity of the minimal known reproducing beings should be taken into consideration.

    As I have tried to explain many times (apparently without great success) computations of dFSCI are never made abstractly, they are always made with explicit reference to a System, a Time Span, and so on (see my detailed discussion in my previous thread, entirely pasted at TSZ).

  247. To Shallit (at TSZ):

    Yes, it’s clear that Dembski and most ID advocates are quite confused about the relationship between Kolmogorov complexity and the bogus concept of CSI. In my paper with Elsberry we point out that Dembski associates CSI with low Kolmogorov complexity (highly compressible strings). But strings with low Kolmogorov complexity are precisely those that are “easy” to produce with simple algorithmic procedures (in other words, likely to occur from some simple natural algorithm).

    You are absolutely right on this point.

    By contrast, organismal DNA (for example) doesn’t seem that compressible; experiments show long strings of organismal DNA are often poorly compressible, say only by about 10% or so. This is, in fact, good evidence that organismal DNA arose through a largely random process.

    Right again. That’s exactly what I have tried to say here.

  248. R0bb,

    Can you provide the alleged compression algorithm or not? What was compressed, exactly?

  249. To Keiths (at TSZ):

    It’s even worse than that. An object can conform to multiple specifications, in which case it simultaneously possesses multiple CSI values, all equally valid:

    That is perfectly true, but there is nothing bad about it. I stated that point very clearly in my definition of dFSCI: functional complexity is always computed for an explicit functional specification.

    I have also offered many times the example of a tablet computer: it can be specified as a paperweight (a perfectly valid function), but its functional complexity for that function will be very low. Or it can be specified as a computer capable of many explicitly defined functions, and its functional complexity for that specification will be extremely high.

    I am glad that you understand correctly this point.

  250. To Zachriel (at TSZ):

    So your answer to Dembski’s rhetorical question, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” is no.

    I would simply say, again, that we have to define the System where the object arose, the Time Span, and we must know enough about those things to be able to reason about the object and its origin.

    I don’t think that is in real contrast with Dembski, if we consider that Dembski is assuming the whole known universe and known physical laws as his System, and the time from Big Bang to now as his Time Span. IOWs, Dembski is trying to answer questions about the possible emergence of life in the universe.

    My approach is different. I usually ask questions about the possible emergence of protein domains after OOL. Or, alternatively, about the possible emergence of the basic life system in LUCA. I usually prefer the first scenario, because we have more details about it.

  251. All Dembski is saying is: if we did not observe the thing arising, can we still determine whether or not it was designed? And the answer is clearly YES.

  252. I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.

    That seems to contradict what you just said.

    Maybe to you.

    Is CSI necessarily non-compressible?

    It all depends on how you are defining “compressible”.

  253. To Zachriel (at TSZ):

    Well, that establishes that there are conflicting definitions.

    And so? That is good evidence of intellectual vitality and non dogmatism in the ID field!

  254. To Keiths (at TSZ):

    About your thread regarding common descent. I must disagree with you.

    Being clearly a member of your “third group”, I must say that I don’t see any of the difficulties you describe, which derive only from your preconceived assumptions about how the biological designer would act.

    I have none of those preconceptions, and I judge from evidence. IMO, evidence is clearly in favour of a designer who acts with all the obvious constraints created by physical laws, and who is not acting as an omnipotent dictator.

    Reuse of existing hardware and software is extremely common in human design, and rather obvious in biological design. That said, I would say that there are however many examples in natural history that are best explained as sudden design explosions, and where the reuse of existing design, while present, is overwhelmed by the sudden emergence of novelty: OOL and the Ediacara and Cambrian explosions are the best known examples of that.

    So, even if you say, with your usual arrogance:

    If you are still an IDer after reading, understanding, and digesting all of this, then it is safe to say that you are an IDer despite the evidence, not because of it. Your position is a matter of faith and is therefore a religious stance, not a scientific one.

    I must answer that “digesting all of this” has not changed a comma in my scientific embrace of ID theory.

  255. Mung:

    Thank you!

  256. To Shallit (at TSZ):

    Just a correction. I agree with all that you say, except obviously the last phrase, that I included in the quote by mistake:

    “This is, in fact, good evidence that organismal DNA arose through a largely random process.”

    I obviously don’t agree with that. Indeed, I believe quite the opposite: the fact that, as you say, “organismal DNA… doesn’t seem that compressible”, and that it is however highly functional, is good evidence that it did not arise through any random or algorithmic process, but through design.

  257. It’s even worse than that. An object can conform to multiple specifications, in which case it simultaneously possesses multiple CSI values, all equally valid

    Yup, a swiss army knife, almost anything from RONCO or Popeil- is keiths saying that these things are not designed?

    How does that work- Seeing that anything designed can be used for more than one thing, they are not designed?

    How can we get these guys to testify in the next trial?

  258. To OM:

    The equation is in the post R0bb made here

    My values can be found here

    Have fun…

  259. Where do these guys come up with their nonsense:

    By analogy, all of Lizzie’s organisms have high dFSCI because they run inside a complex, designed computer, and are handled by a program that is more complex than they are

    No, Lizzie’s organisms don’t have any dFSCI because they don’t do anything- they don’t have a function and possess no information.

  260. So, gpuccio, now that you’re back.

    The subject of recombination and protein domains was previously raised over at TSZ (Allan Miller, iirc).

    Thought you might like to take a look at this paper:

    PLOS ONE: Are Protein Domains Modules of Lateral Genetic Transfer?

  261. Zachriel:

    It means people can be discussing CSI, but referring to different things entirely.

    Not likely but if you were one of those two people then anything is possible.

  262. petrushka with her lie of the day:

    CSI in its ID garb, cannot exist if there is a natural process that can generate the structure in question.

    THAT despite everything we have said…

  263. OMTWO on October 10, 2012 at 7:15 pm said:

    Once again I ask you for that pseudocode. I believe I can build a program that will output what would commonly be known as CSI. I’m asking you how I could go about checking that output? How can I measure that “CSI”?

    lol

    See here:

    http://theskepticalzone.com/wp/?p=576

    Code here:

    http://theskepticalzone.com/wp.....mment-8121

    Let us know when you find the CSI calculation.

  264. Hi Mung,

    I haven’t really been following the discussion, but since we’re talking about EA’s, I have an idea for one:

    1) generate an initial config space
    2) iterate and randomize the config space
    3) at each iteration compile the config space
    4) test whether compilation (produces an object file) fails or succeeds

    The EA runs relative to function rather than a target string.

    Let’s see how far the function goes with respect to the given config space.

    Let me know what you think.

  265. To Zachriel (at TSZ):

    It means people can be discussing CSI, but referring to different things entirely.

    Or to slightly different aspects of the same thing. Or to different definitions of similar concepts. That is how cognition grows.

    The basic concept of CSI is very simple and intuitive: how complex must an object, or a string in the digital case, be to express some objectively defined function, or property? And then how unlikely is it that such an object or string can arise by RV? Or can the complexity be only apparent, and the Kolmogorov complexity be really low?

    These simple points are treated in different ways according to contexts and to different people. But the fundamental concept remains: complex functions require specific complexity to be implemented, and that specific complexity can be measured.

    It’s mainly the obstinate resistance of people committed to materialistic reductionism that tries to confound the issue. They probably know all too well that CSI is deadly to their beliefs, and would argue any possible thing to evade the concept.

    All the discussion about compressibility, indeed, although interesting, is completely irrelevant to the biological context. Biological strings are scarcely compressible. They are the kind of strings that formally appear “pseudorandom”, except for the fact that they convey a specific function. Exactly the kind of object that allows, with the greatest safety, a design inference.

    Indeed, even the algorithm issue is irrelevant, in the end. We all know very well that no algorithm can explain protein sequences. The only historical proposal is exactly classical neo darwinism, which derives some minimal power from the existence in the System of complex biological replicators competing for environmental resources, the one and only source of the effect usually called “natural selection”. But that RV+NS algorithm completely fails to explain almost all biological functional complexity, for the reasons many times discussed.

    Any debate about compression or other possible necessity algorithms is mere distraction, a real and fundamental strawman. Compression and other possible algorithms have no role in biological systems (with the only possible exception of adaptational algorithms already embedded in the genome).

    In the end, only design can explain what we observe. The only logical attempt to deny that fact has been classical neodarwinism, and the RV+NS algorithm. If it fails (and it does fail!), what remains is either design or complete mystery.

    You are free to choose to stick to mystery. As for me, I have my reasonable scientific explanation.

  266. Hi computerist,

    I’ve considered doing something like that with Ruby, which does not need to be compiled to object code.

    You can generate a string and then attempt to execute it as Ruby code and see if it fails, from within a running program.

    eval "some string of code"

    Another idea would be to see if it could be executed as an operating system command.

    exec "some command"
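    The eval idea can be turned into a runnable toy experiment in the spirit of computerist's compile-test EA step. This is my own illustration (the alphabet, string length, and seed are arbitrary assumptions), not Mung's actual code:

```ruby
# Generate short random strings over a tiny, harmless arithmetic
# alphabet and count how many execute as valid Ruby. "Function" here
# just means "evaluates without raising". Note: eval-ing arbitrary
# strings is unsafe outside a toy alphabet like this one.
alphabet = %w[1 2 + - ( )]
rng      = Random.new(42) # fixed seed for reproducible runs
trials   = 1_000

valid = trials.times.count do
  candidate = Array.new(5) { alphabet.sample(random: rng) }.join
  begin
    eval(candidate)
    true
  rescue SyntaxError, StandardError
    false # failed "compilation"/execution counts as non-functional
  end
end

puts "#{valid}/#{trials} random 5-character strings were executable"
```

    The functional strings form a small minority of the space, which is the point of running the count.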

  267. Shallit says:

    By contrast, organismal DNA (for example) doesn’t seem that compressible; experiments show long strings of organismal DNA are often poorly compressible, say only by about 10% or so. This is, in fact, good evidence that organismal DNA arose through a largely random process.


    gpuccio: Biological strings are scarcely compressible.

    and now Zachriel:

    That is not quite correct. While standard compression routines, such as those suitable for compressing text or pictures are not effective, it is still possible to compress biological sequences.

    Adjeroh & Nan, On Compressibility of Protein Sequences, Proceedings of the Data Compression Conference 2006.

    You and Shallit need to get on the same page there, Zacho

  268. To Zachriel (at TSZ):

    My point was, and is:

    Biological strings are scarcely compressible.

    I never said they are not compressible at all.

    I need not remind you that a highly compressible sequence is something completely different.

    Let’s consider, for example, a sequence of 10^9 1s. It would have a “natural” complexity of 10^9 bits (quite a value!). But I believe that it can be compressed by some very simple algorithm, of much lower complexity. That would be a highly compressible sequence.

    [Set up counter to 0
    Write 1
    Increment counter and compare to 10^9
    Loop till count is met.
    Print string. KF]

    Biological sequences are scarcely compressible, for their intrinsic nature. They certainly have a few regularities, that can account for some compressibility, but they are certainly not in the range of “ordered” sequences, that can be outputted by some simple computation.

    As I commented about Hamlet, you can certainly compress the text somewhat, but you would still need the compressed sequence plus the decompressing algorithm to get Hamlet. Do you really believe that those entities could arise in some random system?

  269. Onlookers:

    Sometimes, we need to go back to basics to clear an atmosphere of the fog from burning strawmen, in order to get back on track.

    Step 1: Config spaces, W:

    The idea here is that a given set of components put together in a system (down to atoms if necessary) can be arranged or scattered in a large number of possible ways, W. This traces to ideas in statistical mechanics and to phase space, but we are more concerned with position and orientation and coupling than with momentum.

    Next, think about an exploded diagram of a system — say, a Cardinal Spinning reel — and the requisites for putting it together right in order to work. Parts have to fit together and be arranged and coupled in a fairly restricted number of ways, if something is to function. We can define a particular arrangement as an event or occurrence E, and we can cluster those that work under the restrictions of requisites of function, T.

    Thus we see a zone of function or island of function, T within a wider space of possible configs, W.
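    The size of such config spaces W explodes combinatorially even for modest part counts. A toy sketch (counting orderings only; position, orientation and coupling for each part multiply W much further, so these are drastic underestimates):

```python
import math

# Orderings alone of n distinct parts. Each part's position, orientation
# and coupling multiply the space W far beyond these figures.
for n in (10, 50, 100):
    print(f"{n} parts: about 10^{len(str(math.factorial(n))) - 1} orderings")
```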

    Step 2: Dembski’s first models, in NFL

    As the IOSE notes here, in NFL pp. 148 and 144, Dembski discussed (in a work that was published by Oxford and previously was essentially his Doctoral work in the field, so we can be reasonably confident that it passed serious scrutiny by peers of scholarship twice):

    p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity [[cf. here below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

    p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

    What Dembski does here is to insert another common premise from the world of stat mech: the idea that, taking states as equiprobable in the absence of other info that biases choice of config, isolating states E in zones T to 1 in 10^150 of W is sufficient to secure something in T from any reasonable chance of being found by chance and/or mechanical necessity.

    He is also generalising from the context of functional specificity to the pure idea of being in a narrow, isolated Zone, T.

    This is one gateway used to inject all sorts of confusing or dismissive distortions.

    So, let us note that he is quite plain that in the biological world, specification pivots on function. Whether or not any particular way to set up this zone T succeeds in one’s estimation should be kept separate from the point that there is such a reasonable concept as an isolated zone T in a field of possibilities W, observable on some reasonable criterion such as: T is the cluster of possibilities that works in some definite way.

    Similarly, I have seen huge debates on how to define and calculate probabilities exactly.

    This is not needed. We know, or should know, that chance-based random sampling of a population, of reasonable scope, will normally capture the bulk and miss special isolated zones. We even have a law of large numbers to that effect in statistics.

    If we are searching by random processes or uncorrelated mechanical necessity without guidance on where T is, unless we have a sufficiently large sample, we are apt to miss such special, isolated zones. Indeed, there is a whole province of statistical testing that pivots on that tendency to be in the bulk, not special zones such as tails. (The difference here is that the tails or special zones are isolated to 1 part in 10^150.)
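    The bulk-vs-special-zone point reduces to one line of arithmetic: the chance that n blind samples all miss a zone occupying fraction f of the space is (1 - f)^n. A sketch with illustrative f (the 1 in 10^150 isolation discussed in the text is far beyond any feasible n):

```python
# P(all n blind samples miss a special zone of relative size f) = (1 - f)^n.
# f here is illustrative; the text's zones are isolated to 1 part in 10^150.
f = 1e-6
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"n = {n}: P(miss) = {(1 - f) ** n:.3g}")
```

With a thousand samples the special zone is almost surely missed; only when n approaches 1/f does hitting it become plausible.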

    Step 3: 1 part in 10^150

    Elsewhere, I have discussed how, on the gamut of our solar system’s 10^57 atoms and 10^17 s, where the fastest chemical reactions take up about 10^30 Planck times, the space of possibilities for 500 bits is such that the number of possible observations or search steps by the solar system’s atoms would amount to pulling one straw-sized sample from a cubical haystack as thick as our galaxy, about 1,000 LY. Overwhelmingly, such a sample will reliably pick up straw, not anything else.

    Going up to the scale of the observed cosmos, 1,000 bits more than suffices to isolate zones T to 1 in 10^150. That is, the number of Planck-time atomic states for the 10^80 or so atoms in the cosmos stands as roughly 1 in 10^150 of the space of possibilities.

    Converting into bits, 10^150 is roughly 500 bits worth of possibilities, and 1,000 bits is roughly 10^301 possibilities. Where ASCII text strings use 7 bits per symbol, 500 bits is just short of 72 ASCII characters, and 1,000 bits is about 143 characters (hence the limit on a Tweet).
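    The rough conversions above are easy to verify (a quick sketch of the arithmetic):

```python
import math

bits_150 = 150 * math.log2(10)      # about 498.3: 10^150 is "roughly 500 bits"
digits_1000 = 1000 * math.log10(2)  # about 301.0: 1,000 bits is roughly 10^301
chars_500 = 500 / 7                 # about 71.4: "just short of 72" ASCII characters
chars_1000 = 1000 / 7               # about 142.9: "about 143" characters

print(round(bits_150, 1), round(digits_1000, 1),
      round(chars_500, 1), round(chars_1000, 1))
```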

    We notice that we routinely produce text in English of 72 to 143 or more characters. We do so informationally and intelligently, not by blind chance and/or mechanical necessity. Indeed, we would dismiss as absurd the notion that text in this blog thread was produced by lucky noise on the machinery of the Internet. For obvious reasons.

    Now, also, anything that can be described as a collection of nodes and arcs, can be reduced to a cluster of descriptive strings, which can be concatenated, so — as AutoCAD shows — discussion on strings is WLOG.

    We have of course abundant evidence that functionally specific, complex organisation and/or information — FSCO/I — is routinely and only observed as the product of design. This is important, as it is an inductive generalisation on billions of cases in point, backed up by an analysis as above as to why this is so.

    We must bear this in mind as we examine the tilting at windmills mistaken for giants, the strawman tactics and objections.

    Step 4: What about genetic algorithms and other forms of incremental climbing of Mt Improbable?

    The key observation is that such things are based on intelligently designed algorithms. Were such a program to be constructed de novo from statistical noise captured by a computer, we might have something to boast of, but this is not the case, for reasons directly tied to the just above. I doubt that any GA program is less than 72 ASCII characters long.

    Similarly, we observe that such programs depend on some form or another of incremental hill climbing off the performance of a well-behaved fitness function that leads up to a peak zone or one of a cluster of linked peak zones, so the step size can be small and the uphill trend keeps one pointed peak-wards on the whole.

    But the above makes it plain that most of the field of possibilities for a multipart functional entity of reasonable scope will not be like that. For most of W, functionality = 0, and there are no trends that, on the whole, point one uphill. So, as soon as we are in a zone that has that uphill-pointing aspect, we are already within an island of function T, where all steps Ei –> Ej will have some functional character on both legs and we can reward desirable increments, individually or on a population-of-samples basis, say best of 100 or 1,000 or the like.
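    The dependence on a designer-supplied, well-behaved fitness function can be seen in a toy hill-climber (a hypothetical Weasel-style sketch: the target, alphabet and fitness function are all supplied by the programmer, i.e. the search begins inside an island of function T with a smooth uphill gradient):

```python
import random

random.seed(1)  # deterministic run for illustration

TARGET = "METHINKS IT IS LIKE A WEASEL"   # designer-supplied target zone
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Well-behaved, designer-supplied fitness: count of positions matching TARGET.
    # Across most of a raw config space W there is no such smooth gradient.
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(current) < len(TARGET):
    i = random.randrange(len(TARGET))
    candidate = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if fitness(candidate) >= fitness(current):  # keep non-negative increments
        current = candidate

print(current)
```

The climb succeeds in a few thousand single-character steps, but only because the fitness function rewards every partial match; delete that gradient and the same mutation loop wanders blindly.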

    The real problem, however, is not to move to a peak within T, but to find T in the first place: to search a space W that overwhelms possible search resources, without intelligent guidance.

    In short, a big question is being begged, and the problem posed is being strawmannised by objectors. They are so used to working inside T that they do not see the problem of the much wider space W, and the challenge to find T.

    BTW, that is exactly why I have insisted on a molecules to man frame for the 6,000 word blind watchmaker thesis essay challenge. At OOL, there is no existing von Neumann code based self replicating mechanism to appeal to; it too needs to be explained as an instance of FSCO/I which is patently irreducibly complex. (The resulting ducking, dodging, mischaracterisation, denigration, thread vandalism etc etc speak volumes on this challenge.)

    Step 5: What about CSI?

    Now of course Dembski generalised the zone T, and has sought to provide a generic model. The success/failure of such attempts should be understood relative to the above, not by twisting them into pretzels, as is altogether too common.

    If you think Dembski has failed to capture the framework above, fine, show that, and suggest ways that he could better do so. Do not pretend that an extension can be criticised, so the underlying issue can be dismissed.

    Instead of going to town on whether his mathematical model of 2005 is correct relative to the above, or can be twisted into pretzels, let us first show what it is trying to do, and then go about simplifying it for use. Years ago in response to a challenge by MF in his blog, I presented the following which is in the UD weak argument correctives, no 27:

    27] The Information in Complex Specified Information (CSI) Cannot Be Quantified

    That’s simply not true. Different approaches have been suggested for that, and different definitions of what can be measured are possible.

    As a first step, it is possible to measure the number of bits used to store any functionally specific information, and we could term such bits “functionally specific bits.”

    Next, the complexity of a functionally specified unit of information (like a functional protein) could be measured directly or indirectly based on the reasonable probability of finding such a sequence through a random walk based search or its functional equivalent. This approach is based on the observation that functionality of information is rather specific to a given context, so if the islands of function are sufficiently sparse in the wider search space of all possible sequences, beyond a certain scope of search, it becomes implausible that such a search on a planet wide scale or even on a scale comparable to our observed cosmos, will find it. But, we know that, routinely, intelligent actors create such functionally specific complex information; e.g. this paragraph. (And, we may contrast (i) a “typical” random alphanumeric character string showing random sequence complexity: kbnvusgwpsvbcvfel;’.. jiw[w;xb xqg[l;am . . . and/or (ii) a structured string showing orderly sequence complexity: atatatatatatatatatatatatatat . . . [The contrast also shows that a designed, complex specified object may also incorporate random and simply ordered components or aspects.])

    Another empirical approach to measuring functional information in proteins has been suggested by Durston, Chiu, Abel and Trevors in their paper “Measuring the functional sequence complexity of proteins”, and is based on an application of Shannon’s H (that is “average” or “expected” information communicated per symbol: H(Xf(t)) = -∑ P(Xf(t)) log P(Xf(t)) ) to known protein sequences in different species.
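    Shannon's H, on which Durston et al. build, is straightforward to compute for a column of aligned residues (an illustrative sketch only, not their full Fits pipeline; the example columns are hypothetical):

```python
from collections import Counter
from math import log2

def shannon_h(column):
    """H = -sum p_i * log2(p_i): average information per symbol."""
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in Counter(column).values())

# Hypothetical alignment columns (single-letter amino acid codes):
print(shannon_h("LLLLLLLLVI"))  # strongly conserved site: low H
print(shannon_h("ACDEFGHIKL"))  # unconstrained site: H = log2(10), about 3.32
```

Durston's Fits measure is, roughly, built from the drop in H between an unconstrained ground state and the functional, conserved state observed across species.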

    A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: “Specification: The Pattern That Signifies Intelligence”.

    For instance, on pp. 17 – 24, he argues:

    define φS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120]

    χ = – log2[10^120 · φS(T) · P(T|H)].

    To illustrate, consider a hand of 13 cards with all spades, which is unique. 52 cards may be dealt in 635 × 10^9 possible 13-card combinations, giving odds of 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suit hands, so φS(T) = 4. Calculation yields χ = –361, i.e. < 1, so that such a hand is not improbable enough that the (rather conservative) χ metric would conclude “design beyond reasonable doubt.” (If you see such a hand in the narrower scope of a card game, though, you would be very reasonable to suspect cheating.)
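    The card-hand arithmetic can be reproduced directly (a sketch of the calculation as quoted):

```python
import math

hands = math.comb(52, 13)      # 635,013,559,600 possible 13-card hands
p = 1 / hands                  # P(T|H) for one particular hand
phi_s = 4                      # the four all-of-one-suit hands

# chi = -log2[10^120 * phi_S(T) * P(T|H)]
chi = -(120 * math.log2(10) + math.log2(phi_s) + math.log2(p))
print(round(chi))  # about -361: far short of the design threshold
```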

    Debates over Dembski’s models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design.

    We should not have to state the obvious, but given objections that have been raised, we do: the semiotic agents in view are constrained by available atomic resources and resulting limits on opportunities for observation. No more than 10^117 chemical-time events for the 10^57 available atoms can happen in the history of the solar system to date, of perhaps 10^17 s.

    Step 6: The log reduced, simplified Chi metric

    CSI per the 2005 expression is intended to generalise, and, as we have seen, it opens up a can of worms and side tracks.

    It is, in my view, useful to simplify, certainly more so than to try to disentangle the thicket of strawmannish objections erected in the hope of burying the CSI concept; which itself is a breach of the basic premise on which science is built, of seeking to improve.

    This was done in response to MathGrrl/Patrick’s challenge of some time ago, as is presented in the IOSE and elsewhere. Clipping IOSE (accessible all along):

    xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a “bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity” metric:

    χ = – log2[10^120 · φS(T) · P(T|H)].

    –> χ is “chi” and φ is “phi”

    xx: To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:

    Ip = – log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say — as just one of many examples of a standard result — Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:

    Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by

    I_k = (def) log_2 1/p_k (13.2-1)

    xxi: So, since 10^120 ~ 2^398, we may “boil down” the Dembski metric using some algebra — i.e. substituting and simplifying the three terms in order — as log(p*q*r) = log(p) + log(q ) + log(r) and log(1/p) = – log (p):

    Chi = – log2(2^398 · D2 · p), in bits, and where also D2 = φS(T)

    Chi = Ip – (398 + K2), where now: log2(D2) = K2

    That is, chi is a metric of bits from a zone of interest, beyond a threshold of “sufficient complexity to not plausibly be the result of chance,” (398 + K2). So,

    (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want , 1,000 bits would be a limit for the observable cosmos)] and

    (b) as we can define and introduce a dummy variable for specificity, S, where

    (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

    Chi = Ip*S – 500, in bits beyond a “complex enough” threshold

    NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive.

    E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive.

    Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest.

    S goes to 1 when we have objective grounds — to be explained case by case — to assign that value.

    That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list.

    A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery, is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable. )

    An obvious example of such a zone T, is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings.

    Arguably — and of course this is hotly disputed — DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.)

    So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv – xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)

    xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases:

    Using Durston’s Fits values — functionally specific bits — from his Table 1, to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

    RecA: 242 AA, 832 fits, Chi: 332 bits beyond
    SecY: 342 AA, 688 fits, Chi: 188 bits beyond
    Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
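    The simplified Chi_500 metric applied to those Fits values is a one-liner (a sketch, using the values as quoted above):

```python
def chi_500(fits, s=1):
    """Bits beyond the 500-bit threshold; s is the specificity dummy variable S."""
    return fits * s - 500

for name, aa, fits in [("RecA", 242, 832), ("SecY", 342, 688), ("Corona S2", 445, 1285)]:
    print(f"{name}: {aa} AA, {fits} fits, Chi: {chi_500(fits)} bits beyond")
```

Note that with S = 0 the metric locks at -500, as stated earlier in the derivation.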

    xxiii: And, this raises the controversial question that biological examples such as DNA — which in a living cell is much more complex than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.

    So, all along we have known how the matter could be addressed to the relevant context of function, and applied to biology.

    All the huffing, puffing, erection of a forest of strawmen and setting them alight to cloud the issue, is pointless and willful.

    KF

  270. F/N: Since P’s track record is relevant, it should be noted that in his MG persona he tried to dismiss the above log reduction as a probability calculation, a quite severe blunder.

  271. To Allan Miller (at TSZ):

    In comments to me, having made a similar interpretation, GP denied that this is his argument. Once the replication system or translation or whatever is in place, we take that dFSCI-to-date as a given, and apply the metric to the ‘extra’ dFSCI within a particular Time Span.

    That’s essentially correct. Obviously, as I have already stated, it is also possible to analyze the emergence of basic replication (OOL), and in that case the transcription and translation mechanism becomes part of what must be explained.

    Computation of dFSI is always a highly empirical task, and it must always be referred to a specific scenario, and to a specific problem.

  272. KF (275):

    To illustrate consider a hand of 13 cards with all spades, which is unique. 52 cards may have 635 *10^9 possible combinations, giving odds of 1 in 635 billions as P(T|H)

    Do you happen to know how 635×10^9 was arrived at? I kind of figured, if we’re talking about the number of different ‘hands’ of size 13 that can be selected from a standard deck of cards that it should be 52C13 (52 choose 13). But when I evaluated that value I got something different. Just curious. It’s not going to change the eventual negative result but I’m just wanting to make sure I’m tracking the logic okay.

  273. Petrushka (at TSZ):

    I always revert back to what I believe to be true. That’s my usual behaviour, and there is no need to “pin me down” to get that :)

    Joe Felsenstein (at TSZ):

    … which seems to have nothing to do with the stuff about dFCSI. So why bother with dFCSI?

    It has all to do with dFSCI. Protein domains:

    a) Have high functional complexity (therefore cannot arise in a purely random system)

    AND

    b) Are irreducible to simpler functional naturally selectable intermediates, and therefore cannot be explained by the only available necessity mechanism, NS.

  274. Jerad:

    52 choose 13 seems to be exactly 6.35*10^11, as KF said. What would be your result?

  275. gpuccio,

    See, told you I was dopey. I just checked it thoroughly. I had eyeballed the factorials and estimated. Sigh. I need more tea obviously.

    Never mind.

  276. To Zachriel (at TSZ):

    What is your definition?

    You can find it here:

    http://www.uncommondescent.com.....inference/

    post #88.

  277. Jerad:

    No problem. I will not attack you because you have taken two mutually inconsistent positions in less than one hour :)

  278. gpuccio,

    I don’t get credit for admitting I made a mistake?

    Tough crowd!!

  279. Jerad:

    It was only an ironic quote of what Keiths said about me at TSZ, for “taking three mutually inconsistent positions in less than 48 hours”.

    I certainly did not get any credit from him for admitting, twice, that I had made a mistake.

    Tough crowd!! :)

  280. To Keiths (at TSZ):

    Quite the opposite. I don’t make any assumptions about how the Designer would act. He has trillions of options open to him, and he could choose any one of them, regardless of whether it produced an objective nested hierarchy.

    You are making the assumption that the designer “has trillions of options open to him” (why?), and that he “could choose any one of them” (how do you know that? are you an expert about the designer’s free will?), “regardless of whether it produced an objective nested hierarchy” (so you know how many of the options would produce that, and that the designer has no reason to prefer one kind of option to another one; again, how do you know that?).

    Those are a lot of assumptions.

    It’s the evidence that tells us that the objective nested hierarchy exists.

    Fine.

    1a) Out of the trillions of possibilities, unguided evolution predicts an objective nested hierarchy; we see an objective nested hierarchy; the prediction is successful, and unguided evolution fits the evidence extremely well.

    It certainly fits the evidence of the hierarchy. But, unfortunately, it does not fit the evidence of the complex biological information. You are reasoning as though the hierarchy were the only evidence available.

    ID predicts neither an objective nested hierarchy nor the lack thereof; we see an objective nested hierarchy;

    It does not necessarily predict the hierarchy, but it is perfectly compatible with it. What ID does predict is the complex functional information in the designed objects.

    ID proponents have to assume that the Designer chose to produce an objective nested hierarchy,

    Either chose to, or had to, because of specific constraints.

    which is exactly the same pattern that unguided evolution would have produced.

    No. It is simply the same pattern as any form of evolution, guided or unguided, would have produced if it had to work by modification of the existing beings, instead of having to create new beings from scratch each time. It is very obvious that the first option can be the best, or the only one, available to a designer if specific constraints on how the designer can act are present in the system.

    There is no successful prediction, and a completely unwarranted assumption.

    There is no prediction here, but there is a much more powerful prediction about complex functional information. And there is no assumption at all: we observe the evidence, we infer design (from complex functional information), and we reasonably infer that the designer had specific, and definable, limitations in how to act.

    Physical laws don’t require an objective nested hierarchy.

    The designer has to modify matter from consciousness, through some interface. We don’t know how that interface works, and what its laws are. The real constraint is obviously how to implement the design in the material world. The simple explanation for the nested hierarchy is that it is easier for the designer to modify what already exists than to redo everything from scratch. Is that so difficult to understand?

    That suggests that your embrace of ID is not scientific.

    You are entitled to your opinion, however bizarre.

    Keep thinking about this,

    I think about many things, but I usually decide myself what to think about. Anyway, thank you for the kind suggestion.

    but try to do so with the attitude that you want to discover the truth, whatever that may be — even if the truth turns out to be uncomfortable.

    That is a very wise principle for thinking about anything, and I certainly can reciprocate the encouragement.

    P.S. The UD side of the discussion is happening on this thread, so you might want to repost your comment there.

    I will copy this comment there too.

  281. Zachriel with the daily equivocation:

    If we didn’t know the origin of nylonase, for instance, you would conclude design. Discovering its plausible evolutionary origin, you would then realize it was a false positive.

    We are interested in its blind watchmaker origin, as ID is not anti-evolution and is OK with nylonase evolving by design.

    IOW Zachriel and the TSZ ilk can only equivocate because they have nothing else.

  282. Joe Felsenstein:

    gpuccio’s argument was that the dFCSI was already there because Elizabeth had made the program’s organisms able to reproduce.

    That is my argument and it follows from what was said in “No Free Lunch”. You need to explain reproduction, you cannot just use it as a given.

    Also Lizzie did not have extra SI put into anything. She did not generate CSI.

    But anyway, it is very interesting that not one of you can come up with a biological example of natural selection adding specified information to any genome.

  283. Joe Felsenstein:

    Again, am I misunderstanding gpuccio’s argument?

    Most likely

    How?

    Lack of critical thinking skills. Or the inability to understand your opponents due to some limbic issue

  284. Zachriel to gpuccio:

    You just defined dFSCI in #4 as something with no known “deterministic explanation”.

    That is incorrect. Just because there aren’t any known deterministic explanations does not mean it is part of the definition.

    A deterministic explanation for dFSCI would mean the presence of dFSCI is not a hallmark of design.

    How many times do you have to be told that?

  285. And petrushka goes for the personal shot:

    Remo Rohs and Gorka Lasso:

    This paper provides new insights into the evolution of the symmetry of protein domains and into protein engineering. The authors show that the widely adopted domain duplication and divergence model is not the only source for domain evolution. A new evolutionary model is described, according to which a particular subdomain can lead to the assembly of a new symmetry-based protein domain by combining several repeats of the same subdomain. The latter implies that modular evolution is an ongoing process.

    Unlike Joe, I will not read an abstract and argue that the issue is settled.

    Unlike petrushka, I tend to NOT equivocate, and understand that the paper needs to address the mechanisms proposed, namely accumulations of random mutations. You just don’t get to declare gene duplication followed by function-changing mutations a random process just because you are too lazy to determine an actual cause.

    IOW the question is: evolved how -> by design, or by accumulations of random mutations?

  286. gpuccio:

    It certainly fits the evidence of the hierarchy. But, unfortunately, it does not fit the evidence of the complex biological information. You are reasoning as though the hierarchy were the only evidence available.

    The argument by keiths where he assigns to “THE DESIGNER” trillions and trillions, possibly even infinitely unlimited options, was so lame I couldn’t even bring myself to care about it, lol.

    Thanks for addressing it.

    Now you do bring up an excellent point. What is it that makes for the ability to identify this “objective nested hierarchy.”?

    If genomes were just random assemblages, what sort of objective nested hierarchy would that result in?

  287. Joe:

    Can you provide the alleged compression algorithm or not?

    Joe, I told you what algorithm I used — PAQ. I provided a link to a zip file containing the source and executable. What exactly are you looking for?

    What was compressed, exactly?

    I provided links to the files that were compressed, told you how much they were compressed, and invited you to reproduce the results.

    Everything in those files was compressed by about 80%, although 12.5% of that can admittedly be attributed to the unused eighth bit in each byte of 7-bit ASCII text.

  288. Joe:

    Is CSI necessarily non-compressible?

    It all depends on how you are defining “compressible”.

    He clearly means “algorithmically compressible”. So you already answered the question when you said, “In order to qualify as CSI it cannot be algorithmically compressable.”

    To maintain this position, you have had to claim a distinction between the terms “CSI” and “specified complexity”, while simultaneously maintaining that they’re synonymous. You do this with the following logic, which I’ll assume you’re saying in jest:

    If the presence of specified complexity in an object means it was designed, and the presence of complex specified information means it was designed, that would mean they mean the same thing, ie they are synonymous.

    And you support your position with a quote from CJYman, but then later deny that CJYman said that incompressibility is a requirement for something to qualify as CSI, meaning that the quote doesn’t actually support your position. And, true to fashion, you cap off this denial with an insult:

    Is English not your first language?

    Finally, your position requires you to ignore or spin clear statements by Dembski, like this one from page 144 of No Free Lunch:

    It is CSI that within the Chaitin-Kolmogorov-Solomonoff theory of algorithmic information identifies the highly compressible, nonrandom strings of digits (see section 2.4).

  289. gpuccio:

    The basic concept of CSI is very simple and intuitive: how complex must an object, or a string in the digital case, be to express some objectively defined function, or property? And then how unlikely is it that such an object or string can arise by RV? Or can the complexity be only apparent, and the Kolmogorov complexity be really low?

    But this rendering of the basic concept of CSI seems to assume that CSI entails high Kolmogorov complexity, or at least apparently high Kolmogorov complexity. So the question of whether CSI, a term invented by Dembski, really does entail that, is basic to the concept.

  290. Joe:

    That is incorrect. Just because there aren’t any known deterministic explanations does not mean it is part of the definition.

    A deterministic explanation for dFSCI would mean the presence of dFSCI is not a hallmark of design.

    How many times do you have to be told that?

    You should be complaining to gpuccio. He’s the one who said that, in order for a string to be said to exhibit dFSCI, “It is required also that no deterministic explanation for that string is known.”

  291. R0bb,

    You first converted the text and then compressed the conversion. Not the same thing.

    Are the same number of words still used?

  292. R0bb,

    Stop misrepresenting gpuccio. What I said is what has been said since ID came around.

    And you are also misrepresenting the Dembski quote- the CSI within the Chaitin-Kolmogorov-Solomonoff theory.

  293. To Zachriel (at TSZ):

    I am afraid you are seriously misunderstanding me here.

    First, I will answer the small things.

    It’s not important, but what is the function of Hamlet?

    I will answer that briefly, but we could go into more detail if you want.

    There are essentially two kinds of functional information (see also Abel):

    a) descriptive information (like language) has mainly the purpose of conveying meaning

    b) prescriptive information (like software, or a protein coding gene) has mainly the purpose of implementing a function.

    It is possible to describe descriptive information (like Hamlet) in terms of an explicit function, such as: a text that can convey all the information about the story, the characters, the meaning, and if we want even the emotion and the beauty.

    Prescriptive information is easily described by defining the function it implements.

    Again, just as an aside, how many permutations of words have the same function as Hamlet? Keep in mind the many, many versions of Hamlet. Seems intractable, especially given the lack of a clear functional specification.

    The problem is indeed tractable. I could show you that dFSI necessarily increases with the increasing length of a text. Therefore, we can be sure that, beyond some length, a non redundant text will certainly be beyond the threshold of, say, 500 bits.

    Let’s grant that Hamlet has high functional complexity, per your definition.

    It has.

    So if we are ignorant, we are more likely to judge it to be design. This is nothing but a gap argument.

    It’s not a question of ignorance. A plausible explanation must be known, otherwise we deny all scientific principles. You cannot just say: maybe in the future we will find some necessity explanation for that, therefore why infer design even if it has all the properties of a designed thing? That is not science. Such a position can never be falsified. It is only wishful thinking, to defend one’s pre-commitments to a specific ideology.

    Moreover, while strings with some regularity can easily evoke the suspicion of a possible necessity origin, pseudo-random strings which convey a meaning have never been explained that way.

    More on your last comment later, I must stop now.

  294. Joe, I didn’t convert the text to anything. I don’t know what you’re talking about. I gave you everything you need to reproduce the results — why not try it?

    Are we seriously arguing over whether English text is compressible?

  295. Folks:

    Let’s keep things fairly simple.

    Take a protein. How much can its string vary without disastrous loss of function? If not a lot, then it is specifically functional. (In short, we are in zones T when we have relatively narrow sets of possible configs in a much larger space, that will work.)

    Similarly, for DNA that codes for the protein.

    Next, for multipart systems in cells made up from proteins, etc.

    Remember, there is a reason why we have a fear bordering on panic about radioactivity, which accelerates random mutation rates. (Way back, I learned the main mechanism was breaking up H2O, which then reacts aggressively with whatever is nearby. Breakdown of function is a very likely outcome.)

    With CSI, the debates back and forth are on an attempted generalisation which exists in a context of a clear understanding that in life forms specificity is cashed out on config dependent function.

    And way back, Abel and Trevors highlighted that a completely random string will have low compressibility in the algorithmic sense, functionally organised strings will have moderate compressibility, and ordered ones, strong compressibility. The point of K-compressibility is that if you can set up a simple way to get there, it is compressible. A truly random string has no shorter description than itself; it simply has to be repeated. Functional strings tend to have some redundancy, so moderate compressibility is possible. Things which are simple and highly ordered, like a crystal or vortex etc, will have fairly short and simple descriptors.
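
    The three regimes just described can be sketched with the stdlib Zlib as a rough stand-in for algorithmic compressibility (true Kolmogorov complexity is uncomputable; the sample strings below are mine):

```ruby
# Sketch of the three compressibility regimes: ordered, functional, random.
require 'zlib'

ordered    = "ab" * 500                      # simple, highly ordered
functional = "It was the best of times, it was the worst of times, " +
             "it was the age of wisdom, it was the age of foolishness, " +
             "it was the epoch of belief, it was the epoch of incredulity."
random     = Random.new(42).bytes(1000)      # pseudo-random bytes

# Compressed size as a fraction of original size (smaller = more compressible).
ratio = ->(s) { Zlib::Deflate.deflate(s).bytesize.to_f / s.bytesize }

puts format("ordered:    %.2f", ratio.call(ordered))     # strong compression
puts format("functional: %.2f", ratio.call(functional))  # moderate
puts format("random:     %.2f", ratio.call(random))      # almost none
```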

    In short, you can describe a wiring diagram, or the result of it, enough to force a specification of its config to within a fairly narrow scope, but because there is a minimum amount of complexity demanded by the requisites of function, you don’t get the degree of compression by algorithmic description or statement of controlling law etc that happens in other cases.

    Meanwhile, much of the onward back-and-forth is on increasingly tangential side-tracks. I get the feeling that in some cases they are setting up red herrings that lead away to strawman caricatures, to be rhetorically punched at and dismissed.

    Let’s remember: whatever flaws one may find, or think s/he finds, in Dembski’s models and statements, the fundamental issue is that we are looking at things complex enough to set up large spaces W, sufficiently large to exhaust the atomic resources of our solar system or observed cosmos. In these spaces we have zones T, apparently describable or observable, that are narrow zones of interest. The candidates for being at an E in T are blind chance and necessity, or design. On sampling-space and sampling grounds, as well as by direct observation, the best explanation under such circumstances is design.

    Let us not blind ourselves by kicking up enough dust, smoke and fog to miss the main point.

    Which was the point of my earlier comment.

    KF

  296. R0bb,

    Are the same number of words still used? If not what words are missing? And if the same number of words are used what was compressed?

  297. R0bb:

    Joe, I didn’t convert the text to anything. I don’t know what you’re talking about.

    paq8l is an open source (GPL) file compressor and archiver.

    So first you had to CONVERT the text to a file and then the FILE was compressed.

  298. kf @301:

    Well stated, including the clarification on compressibility. In terms of compressibility, CSI tends to (although does not necessarily in all cases) fall between the extremes. This is due to the fact that CSI is characterized both by complexity (i.e., not a simple highly ordered state) and by rules (e.g., all forms of information use some kind of vocabulary and syntax that follows certain rules of order).

    The question of whether CSI is more or less compressible than this or that string misses the primary point. CSI is not primarily about compressibility. It is about syntax, semantics, pragmatics. The compressibility aspect (the simple statistical Shannon aspect of information) is interesting at some level, but must not be allowed to overshadow the real issues.

  299. To Zachriel (at TSZ):

    gpuccio: #5) Any object whose origin is known that exhibits dFSCI is designed (without exception).

    Of course. You just defined dFSCI in #4 as something with no known “deterministic explanation”. How could it be otherwise?

    I believe that here you misunderstand. Points 2-4 are intended to explain how dFSCI is defined and measured.

    Point 5 is a completely different thing. It just means that dFSCI, as previously defined, is empirically capable of distinguishing between human artifacts and non-designed strings, with 100% specificity (and many false negatives).

    This is an empirical fact. It has nothing to do with how dFSCI is defined.

  300. To Petrushka (at TSZ):

    That seems to have two unrelated problems

    Only two? That’s really a compliment.

    It violates the ID code of not discussing the motives and attributes of the Designer,

    A code I have violated many times, and you of all people should know that.

    and it makes no sense

    Why am I not surprised?

    An omniscient being,

    Did I speak of omniscient beings? When?

    or one that can assemble long strings of functional DNA,

    Ah, that’s lowering the requirements a little bit, I believe.

    anticipating its function within a changing ecosystem,

    A smart designer, I would say. Maybe not omniscient or omnipotent, but certainly smart.

    would not have the kind of limitations characteristic of mere mortal designers

    But he could certainly have other kinds of limitations.

    At any rate it makes no sense to assign attributes to invisible imaginary magicians.

    That is, I believe, exactly what you have been doing here.

    As for me, I prefer to infer from empirical facts the possible attributes of a very real designer who has left evidence of his existence everywhere.

  301. To Joe Felsenstein (at TSZ):

    Again, am I misunderstanding gpuccio’s argument?

    Yes. Absolutely.

    How?

    Indeed, I can recognize practically nothing of my argument in your words.

    You say:

    Yet confronted by Elizabeth’s GA program, gpuccio was not willing to acknowledge that the amount of SI increased in that program.

    I don’t understand what you mean. I said that the program is an algorithm that computes a solution to a well defined question. To do that, it obviously needs to have a lot of SI about the question and the possible solutions. Moreover, the program uses RV + IS to find possible solutions, and it succeeds, as many algorithms using RV + IS can do.

    Regarding the computation of dFSI in a string that is a solution to the question, I said that there are two possibilities, and that the dFSI of the string itself is only an upper limit: if an executable program with lower dFSI can compute the solution, then the dFSI of the program is the dFSI of the string itself, because we have to consider anyway the lowest complexity that can generate the string (IOWs, the Kolmogorov complexity of the string).

    I have also said that there is probably a much simpler top down algorithm that can compute a solution without using any RV, as KF has shown.

    gpuccio’s argument was that the dFCSI was already there because Elizabeth had made the program’s organisms able to reproduce.

    Where did I say that? I only pointed out that Elizabeth’s program uses IS, with a probability of being positively selected at each round of 0.5 for each string. What has that to do with reproduction? What has that to do with NS? The answer is simple. Nothing.

    What I said is that NS, in a biological context, is simply a byproduct of the existence of biological beings that:

    a) can reproduce themselves

    b) have to rely on environmental resources to exist, mainly because they are based on metabolism.

    Reproduction and metabolism certainly imply a lot of functional information, and therefore, if we are analyzing a scenario about OOL, they become part of what has to be explained. In other, more limited scenarios, reproduction and metabolism (and therefore NS) can be taken as given in the system, because we are not trying to explain them in our context.

    That’s when we all started arguing about intelligently designed computer simulations of unintelligent natural processes.

    Well, this at least is clear.

    This seems to me to be a big contradiction.

    ???

    When an organism has dFCSI and can reproduce, gpuccio says that we can count the “extra” SI put into the genome by an adaptation.

    I can’t even understand what you are saying here, and yet you state that it is something I have said. That seems to me a big contradiction!

    I spoke of adaptation as a possible mechanism by which an existing genome takes advantage of environmental changes through intelligent algorithms already embedded in the genome.

    I have clearly offered the example of antibody maturation as a model of intelligent adaptation based on RV + IS.

    I have mentioned that many believe that active adaptational algorithms do exist in bacteria, and possibly in other living beings. That’s all.

    I have not said that any new biological information that arises is explained by adaptation. I don’t believe that. New protein domains, IMO, are designed, and are not the product of adaptation (and, obviously, not even of RV + NS).

    But when the genomes are in a GA, gpuccio refused to count the extra SI that was put into those genomes.

    I really can’t see what you mean here. An example, please, of when and where I would have done something like that.

    There all the SI was said to be coming from the original SI put in when the GA was set up.

    Again, what I said is that if an algorithm (whether it uses RV and IS or not) computes a solution, the complexity of that solution is the lower of the two: the natural complexity of the string, and the complexity of the algorithm that computes the string. There is no doubt that, in extreme cases, the apparent complexity of an ordered string can be much higher than the complexity of the algorithm that can output it. See for example the case of the string made of 10^9 1s, as described in my post #274 here. In that case, the dFSI is the dFSI of the algorithm, which is obviously the Kolmogorov complexity of the string.
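
    The 10^9-1s case can be sketched at a smaller scale (10^5, same principle): the whole generating “program” is one short expression, and a general-purpose compressor squeezes the ordered string to a tiny fraction of its length.

```ruby
# Sketch: a highly ordered string has tiny algorithmic complexity.
# Its generating "program" is just the expression "1" * n.
require 'zlib'

n    = 100_000            # stand-in for the 10^9 of the example above
ones = "1" * n

compressed = Zlib::Deflate.deflate(ones, Zlib::BEST_COMPRESSION)
puts "original:   #{ones.bytesize} bytes"
puts "compressed: #{compressed.bytesize} bytes"   # a few hundred bytes
```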

  302. And zacho proves its agenda is obfuscation:

    #4 Anything dFSCI has no known deterministic explanation; therefore if something with dFSCI has a known explanation as per #5, it can’t be deterministic.

    That is still incorrect. Your inability to deal with what has been posted proves that you are still an insipid troll, just as telic thoughts says.

  303. I missed this stupidity by Mike Elzinga:

    ”Information” is the great, mysterious concept of ID/creationism on which all ID/creationist arguments appear to hinge.

    It’s the same mysterious information that allows for communication and information technology. The same mysterious information that people use every day.

    For some reason when evos hear “information” their entire being goes into convulsions- hey Mike, call 411 and ask them what is their purpose.

  304. Zachriel:

    Sure, let’s take a protein, say a random sequence that weakly binds to ATP. The specified complexity would be low as these proteins are relatively common in sequence space. Now, let’s replicate and mutagenate the sequences, and select those with the most binding function. The specified complexity has increased. After repeated generations, CSI.

    Umm replication is STILL the thing you need to explain. By just using replication you expose your desperation.

  305. R0bb, nothing to say in response to my posts?

  306. So first you had to CONVERT the text to a file and then the FILE was compressed.

    Joe, please. Comments like this don’t help.

    What do you think the FILE consists of, if not strings (of text)? What do you think the program does, if not read in the strings from the file, process them, and write them back out to a different file?

    Do you really think you can’t compress a string of text without having first saved it to file?

    # Ruby code
    require 'zlib'
    hd = "Humpty Dumpty sat on a wall. Humpty Dumpty had a great fall. All the king's horses and all the king's men, couldn't put Humpty back together again."
    hd.length
    hdz = Zlib::Deflate.deflate(hd, Zlib::DEFAULT_COMPRESSION)
    hd.length
    hdz.length

    C:\projects>irb
    irb(main):001:0> require 'zlib'
    => true
    "Humpty Dumpty sat on a wall. Humpty Dumpty had a great fall. All the king's
    horses and all the king's men, \n\ncouldn't put Humpty back together again."
    irb(main):005:0> hd.length
    => 149
    irb(main):006:0> hdz = Zlib::Deflate.deflate(hd, Zlib::DEFAULT_COMPRESSION)
    => "x\x9CU\xCC1\n\x800\x10D\xD1\xDESLg#\xDEA\xB0\xF0\x1A\xAB\xAE\x89\xB8n$\xD9 \
    xDE\xDE\xA0XXM\xF1>3\xE4\xFD\xB0\v\xFD;\x89\fAA8I\xA4\xC5\xF0COs\x11\x17\xB9D\xC
    B\xE3\x9D\b\xCC3\xB6U]\x9D\xE0CL\x9C@Z\xBA\xBF\xEC\xAC\r\xAAj\nYf\xAD\rG\xB6\xEF
    |\xA4i\x83\x05\xC7%\x8F G\xAB\xB67D\xC23\xE9"
    irb(main):007:0> hd.length
    => 149
    irb(main):008:0> hdz.length
    => 108
    irb(main):009:0> exit

    p.s. Water has three states.

  307. I just love it how the experts at TSZ weigh in when it’s convenient to make a point against ID, but remain strangely silent when one of their own goes about making a fool of himself/herself.

  308. gpuccio,

    Without having read Felsenstein’s comments yet in context, I suspect that by SI he means Shannon Information.

    Elsewhere, if need be I think I can find this, Elizabeth has argued that a maximally random string has the most Shannon Information (or maybe it was some ID’er =p).

    In any event, where has she or anyone else over there shown an increase in Shannon Information in her programmatically generated strings? I sure wouldn’t take JF’s word for it.

  309. OK Mung,

    You had to first convert the text to some compressible code and then compress that. Better?

    I called it a “file” because that is what R0bb did. So my mistake was saying that he had to first convert it to a file.

    ps accumulations of H2O have 3 states

  310. Umm replication is STILL the thing you need to explain. By just using replication you expose your desperation.

    Zachriel:

    We’re just concerned with defining and measuring CSI at this point.

    Maybe YOU are. To me it has been well defined and we measure it in bits. IOW you appear to have some personal issues.

    Also all of that would be moot if you could just support your position. No need to worry about ID. Even without ID you still don’t have anything- even less because without ID you wouldn’t even have that to misrepresent.

  311. Zachriel:

    If, however, you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.

    And the connection to biological reality is?

  312. Mung:

    In any event, where has she or anyone else over there shown an increase in Shannon Information in her programmatically generated strings?

    More importantly, the unfortunately-named “Shannon information” is uninteresting. Indeed Shannon “information” is not even true information in any meaningful sense of the word; certainly not in the CSI sense we are interested in for technology, communications, bioinformatics, etc. It is, rather, a simple statistical measure of information carrying capacity. If we examine a string and see that it has a reasonably high information carrying capacity we might have a first clue that it could contain CSI. But that is all it gives us, some kind of initial hint of potential capacity of the string in question. Whether it in fact contains CSI depends on layers of information (syntax, semantics, pragmatics) that go well beyond the so-called Shannon “information.”

    I don’t doubt that a computer program could generate a bunch of random strings and that some of them could end up with higher Shannon “information” than we started with. Big deal. Anyone who thinks this demonstrates anything about CSI has no idea what they are talking about.
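
    For concreteness, the Shannon measure under discussion is just the per-symbol entropy H = -Σ p·log2(p) over observed symbol frequencies. A minimal Ruby sketch (the helper name `shannon_entropy` is mine); note it is blind to meaning, exactly as argued above:

```ruby
# Per-symbol Shannon entropy in bits, estimated from observed frequencies.
# It measures carrying capacity only: scrambling a string does not lower it.
def shannon_entropy(str)
  n = str.length.to_f
  str.chars.tally.values.sum do |count|
    p = count / n
    -p * Math.log2(p)
  end
end

puts shannon_entropy("0101")   # 1.0 -- two equiprobable symbols
puts shannon_entropy("abcd")   # 2.0 -- four equiprobable symbols
```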

  313. Joe:

    Are the same number of words still used? If not what words are missing? And if the same number of words are used what was compressed?

    You seem to want me to pretend that you’re ignorant enough to ask this question sincerely, even though I know you’re not. So I’ll play along.

    After compression, it is no longer recognizable English text with spatially separated words or letters. But since the compression was lossless, no information was lost. The animals in Shakespeare’s plays weren’t harmed either.

    The compression engine produces a sequence from which the original text can be regenerated. That’s what it means to algorithmically compress something.

    You had to first convert the text to some compressible code and then compress that.

    The letters that make up words that make up English text are always encoded somehow. Shakespeare wrote glyphs on paper with ink, which were subsequently transcribed to similar glyphs on printing presses, which were eventually transcribed to ASCII on computers, which is what I compressed.

    English text is compressible in all of those encodings because, like most real-world languages, it’s extremely inefficient. In formal language terms, the vast majority of strings in Σ* are ungrammatical.

    If you think I’m cheating by using ASCII-encoded text, please tell me how you would go about testing the compressibility of text.

  314. To Zachriel (at TSZ):

    That’s right. #5 is a conclusion.

    Are you kidding?

    #5 is not a conclusion. It is an independent empirical observation.

    Just to be more clear. We define a property (dFSCI) and how to assess it in objects.

    Then we assess that property blindly in any number of strings of which we may know the true origin. For instance, we mix any number of meaningful strings designed by humans with any number of randomly generated strings, all of them long enough to be beyond the threshold of 500 bits. And then we ask independent observers to tell us which are the meaningful strings designed by humans and which are those that do not allow a design inference.

    IOWs we are empirically testing the specificity of the dFSCI property when it is used to infer design in a set of objects where the true origin can be known for certain.

    It is an empirical testing, and an empirical observation. Not “a conclusion”.

    Is it clear now?

  315. To Zachriel (at TSZ):

    Sure, let’s take a protein, say a random sequence that weakly binds to ATP. The specified complexity would be low as these proteins are relatively common in sequence space. Now, let’s replicate and mutagenate the sequences, and select those with the most binding function. The specified complexity has increased. After repeated generations, CSI.

    Exactly. You are obviously referring to the shameful Szostak paper.

    That paper is good evidence that Intelligent Selection can increase the complexity of a string in relation to a known function.

    That is not surprising at all. As I have said many times, RV + IS is a very powerful form of design.

    I have also offered the example of an algorithm that computes the first “n” decimal digits of pi. As “n” becomes greater, the complexity of the output will, at some time, become greater than the complexity of the algorithm that computes it.
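
    The pi example can be sketched with the stdlib BigMath standing in for “an algorithm that computes pi”: the program text stays fixed in size while its output (and the output’s apparent complexity) grows without bound as n grows.

```ruby
# Sketch: fixed-size program, arbitrarily long output.
# BigMath.PI(n) returns pi to at least n significant digits.
require 'bigdecimal/math'

[10, 100, 1000].each do |n|
  digits = BigMath.PI(n).to_s
  puts "n=#{n}: output length #{digits.length}"
end
```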

    A protein could also be computed in a top down way. We cannot really do that at present, but we will be able to do it, some time in the future.

    And so? Design can create dFSCI. We know that. An algorithm can output dFSCI, but according to my definitions, we should anyway consider the complexity of the algorithm as the true complexity of the outputted string. In practically all cases, however, we will still infer design, because the algorithm itself is complex enough to warrant the inference.

    The important point, however, is that no algorithm can create new dFSCI in relation to any new function that is not already described, or in some way implied, in the algorithm itself.

    The reason is simple: algorithms are not conscious. They have no experience of purpose. Therefore, they cannot recognize function, unless in their code something has already been defined as “functional”.

    So, Lizzie’s algorithm can compute answers to the question that is already embedded in it: it can do nothing else. My pi computing algorithm can compute pi: it can do nothing else.

    In some algorithms, the function can be defined more generically, so that they will be more flexible in their performance. But a new function, that is not covered by the definitions embedded in the algorithm, will never be recognized by the algorithm, and therefore no dFSCI related to that new function will ever be computed by the algorithm, because the algorithm cannot recognize that function.

    So, Szostak could easily engineer a protein with a strong binding to ATP (however useless in any biological context) because he knew what he wanted (an ATP-binding protein): he measured and selected that function at very trivial levels in random sequences, and he amplified, mutated, and intelligently selected the resulting sequences for that function. Good design, and very bad interpretation of the results, still echoed by you through bad reasoning.

    The only algorithm present in biological contexts is NS. It can generate some new information (not much of it) related to the function that is already embedded in the algorithm itself: reproduction by use of environmental resources. That easily explains many microevolutionary events.

    But it cannot do anything else.

  316. To Zachriel (at TSZ):

    gpuccio: It is possible to describe descriptive information (like Hamlet) in term of an explicit function, such as: a text that can convey all the information about the story, the characters, the meaning, and if we want even the emotion and the beauty.

    Sure. That’s easily put into quantitative terms.

    It is. That’s how it can be done.

    a) We define the function as the ability to convey the full set of meanings in the original text (we can refer to a standard version, for objectivity).

    b) We prepare 1000 detailed questions about various parts of the text.

    c) We define the following procedure to measure our function: the function will be considered as present if, and only if, an independent observer, given the text, is able to answer correctly all the questions.

    OK, that would not easily include the emotion and the beauty, but I had mentioned them just as a bonus (and a homage to S)!

  317. Thank you Robb,

    So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.

  318. To Zachriel (at TSZ):

    Gpuccio provided a definition of what he calls “dFSCI”, which, unfortunately, includes design in its definition, so can’t be used to argue for design.

    ???? What do you mean? Please, refer to post #320.


  319. Eric Anderson: Indeed Shannon “information” is not even true information in any meaningful sense of the word; certainly not in the CSI sense we are interested in for technology, communications, bioinformatics, etc.

    Zachriel:

    Um, Shannon Information is the theoretical backbone of information technology and communication systems.

    Ummm non-sequitur. Eric was posting about the “information” part, which is not information in the ordinary usage.

    We’re discussing definitions of CSI, which is supposedly a signature of design.

    And it will be until you get off of your lazy butt and demonstrate that blind and undirected processes can account for it. Your continued whining and misrepresentations sure as heck ain’t going to change anything.

  320. Joe:

    So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.

    LOL. And how do you think text should be represented when we’re testing its compressibility? Ink glyphs on paper?

    You claimed that Shakespeare and encyclopedias are not compressible. How did you come to that conclusion? How do you go about testing for compressibility?

  321. R0bb,

    Compress the TEXT. Did Shakespeare know about ASCII? No- compress the TEXT R0bb, or admit you cannot.


  322. Thank you Robb,

    So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.”

    tonto:

    Where does this leave kairosfocus and his example of ASCII characters?

    No effect- two different topics.

  323. Joe,

    a discussion of a word based text compression scheme:

    http://reference.kfupm.edu.sa/....._71379.pdf

    Great, use it to compress the works of Shakespeare. Let us see the results.

  325. Zachriel chokes:

    And natural selection can often select for very specific functions,

    Nature does NOT select- there isn’t any selection taking place.

    So Zach doesn’t understand natural selection and there is no way it will ever understand CSI.

  326. Joe (330)

    Great, use it to compress the works of Shakespeare. Let us see the results.

    I’m not that interested actually. I was just pointing out that text compression algorithms exist. I was interested in a previous comment and looked it up.

  327. Zachriel apparently wrote (I don’t check the other thread):

    Um, Shannon Information is the theoretical backbone of information technology and communication systems.

    Um, in what sense? Communication systems are interested in information carrying capacity. In addition, there are much more important aspects of information beyond this mere statistical measure of carrying capacity: syntax, semantics, pragmatics. The so-called, and unfortunately named, Shannon “information” says exactly nothing about these aspects.

    Unfortunately the term Shannon “information” confuses people who aren’t able or aren’t willing to understand that it is not the be-all-and-end-all of information. Anyone who thinks that Shannon information can ever fully describe, account for, or measure CSI has no idea what they are talking about.

  328. Eric,

    Zach is referring to the fact that Shannon first defined the bit and his work was concerned with the transmission and storage of data.

    Zach is unconcerned over the fact that Shannon information isn’t really information in the ordinary sense. That way he can conflate the two with no worries.

  329. WRT the works of Shakespeare being algorithmically incompressible, that would mean we cannot write an algorithm shorter than the text itself to produce them.

    For 500 1s, we could do so.

  330. By what criteria?

    Zachriel:

    The criteria would depend on the particulars. It turns out those particulars depend on the history.

    Nope, we do not construct nested hierarchies based on the history. If you think otherwise please provide a valid reference. But we know you won’t…

  331. And natural selection can often select for very specific functions

    How very teleological.

  332. Eric, Joe:

    Well, I disagree that Shannon Information is not somehow “true” information. The problem is that people often do not understand what Shannon Information is about. We just need to ask, what is Shannon Information about.

    That’s why I was asking earlier in the thread about the assumptions or pre-requisites for measuring the amount of information in a string of bits. Zachriel certainly seems to understand.

    You must know or assume a set of symbols, an alphabet as it were. You must know or assume the distribution or likelihood of a particular symbol or letter.

    So my example was: how much Shannon Information is in the following: 00101

    And the correct answer is, we can’t answer the question (without making some perhaps invalid assumptions), because there are things we just don’t know.

    Are 0 and 1 the only symbols? Suppose the next character were 2: 001012

    But say the next character was another 0: 001010

    We still can’t really say, because we don’t know if each symbol is equally likely. Perhaps we’re looking at only the first part of the following sequence: 001010011100101110111
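The dependence on assumptions can be made concrete: under the standard Shannon measure, the number of bits in 00101 changes with the assumed alphabet and symbol probabilities. A minimal sketch (the three distributions below are illustrative assumptions, not anything the string itself dictates):

```python
import math

def shannon_bits(msg, p):
    # total self-information of msg, given assumed symbol probabilities p
    return sum(-math.log2(p[c]) for c in msg)

msg = "00101"

# Assumption A: alphabet {0,1}, equiprobable -> 1 bit/symbol, 5 bits total
print(shannon_bits(msg, {"0": 0.5, "1": 0.5}))               # 5.0

# Assumption B: alphabet {0,1,2}, equiprobable -> log2(3) bits/symbol
print(shannon_bits(msg, {"0": 1/3, "1": 1/3, "2": 1/3}))     # ~7.92

# Assumption C: alphabet {0,1}, but 0 three times as likely as 1
print(shannon_bits(msg, {"0": 0.75, "1": 0.25}))             # ~5.25
```

Same five characters, three different answers, because the measure is a property of the assumed source, not of the string alone.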

  333. The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning. – Warren Weaver, one of Shannon’s collaborators
     

    In an article on Data:

    For data to become information, it must be interpreted and take on a meaning.
     

  334. Again, what criteria, i.e., what traits? What is the nested hierarchy? Define the levels and sets, please.

    Zachriel:

    We provided an example above.

    No, you did not provide any criteria. You did not say what the nested hierarchy was and you sure as hell did NOT define the levels and sets.

    Why do you insist on lying all the time?

  335. To Zachriel (at TSZ):

    I really have difficulties in understanding what you mean. Let’s see:

    You have defined dFSCI as follows: dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    OK

    You have also stated that the mechanisms of the modern synthesis are a “deterministic explanation” under your definitions.

    They are an RV + NS (where NS is the deterministic part of the algorithm) explanation, which cannot explain what it wants to explain.

    You therefore cannot claim that your #5 is an empirical observation when there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved.

    I am afraid that here I have lost you completely. Let’s see again my #5:

    “#5) Any object whose origin is known that exhibits dFSCI is designed (without exception).”

    And my explanation of #5:

    “Just to be more clear. We define a property (dFSCI) and how to assess it in objects.

    Then we assess that property blindly in any number of strings of which we may know the true origin. For instance, we mix any number of meaningful strings designed by humans with any number of randomly generated strings, all of them long enough to be beyond the threshold of 500 bits. And then we ask independent observers to tell us which are the meaningful strings designed by humans and which are those that do not allow a design inference.

    IOWs we are empirically testing the specificity of the dFSCI property when it is used to infer design in a set of objects where the true origin can be known for certain.

    It is an empirical testing, and an empirical observation. Not “a conclusion”.”

    What has that to do with what you say?

    there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved.

    Again: we test dFSCI with a set of long enough strings. Some of them are designed and meaningful, some of them are generated randomly. We know the origin of each string (if it was designed or randomly originated) because we have direct knowledge of how they were produced. Then we take some independent observer, who knows nothing about the origin of the strings, and ask him to infer design, or not, using the evaluation of dFSCI for those strings. He will recognize the designed strings, with 100% specificity. This is the very simple meaning of my #5: an empirical test where dFSCI can easily distinguish designed strings from non-designed strings. Empirical test, nothing more.

    If an artifact is known to have “evolved” (whatever it means) by an explicit deterministic mechanism that is already present in the system, we will conclude that it does not exhibit dFSCI (in that system), and that there is no reason to infer design for it in that system.

    So, let’s take a protein domain in the system which already includes NS (after OOL). We want to decide if we can infer design for it, or not.

    So, we ask two questions:

    a) Is the string functionally complex in itself, beyond 150 bits (or whatever threshold we have chosen)? Let’s say it is.

    b) Is any necessity mechanism explicitly known that can explain the emergence of that string in that system? IOWs, can any algorithm already present in the system lower the improbability of the emergence of that string?

    Now, the only deterministic mechanism proposed for biological systems and biological information is NS. So, our question becomes: can NS explicitly intervene to explain this string?

    If we know functional, naturally selectable intermediates for that string, then our answer is yes, and we have to re-evaluate dFSI for the RV parts of the process. For instance, as I have explained in detail elsewhere, a “perfect” intermediate, fully functional and fully selectable, can lower significantly the probability of the emergence of the final string. According to our new calculations, we will decide if a design inference is still warranted.

    But if nothing of that kind is known, we will assume the total dFSI of the string as unexplained, and infer design.

    The concept is very simple: dFSCI that cannot be explained by any known mechanism warrants a design inference. Why? Because dFSCI is a very good indicator of design (100% specificity in empirical tests). The clause about possible necessity explanations is only a safeguard against cases of apparent functionality that are indeed the result of some known mechanism.

    The lack of dFSCI is a direct consequence of your definition, nothing else.

    This is simple folly. My definition has the purpose of distinguishing designed things from non designed things. And it succeeds empirically in that task. That is not a consequence of the definition. It could certainly fail in its task. For instance, if the following phrase:

    “Shannon was born in Petoskey, Michigan. His father, Claude, Sr. (1862 – 1934), a descendant of early settlers of New Jersey, was a self-made businessman, and for a while, a Judge of Probate. Shannon’s mother, Mabel Wolf Shannon (1890 – 1945), the daughter of German immigrants, was a language teacher, and for a number of years she was the principal of Gaylord High School. Most of the first 16 years of Shannon’s life were spent in Gaylord, Michigan, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics, and at home he constructed such devices as models of planes, a radio-controlled model boat and a wireless telegraph system to a friend’s house a half-mile away. While growing up, he also worked as a messenger for the Western Union company.”

    were presented to our observer, he would certainly infer design for it using the dFSCI procedure. Would he be right? That is not a logical necessity. If that phrase had been randomly generated, then you would have a case of a false positive. That is perfectly possible. It will never be empirically observed, but it is possible.

    So, when I say that dFSCI has 100% specificity, I am stating an empirical fact derived from observation, and not a logical consequence of my definition.

    A more interesting question is whether or not evolution can generate functional complexity, by your definition, in excess of 150 bits. If it can, as numerous examples in these threads suggest, then whether you call it dFSCI or not is immaterial — evolution will have been shown to be a sufficient explanation for our actual empirical observations.

    Do you really believe what you are saying? Nothing in this thread suggests anything like that. You are purposefully using ambiguous words such as “evolution” to disguise your lack of arguments.

    So I ask: what, in this thread or elsewhere, “suggests” that RV + true NS can generate functional complexity in excess of 150 bits?

    No, because you have defined dFSCI as something without a known deterministic explanation, hence any object with dFSCI whose origin is known can’t have a deterministic explanation — by definition.

    Again! No. The origin can be known and yet no deterministic explanation could be there. If, as already said, a phrase like the one I quoted above were generated in a random system, we would know the origin (we know that it was generated in that system, and that no operator wrote it), and yet we would have no deterministic explanation for it. In that case, and only in that case, we would attribute dFSCI (correctly) to the phrase, and we would (incorrectly) infer design (a false positive). Is that so difficult to understand, even for intelligent people like you?

    Shameful? Seriously?!

    Absolutely!

    And natural selection can often select for very specific functions, just like in Szostak’s experiment.

    It was intelligent engineering, in Szostak’s experiment.

    A simple example is the evolution of antibiotic resistance which is often seen in natural settings.

    A typical case of microevolution for minimal loss of information. That’s exactly what NS can do. And we all know that. Do you really believe that this is an argument?

    Only as a thought-experiment is it possible to count them.

    Mine was exactly that: a thought experiment. Its purpose was to show that, in principle, descriptive information can be defined as function and measured. Do you agree with that?

  336. Mike Elzinga:

    It has become abundantly clear that the people over at UD have absolutely no clue about what any kind of information is.

    No Mike, what is abundantly clear is that you are a liar- and perhaps senile.

  337. For data to become information, it must be interpreted and take on a meaning.

    With the clear consequence that there is no “meaningless” information. Something I have been saying for a long time.

  338. With Shannon there can be meaningless information. That is why I define CSI as Shannon information that has meaning/function and is also complex (see NFL).

  339. Mung @338:

    Well, I disagree that Shannon Information is not somehow “true” information. The problem is that people often do not understand what Shannon Information is about. We just need to ask, what is Shannon Information about.

    We have to distinguish the measurement from the thing measured. If what you are saying is that once we run an analysis on a string and come up with a measurement of the information carrying capacity of the string, then that measurement itself is “true” information, then sure, that measurement is new information we now have. And what does that information tell us? Well, it gives us (within certain parameters) an idea of the information carrying capacity of the string. It tells us nothing about the underlying information content of the string itself. That cannot be measured by Shannon methodology.

    Look at it this way. I have a book on my desk. Now I can measure the book, its size, weight, number of pages, even number of words per page. Wonderful. Now I have described certain aspects of the book, and, yes, that description is real information. But it tells us precisely nothing about the quantity, quality, functionality, etc. of the underlying information contained in the book.

    The problem is that so many people are trying to use a Shannon calculation in an attempt to ascertain something about the quantity or quality of the underlying information contained in the string. Beyond a simple statistical description of the string’s carrying capacity, it is impossible. Shannon information is useless for this. It is not a question of trying harder or getting more clever with our calculations or defining ourselves into rhetorical knots; it simply can’t be done.

    It is very unfortunate that the term “Shannon Information” has become common usage. A much less confusing and more accurate term would be “Shannon Measurement” or “Shannon Quotient” or something like that. Then maybe people wouldn’t be so confused into thinking that they can use a descriptive measurement of a string’s carrying capacity (Shannon Information) as a surrogate for the underlying content (information).

    This is why I feel it is critical in these discussions to keep in mind that the Shannon measurement (so-called “information”) is not really about the information in a string at all. It is simply a very basic first-order description of the string. Running a Shannon calculation on a digital string to determine how much information it contains is equivalent to weighing a book on a scale to determine how much information the book contains.

    —–

    (BTW, this is a somewhat different issue, but slightly similar to what we were discussing elsewhere — information “contained” in objects/events vs. information created in describing the objects/events.)

  340. Eric:

    It tells us nothing about the underlying information content of the string itself. That cannot be measured by Shannon methodology.

    I agree. The meaning of Shannon Information is independent of the meaning of the message being analyzed. It does not follow that Shannon Information is meaningless.

    Shannon Information is information about something else. It is still information. That’s my point.

    The problem is that so many people are trying to use a Shannon calculation in an attempt to ascertain something about the quantity or quality of the underlying information contained in the string.

    Most likely because they do not understand the nature of Shannon’s measure of the amount of information.

    But really I think it’s worse than that, or maybe it is the same thing and you are seeing it from a perspective I haven’t grasped yet. They think they can generate Shannon Information. Why do they think that?

    And then they think that if they can just generate enough Shannon Information that it qualifies as CSI. lol

    I’ve seen critics here argue that because they can measure “the information content” of a meaningless string of characters using “Shannon Information” that they have demonstrated that information can be without meaning. It’s true!

    I think we are in essential agreement on all the points in your post. Thanks for your comments.

  341. Mike Elzinga on October 12, 2012 at 7:02 pm said:

    They think taking a logarithm to base 2 endows a calculation with “information” even though they can’t tell anyone what this “information” is about

    I have a bit string 8 bits in length (possible values are ’0′ or ’1′). One bit is set to a 1 by some method of random selection, all others are set to a 0.

    Your mission, should you choose to accept it, is to discover which bit is set to a ’1′ by asking questions. In response to each question I will respond with a yes or a no answer.

    Are you confused about what the information you will be getting is about? What is the total amount of information you will need to receive in order to ascertain the location of the ’1′?

    If you choose the following strategy, how many questions, on average, will you need to ask to discover the location of the bit which is set to a ’1′?

    Is bit 0 set to ’1′?
    Is bit 1 set to ’1′?
    Is bit 2 set to ’1′?

    Can you calculate the amount of information per query?

    Can you think of a better strategy?

    If you want to maximize the amount of information obtained by each question consider the following:

    log2 8

    Does that describe an upper limit upon the amount of information you can get per query?

    Can we get the total amount of information without adding together the amount of information from each query? You think maybe using log base 2 has something to do with our ability to add each amount to come up with a total amount?

    Seriously. What a dolt.
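The bit-guessing game above can be simulated directly. A halving strategy (“is the 1 in the left half?”) extracts a full bit from each yes/no answer, so log2 8 = 3 questions always suffice; asking “is bit 0 set?”, “is bit 1 set?”, … averages more. A minimal sketch (the function name is mine):

```python
import math

def locate(target, n=8):
    """Find the index of the lone 1 among n bits by halving,
    counting the yes/no questions asked along the way."""
    lo, hi, questions = 0, n, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1            # each yes/no answer is worth at most 1 bit
        if target < mid:          # "yes, the 1 is in the left half"
            hi = mid
        else:                     # "no"
            lo = mid
    return lo, questions

# every position is found in exactly log2(8) = 3 questions
for t in range(8):
    assert locate(t) == (t, int(math.log2(8)))
```

The total information needed (3 bits) is independent of which position holds the 1, which is why the log base 2 amounts add cleanly across queries.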

  342. Zachriel:

    Gpuccio provided a definition of what he calls “dFSCI”, which, unfortunately, includes design in its definition, so can’t be used to argue for design.

    Please show us how his definition of dFSCI includes design in the definition.

  343. To Zachriel and onlooker (at TSZ):

    I realize only now that by mistake I have conflated in my answer #341 to Zachriel comments made by Zachriel and comments made by onlooker. I humbly apologize to both for that.

  344. Zachriel: And natural selection can often select for very specific functions.

    Mung: How very teleological.

    Just a consequence of language, but it is the proper term. You avoided the point, of course. The environment can be such that organisms with a specific trait can have a significant reproductive advantage leading to the trait becoming predominant in the population.

    Don’t blame your sloppy use of language on language.

    So is it the environment that is doing the selecting, or natural selection?

    Neither the environment nor natural selection selects for any specific function.

  345. Mung @346:

    Thanks for your thoughts. I know I’m preaching to the choir on the substance, but perhaps you’ll indulge me a couple of clarifications on the terminology:

    Shannon Information is information about something else. It is still information. That’s my point.

    Agreed. Once I take a measurement of a string I now have information about the particular characteristic of the string that I measured. Yet that information is separate from the underlying information in the string and teaches us essentially nothing about the information in the string. I think we agree on this.

    Most likely because they do not understand the nature of Shannon’s measure of the amount of information.

    We need to focus on this for a moment. A key point is that Shannon information isn’t even a “measure of the amount of information.” This is part of where they are getting off track. It is only a measure of the information carrying capacity. Again, I can weigh a book or even count the number of words in the book, but in doing so I have not measured the amount of information. At most what I have done is determine the potential amount of information the medium can contain. I have not measured the actual amount of information; and I certainly haven’t ascertained anything meaningful about the content of the information.

    —–

    I don’t doubt someone has a simple computer program that can generate “Shannon information,” but when we look under the hood we find that it isn’t really generating any information at all. Think of it this way: We can easily take a string and make random changes to it and end up with various strings that have more or less information carrying capacity. However, and this is the key, in doing so we haven’t generated any information. All we have done is generate random pipes. Then, as a separate exercise after the fact we measure the pipes and, lo and behold, some pipes are bigger than others (surprise, surprise). There is no information in the pipes. The so-called “Shannon Information” that we think we have generated is not information in the pipe at all; it is simply an after-the-fact measurement of the size of the pipes.

    Again, people have to understand that they are not measuring the amount of information in the string. Shannon calculations cannot and never will be able to do that — it is a fool’s errand.

    I haven’t been following the other thread at all (or even this one too closely), so I’m not exactly sure what the TSZ folks are claiming. If Lizzie’s or anyone else’s program generates random strings, some of which have greater information carrying capacity (i.e., have a higher Shannon measurement) than other strings, big deal. There are two important takeaways: (i) it is an exercise in irrelevance, (ii) if she (they) think it has anything to do with CSI, then they have no idea what they are talking about.

  346. keiths:

    It’s written for Linux. Use ‘gcc -std=c99 lizzie.c’ and run it in a terminal window. The parameters are all in a block of #defines near the top. It’s currently configured to show the population after every generation.

    Yes, I see you do the same thing as Lizzie. You don’t actually calculate CSI.

    // program stops when this fitness threshold is exceeded
    #define FITNESS_THRESHOLD 1.0e60

    while (genome_array[0].fitness < FITNESS_THRESHOLD) …

    I've only taken a brief look, but it looks like you dispensed with any phenotype. Not saying that's bad. I didn't feel a need to add that extra layer myself.

    I don't suppose your fitness function smuggles in any information either. Doesn't it help favor strings with a higher product?

  347. Mung- helloooo- the threshold holds the PRIZE and getting the prize means you have CSI!

    Don’t you know nuthin’? :)


  348. “There are reasons certain people are no longer allowed to post here.

    tonto:

    Because they ask scientific questions that can only be replied to by your side with faith in the literal interpretation of Genesis.

    Nope, not even close. Keep trying though you may get it yet.

  349. R0bb, Jerad, Mung-

    Are we clear what is meant by compressibility wrt CSI?

    WRT the works of Shakespeare being algorithmically incompressible, that would mean we cannot write an algorithm shorter than the text itself to produce them.

    For 500 1s, we could do so.

    My reference is the very paper that has been the focus of the TSZ ilk- pages 9-11
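The contrast between a run of 500 1s and a typical random sequence can be seen with any general-purpose compressor (a sketch; zlib only approximates the algorithmic compressibility discussed in the paper, but the gap is stark):

```python
import random
import zlib

ones = b"1" * 500     # highly compressible: a short description ("500 ones") suffices
rand = bytes(random.randrange(256) for _ in range(500))   # typical random string

print(len(zlib.compress(ones, 9)))   # a handful of bytes
print(len(zlib.compress(rand, 9)))   # roughly 500 bytes: essentially no savings
```

The repetitive string shrinks to almost nothing while the random one does not, which mirrors the compressible/incompressible distinction in the quoted passages.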

  350. Mung- helloooo- the threshold holds the PRIZE and getting the prize means you have CSI!

    oh. you mean they don’t have to do any calculation?

    Calculation is only required when they want us to do it?

  351. Zachriel:

    Per your own statements [gpuccio], there are some sequences with “functional complexity” and that some of these sequences have known causes! But you still conclude that those that don’t must be designed.

    That’s false. Given how long you’ve been debating against ID on the net you have to know this is wrong. You’re just another liar who has found a comfortable home at TSZ.

    gpuccio,

    You need to start resorting to copy and paste responses, since they just keep repeating the same old canards. What an intellectually bankrupt group.

  352. Mike Elzinga:

    Water has thousands of properties and functions that are not predictable by knowing the properties of hydrogen and oxygen. Properties and function emerge not only from the increased complexity itself, but from the interactions of emergent properties with other emergent properties extant in the environment.

    Emergence. Nice to know.

    So maybe species don’t evolve at all, maybe new species just “emerge”. I wonder how predictable that is.

  353. Joe (355),

    Are we clear what is meant by compressibility wrt CSI?

    WRT to being algorithmically compressible and the works of Shakespeare, that would mean we cannot write an algorithm to produce them.

    For 500 1s, we could do so.

    Are you sure pages 9–11 are the section you want? I could only find one place where compressibility was discussed, and that was on page 12:

    To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

    Dr Dembski seems to be saying that the non-random sequences are algorithmically compressible. He’s not talking about an algorithm to produce such sequences.

  354. Mung:

    gpuccio,

    You need to start resorting to copy and paste responses, since they just keep repeating the same old canards. What an intellectually bankrupt group.

    I am, indeed, very disappointed. When darwinists resort to the pseudo-argument of dFSCI circularity, it really means that they are desperate.

    Now, I must say that I fully expected such an attitude from some of them (just not to make names, Keiths), given their usual level of intellectual correctness (I was saying honesty, but let’s keep it civil, at least this time).

    But I really did not expect it from others (just not to make names, Zachriel), who are usually intelligent and correct in their discussions.

    If even Zachriel can’t see that there is no circularity in the dFSCI procedure, after I have given him explicit examples of how it is empirically capable of distinguishing designed strings from non designed strings with 100% specificity, then there is really no hope. There must really be something wrong in how these people reason.

    I knew that cognitive bias is strong and powerful in humans, but I really believed that it can be partially controlled in intelligent and goodwilled people. Evidently, that is not always the case.

  355. Keiths (at TSZ):

    Thank you for giving me a precious example of your cognitive bias:

    I’m not aware of any argument that succeeds in showing that unguided evolution cannot generate biological complexity.

    You see, the correct statement is:

    “I’m not aware of any argument that succeeds in showing that unguided evolution can generate biological complexity.”

    But obviously, for you ideologically committed guys, a non-design non-explanation is the default anyway (indeed, the only admissible truth).

  356. To Zachriel (at TSZ):

    It’s always getting worse:

    Heh. You couldn’t have stated the God of the Gaps more explicitly. Per your own statements, there are some sequences with “functional complexity” and that some of these sequences have known causes! But you still conclude that those that don’t must be designed.

    Complete nonsense.

    I don’t understand your reference to known causes. Either you misunderstand, or you don’t even read what I write with minimal attention.

    The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    The “known causes” enter the scene only when we want to test the procedure against real examples. So, someone takes n strings of sufficient length whose origin he knows because he was responsible for their collection. Let’s say that 5 strings are taken from books, of which we know the author. 5 strings are generated by a random generator.

    Then another person, who does not know the origin of the 10 strings, evaluates dFSCI in them. He will correctly attribute dFSCI to the first 5, and infer design. Take for example the paragraph about Shannon’s biography from Wikipedia. The questions are:

    a) Is the dFSI of the string high? Answer: Yes.

    b) Do we know a necessity mechanism that can output that paragraph? Answer: No.

    So, we infer that the piece was written by a designer. And we are right. The first person, who collected the strings, knows that it was written by someone, and can confirm that the inference is correct.

    For the 5 randomly generated strings, I will not be able to recognize any function (meaning) in them, and I will not infer design. Correctly. The first person will confirm that they were generated randomly, without any intelligent design.

    So, where does the necessity mechanism come into action?

    Suppose that one of the strings is a series of aaaaaa, of the same length as the Shannon biography. Will I infer design? No. Because such a string could be originated by a mechanism, such as the tossing of a coin which has the “a” symbol on both sides. Even if I did consider the string specified (for example, because it is compressible), I would not consider it complex (for the same reason: because it is highly compressible, its Kolmogorov complexity is very low). Even if the string was designed, that would be a false negative.

    Three different kinds of strings. Three different empirical assessments of dFSCI. Three independent confirmations from the person who knows the origin of the strings. No false positives; maybe a false negative.

    100% specificity.

    It’s simple, but you will probably not understand, or pretend that you don’t understand. I really don’t know, I have lost any hope to have a constructive discussion with you all.
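As a toy rendition of the blind test described above (this is not gpuccio’s actual procedure; “splits into dictionary words” is my crude stand-in for recognizing function, and the tiny word list is mine):

```python
import random
import string

# hypothetical mini-dictionary standing in for "recognizable meaning"
WORDS = {"the", "works", "of", "shakespeare", "are", "not", "random", "strings"}

def looks_functional(s):
    # crude stand-in for a function/meaning recognizer
    parts = s.lower().split()
    return len(parts) > 2 and all(w in WORDS for w in parts)

rng = random.Random(0)
designed = ["the works of shakespeare are not random"] * 5
randoms = ["".join(rng.choice(string.ascii_lowercase + " ") for _ in range(40))
           for _ in range(5)]

# the "observer" classifies blindly; the known origins then confirm the calls
for s in designed:
    assert looks_functional(s)        # designed strings recognized
for s in randoms:
    assert not looks_functional(s)    # random strings not flagged
```

The sketch only illustrates the shape of the test (blind classification checked against known origins), not the strength of any inference drawn from it.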

  357. To Zachriel (at TSZ):

    Actually, that’s precisely how we read gpuccio’s statements. He defines functional complexity, excludes those with known causes, then concludes the remaining sequences are designed. Keiths summarized it above.

    Your “reading” is terrible, and completely wrong.

  358. To Zachriel (at TSZ):

    So evolutionary algorithms can generate dFSCI, per your definition #2-4.

    Sure, why not? Dawkins’s Weasel (can we consider it an EA?) can generate the Weasel phrase. Not enough? Well, I suppose that a “big Weasel EA”, which has the whole text of Hamlet, can generate the whole text of Hamlet through RV and IS in a reasonable time. That would certainly be dFSCI from an EA. What a pity that the algorithm would be much more complex than the solution! OK, it could also print the text directly, but then it would probably no longer be an EA, but just an algorithm, and where is the fun?

    Your software can generate words. That’s fun again. What a pity that it has to have a whole dictionary inside to do that! But that’s not a problem, let’s just call the dictionary “a landscape”, and not an oracle that is part of the algorithm, and the fun starts again.

    So yes, EAs can certainly generate dFSCI. I have myself offered an example, maybe more interesting, of an algorithm that can generate a specified string more complex than the algorithm itself: the algorithm that computes the first “n” decimal digits of pi, for values of “n” big enough. In that case, and only in that case, the dFSI of the solution would be the dFSCI of the algorithm itself. The very big limitation here is that such an algorithm can only increase the FSI for one given function: as “n” grows, the dFSI in the string grows too, but the specification remains the same. No algorithm, of any kind, can ever generate dFSCI for a function about which it has no direct or indirect information.

    So, to sum up, if I see a copy of Hamlet, I will infer design. The fact that an EA that knows the text of Hamlet can output it is of no relevance. The text of Hamlet in the algorithm would be designed just the same, and its dFSI would be the same. It’s the same reason why copying a string of DNA is not creating new dFSCI. But I am afraid that you guys cannot even understand that simple concept.

  359. To Zachriel:

    Just to avoid silly criticisms, let’s clarify that when I say:

    “as “n” grows, the dFSI in the string grows too, but the specification remains the same.”

    What I mean is that the apparent dFSI in the string grows with its length. But the Kolmogorov complexity, which is in effect the true dFSI, remains the same (the complexity of the algorithm, if lower than the apparent complexity of the string).

    Just to avoid silly criticisms.
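[Ed.: gpuccio’s pi example above is easy to make concrete: a short, fixed-size program can emit as many digits of pi as you like, so the output string’s length grows without bound while the program — its Kolmogorov-style description — stays the same size. A minimal Python sketch using Machin’s formula; all function names are mine:]

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) by its Taylor series, to roughly `digits` decimal digits."""
    getcontext().prec = digits + 10          # working precision with guard digits
    power = Decimal(1) / x                   # holds (1/x)^(2k+1)
    total = power                            # the k = 0 term
    eps = Decimal(10) ** -(digits + 5)
    k = 1
    while power > eps:
        power /= x * x
        term = power / (2 * k + 1)
        total += -term if k % 2 else term    # signs alternate
        k += 1
    return total

def pi_digits(n):
    """First n decimal digits of pi via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, n) - 4 * arctan_inv(239, n)
    s = str(pi)                              # "3.14159..."
    return s[0] + s[2:n + 1]                 # n digits, decimal point dropped

print(pi_digits(50))   # the program is fixed; the output grows with n
```

[The specification (“digits of pi”) never changes as n grows — only the length of the output does, which is exactly the limitation gpuccio describes.]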

  360. Jerad:

    Are you sure pages 9 -11 is the section you want? I could only find one place that compressibility was discussed and that was on page 12:

    It starts on page 9, Jerad. Pages 10 and 11 cover exactly what I am talking about.

  361. Joe,

    It starts on page 9, Jerad. Pages 10 and 11 cover exactly what I am talking about.

    Well, I looked through pages 9 – 11 . . . perhaps you could be more specific. I found compression only mentioned twice in that section, on page 11:

    It is a combinatorial fact that the vast majority of sequences of 0s and 1s have as their shortest description just the sequence itself. In other words, most sequences are random in the sense of being algorithmically incompressible. It follows that the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.

    and on page 12:

    To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

    And both those quotes assert that nonrandom sequences are compressible and random ones are not. Is that your view?
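[Ed.: the claim in both quoted passages — that random sequences resist compression while patterned ones do not — is easy to check empirically. A minimal sketch using Python’s zlib, a practical compressor standing in for the algorithmic-information ideal:]

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed (lossless) encoding of `data`."""
    return len(zlib.compress(data, 9))

n = 10_000
random_bytes = os.urandom(n)     # no exploitable pattern
patterned = b"HT" * (n // 2)     # maximally regular

# Random data barely shrinks (it usually grows slightly, from header
# overhead); the patterned string collapses to a tiny description.
print(compressed_size(random_bytes), compressed_size(patterned))
```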

  362. Zachriel relies on the dictionary to define natural selection:

    We used the accepted terminology.

    natural selection: a natural process that results in the survival and reproductive success of individuals or groups best adjusted to their environment and that leads to the perpetuation of genetic qualities best suited to that particular environment.

    Natural selection is a result and does not result in the best of anything.

    Whatever is good enough survives and reproduces.

  363. Jerad,

    Do you see the short descriptions for the strings on pages 10 and 11?

  364. Joe (369)

    Do you see the short descriptions for the strings on pages 10 and 11?

    Yup, after which Dr Dembski writes:

    The sequence ( R ), on the other hand, has no short and neat description (at least none that has yet been discovered). For this reason, algorithmic information theory assigns it a higher degree of randomness than the sequences (N), (H), and (A).

    In other words, random sequences are less compressible (if at all) compared to non-random sequences. Just like in the other two quotes.

  365. gpuccio:

    I knew that cognitive bias is strong and powerful in humans, but I really believed that it could be partially controlled in intelligent and goodwilled people.

    Yes, it can be controlled, among people of good will.

    Maybe you just caught them on a bad day.

  366. To Zachriel (at TSZ):

    Previously, you said “no deterministic explanation for the string is known”. Now you use “necessity mechanism”. We suggested there was confusion with your terminology.

    Why? “Deterministic explanation” and “necessity mechanism” mean the same thing for me. What is the problem?

    Is evolution a necessity mechanism?

    “Evolution”, as I have said many times, means nothing unless it is specified in more detail.

    If you mean the neo darwinian explanation for biological information, it is obviously an explanation based on RV + NS acting sequentially. The RV part is a probabilistic explanation of the origin of new arrangements, the NS part is a deterministic effect that intervenes after RV, modifying the scenario through differential reproduction. That’s why, as I have written so many times, and as I have also modeled, the effects of RV and the effects of NS must be considered separately for any proposed neo darwinist scenario. But the effects of NS can be taken into account only if and when NS is demonstrated (that is, when naturally selectable intermediates are shown to exist).

    I really can’t see where the confusion is.

    You seem to imply so when you exclude protein relatives from the set of dFSCI.

    I am not sure what you mean. A transition from a protein to another similar one, that implies only a few bits of modification, is not a transition that exhibits dFSCI, because it is not complex enough. It can be considered a microevolutionary event, of low functional complexity. Is that what you mean?

    Just some friendly advice: if you think there is “confusion” in my terminology, you could just ask for clarification, instead of attacking me for things I have never said. I am always willing to clarify my thought. I believe that if you read what I write with a minimum of attention and respect, you will probably understand what I mean.

    I am always respectful of different motivated opinions, like yours about the possibility of traversing the protein landscape, but I definitely don’t like having to answer repeated accusations of “circularity” which have absolutely no logical consistency and justification, if not in misunderstanding or (that’s not for you, I hope) bad faith.

  367. To Zachriel (at TSZ):

    Word Mutagenation can’t address biological evolution specifically, but it can address general statements about evolutionary processes, such as “isolated islands of function in vast seas of non function”.

    The fact remains that Word Mutagenation includes a dictionary as an oracle, and the dictionary is part of the algorithm, and should be included in the computation of its complexity.

    Your point is obviously that the role played by the dictionary in your software is performed by NS, acting as an estimator of protein function in the biological context. I understand that point, but I also understand that it is not based on any evidence. NS is not a library of sequences, while the dictionary is exactly that. You may believe that functional sequences that are naturally selectable are so connected that NS can act as a dictionary acts for words. I can find no support for such a strange assumption in all that we know about proteins, but I am happy to accept that point as “controversial”.

    But if you didn’t know the evolutionary origin of nylonase, you would conclude design, a false positive. Worse, you would know it with certainty!

    No. First of all, I never “know things with certainty” in science, and I believe that this should be true of all serious scientists.

    I would definitely make a design inference for nylonase, and I would be right. The protein, indeed, does exhibit dFSCI. The fact that it is derived from penicillinase does not change that. The penicillinase-nylonase group of proteins clearly exhibits dFSCI. Natural history can explain that penicillinase is the older form, and that nylonase is a recent variation, implying only one or two mutations at the active esterase site, with a shift in the affinity for specific substrates of the same kind.

    That’s why I always speak about “basic protein domains”, and not about a single protein. Similarly, Durston computes functional information for protein families. Similarly, Axe is interested in the evolution of protein domains.

    I have always admitted that if you can show a real ladder of intermediates that can build up protein domains through microevolutionary events, you win. But you have to do exactly that, not just invoke that “it could be possible in principle to do that, but unfortunately any trace of the intermediates has been cancelled, according to our theory, and unfortunately it is impossible to find those intermediates in the lab, according to our theory, but our theory is so beautiful, why should we give evidence for it?”

    Frankly, I have no respect for theories like that.

    Moreover, if I remember well, it was you that, a short time ago, were so sure, with Ohno, that nylonase had originated as the result of a sudden frameshift mutation. Maybe you knew it with certainty! :)

  368. “I really can’t see where the confusion is.”

    I can. They don’t understand ID. They don’t understand their own theory of evolution. So of course they can’t understand what you’re saying. They lack the necessary mental concepts and categories.

  369. gpuccio:

    …but unfortunately any trace of the intermediates has been cancelled, according to our theory, and unfortunately it is impossible to find those intermediates in the lab, according to our theory…

    Does it seem to you like they are doing everything they can to avoid addressing the missing functional intermediates?

    Surely they must have once existed? Do they have an explanation for why they were lost? At least Darwin could appeal to a spotty fossil record to “explain” the absence of intermediates.

  370. Jerad:

    In other words, random sequences are less compressible (if at all) compared to non-random sequences. Just like in the other two quotes.

    And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description

  371. I would love to hear in what way saying nylonase is the product of design is a false positive.

  372. Zachriel:

    Word Mutagenation can’t address biological evolution specifically, but it can address general statements about evolutionary processes, such as “isolated islands of function in vast seas of non function”.

    But it canNOT address general statements about blind and undirected chemical process. And that is all that matters.

  373. Zachriel:

    But if you didn’t know the evolutionary origin of nylonase, you would conclude design,

    Especially knowing the evolutionary origin of nylonase I say it evolved by design.

  374. Zachriel on October 13, 2012 at 2:20 pm said:

    I see they are still confused about fitness landscapes over at TSZ. And Joe Felsenstein thinks it’s irrelevant.

    They are traversed, not laterally, but vertically through inheritance.

    You’re confused. Lateral is differences in genomes. Vertical is rates of reproduction.

    This relationship is represented by a fitness landscape which returns relative fitness for a given phenotype.

    And that’s why it’s neither a model of evolution nor a model of any evolutionary process.

    You could use a physical environment instead, such as experiments with protein evolution, bacteria in the lab, or birds in the wild.

    You could. But then fitness would take on a different meaning. So, you’re equivocating.

    You seem to be confusing the model with the thing being modeled.

    You seem to be confusing the thing being modeled with the model.

    Word Mutagenation can’t address biological evolution specifically, but it can address general statements about evolutionary processes, such as “isolated islands of function in vast seas of non function”.

    No, it can’t.

  375. Zachriel finally gets something right:

    (Handwaving isn’t an argument.)

    Then why do you do it constantly? If it weren’t for your blatant misrepresentations, handwaving would be all you have.

  376. So apparently some members over at TSZ prefer “fitness wells.” One has to wonder why.

    A corollary of Fisher’s theorem is that, assuming that natural selection drives all evolution, the mean fitness of a population cannot decrease during evolution (if the population is to survive, that is).

    According to Darwin and the Modern Synthesis, movement across valleys is forbidden because it would involve a downhill component.

    – Eugene V. Koonin, The Logic of Chance

    For those following along, the population in a GA is under constant selection.

  377. Fitness

    Fitness (often denoted w in population genetics models) is a central idea in evolutionary theory. It can be defined either with respect to a genotype or to a phenotype in a given environment. In either case, it describes the ability to both survive and reproduce, and is equal to the average contribution to the gene pool of the next generation that is made by an average individual of the specified genotype or phenotype. If differences between alleles of a given gene affect fitness, then the frequencies of the alleles will change over generations; the alleles with higher fitness become more common. This process is called natural selection.
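[Ed.: the quoted definition — alleles with higher fitness become more common over generations — can be illustrated with a one-locus toy model. A minimal sketch of deterministic haploid selection; all names and numbers are illustrative, not drawn from the thread:]

```python
def allele_freq_trajectory(p0=0.1, w_a=1.05, w_b=1.0, gens=200):
    """Deterministic haploid selection: allele A has fitness w_a, allele B
    has fitness w_b. Each generation, frequencies are reweighted by fitness
    (the textbook recursion p' = p*w_a / (p*w_a + (1-p)*w_b))."""
    p, traj = p0, [p0]
    for _ in range(gens):
        p = p * w_a / (p * w_a + (1 - p) * w_b)
        traj.append(p)
    return traj

traj = allele_freq_trajectory()
print(round(traj[0], 3), round(traj[-1], 3))   # the fitter allele spreads
```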

  378. Joe (376):

    And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description

    I guess you’ll have to argue with Dr Dembski on that since he clearly states at least three times in the paper you referenced that non-random sequences are more compressible than random ones.

    Unless you think that the works of Shakespeare are random sequences . . .

  379. Mung: “For those following along, the population in a GA is under constant selection.”

    Zachriel:

    That’s not correct. Genetic algorithms can include drift, chance, relaxed or no selection.

    Yes, yes. Of course they can. They can include pink elephants for all I care. But it does not follow that they actually do.

    Mung: For those following along, the population in a GA isn’t made up of pink elephants.

    Zachriel: That’s not correct. Genetic algorithms can include pink elephants.

    sigh

    I could have chosen better wording. Lizzie’s program. keiths’ program. Probably even in your Word Mutagenation program. Constant selection.

    Is so. (Handwaving isn’t an argument.)

    You didn’t put forth an argument, you put forth an assertion. I replied in kind. Apparently handwaving is good enough if you’re the one doing it.

    Zachriel:

    Vertical means through inheritance. The connection between disparate groups can be found in common ancestors.

    Wikipedia:

    In evolutionary biology, fitness landscapes or adaptive landscapes are used to visualize the relationship between genotypes (or phenotypes) and reproductive success. It is assumed that every genotype has a well-defined replication rate (often referred to as fitness). This fitness is the “height” of the landscape. Genotypes which are very similar are said to be “close” to each other, while those that are very different are “far” from each other.

    The two concepts of height and distance are sufficient to form the concept of a “landscape”. The set of all possible genotypes, their degree of similarity, and their related fitness values is then called a fitness landscape.

    Zachriel:

    That’s not quite correct as the statement only applies to an infinite population. In a finite population, fitness can decrease even if natural selection drives all evolution (which it doesn’t).

    Well, let’s just throw out all of theoretical population genetics then.

    According to Fisher’s theorem, a population that evolves by selection only (technically, a population of an infinite size – infinite populations certainly do not actually exist, but this is convenient abstraction routinely used in population genetics) can never move downhill on the fitness landscape.

    – Koonin, Op. cit.

  380. To Zachriel (at TSZ):

    You ask, again:

    Which emphasizes that you are excluding known evolutionary transitions per #4 of your definition. Is that correct? Is your “deterministic explanation” dichotomous with design?

    I am not sure what your problem is. I have said that the neo darwinian mechanism is mixed, RV + NS. The functional complexity of a string, or of a transition, limits what RV can obtain. If evolutionary transitions that include a NS deterministic effect are documented, we have to take them into account. They do not exclude a design inference if there are still transitions that depend exclusively on RV and are beyond the threshold.

    If a string can be entirely explained by a necessity mechanism already included in the system, then no dFSI can be attributed to it. But that is never the case with RV+NS, because the new arrangements are always generated by RV, and NS can only act on what has already been generated.

    Therefore, in any “evolutionary transition”, there will always be a RV part, or parts, that must be evaluated in terms of dFSI.

    Let’s take the case of nylonase. We can split the evolution of nylonase into two separate steps:

    a) The emergence of the penicillinase structure, which could be identified with the emergence of the beta-lactamase/transpeptidase-like fold/superfamily.

    b) The recent emergence of nylonase from penicillinase.

    Assuming that b) implies one or two mutations as its RV part, and that the variant was naturally selectable because of its ability to degrade nylon, we can say that the second transition has very low dFSI, and does not warrant a design inference. It is probably a microevolutionary event, compatible with pure RV + NS, even if other alternatives (for instance, active adaptation) could be considered.

    For the emergence of the penicillinase structure, instead, a design inference is warranted. Indeed, the structure is extremely complex (an E. coli penicillinase is almost 300 AAs long), and no credible evolutionary path with selectable intermediates is available.

    So, I hope it is clear that there is nothing “dichotomous” in my definition. All my definitions are empirical, and not purely logical.

    The idea is: we need an explanation for the functional complexity we observe. Both RV and deterministic effects such as NS can contribute to an explanation. While functional complexity is empirically a marker of design, still we can accept that some functional complexity may emerge from RV or from the interaction of RV and NS. But we have to verify what these things can do, and what they cannot do.

    For pure RV, the limit is essentially probabilistic: RV cannot, alone, achieve extremely improbable functional results. The evaluation of this limit relies essentially on the calculation of the dFSI of the observed string.

    For NS, we must have a real scenario, with real proposals that can be analyzed. Then, we can integrate the possible deterministic effects of the proposed, realistic scenario, on the RV components of the event, and calculate the final probability of the whole explanation (IOWs, calculate how the deterministic effect of NS changes the probabilistic scenario due to pure RV).

    I have given an example of how that can be done here:

    http://www.uncommondescent.com.....selection/

    starting more or less at post 62 and going on to the end (especially the last posts).

  381. Jerad:

    I guess you’ll have to argue with Dr Dembski on that since he clearly states at least three times in the paper you referenced that non-random sequences are more compressible than random ones.

    English text is not random.

    Over in the other thread CentralScrutinizer posted results from a simple Huffman encoding. In your opinion, is this the same sort of algorithmic complexity Dembski has in mind in his paper? Is that what Dembski means by algorithmically compressible?

  382. olegt:

    Will he change his conclusion when he reads the description of the program?

    Can’t say I blame you if you’re not keeping up with the conversations over here.

    The description of the program is the problem. We need a description of the pattern. Describing the algorithm that produced the pattern is not a description of the pattern.

  383. Zachriel on October 13, 2012 at 2:34 am said:

    A randomizer is sufficient to generate Shannon Information.

    Information about what?

    If I take a 504 bit string and “randomize” it, I’ve generated Shannon Information?

    How much?

    Actually, that’s precisely how we read gpuccio’s statements.

    How you read his statement doesn’t turn your statement from false to true.

  384. Zachriel:

    The fitness landscape is just a table of fitness values for each phenotype.

    Ah, progress! Post some examples (fitness values) from your program. Here are some examples from OMTWO:

    http://complexspecifiedinformation.appspot.com/

  385. gpuccio:

    Both RV and deterministic effect such as NS can contribute to an explanation.

    Seriously, I think folks over there at TSZ confuse the pattern with the process.

    NS cannot, even in principle, explain the origin of new traits.

    At best, it can explain why they persisted and/or spread through the population.

    In the end they are left with, it just happened, that’s all.

    I still want to know, how does the nylon make it through the cell membrane?

    For pure RV, the limit is essentially probabilistic: RV cannot, alone, achieve extremely improbable functional results.

    It’s all they have.

    NS is not a creator. At best it’s a spreader.

    RV has to throw up something functional for NS to even take notice.

    Say that RV tosses up some functional selectable element _A_.

    How does the existence of _A_ change the probability that RV will throw up another functional element _B_?

    If it doesn’t, then aren’t the probabilities independent and therefore multiplicative?
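[Ed.: Mung’s closing question can be checked with a toy simulation. A minimal sketch — the 1-in-20 probabilities are made-up illustrative numbers, not biological estimates — showing that when the draw for B is unaffected by A, the observed joint frequency tracks the product of the marginals:]

```python
import random

def joint_vs_product(p_a=0.05, p_b=0.05, trials=200_000, seed=1):
    """Simulate two independent 'hits' and compare the observed
    frequency of (A and B) with P(A) * P(B)."""
    rng = random.Random(seed)
    hits_a = hits_b = hits_ab = 0
    for _ in range(trials):
        a = rng.random() < p_a
        b = rng.random() < p_b    # the existence of A does not change this draw
        hits_a += a
        hits_b += b
        hits_ab += a and b
    return hits_a / trials, hits_b / trials, hits_ab / trials

pa, pb, pab = joint_vs_product()
print(pa, pb, pab)   # pab stays close to pa * pb
```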


  386. And teh works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description

    Jerad:

    I guess you’ll have to argue with Dr Dembski on that since he clearly states at least three times in the paper you referenced that non-random sequences are more compressible than random ones.

    Unless you think that the works of Shakespeare are random sequences . . .

    I am pretty sure I just said that:

    And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description

  387. Earth to toronto- Lizzie’s example does not produce CSI. Not by Dembski’s definition and definitely not by any definition I have read from an ID proponent.

    You are confused.

  388. So, as an example of someone over at TSZ who thinks mere ‘compressibility’ is enough to identify a specification I offer the following:

    madbat089 on March 17, 2012 at 4:51 pm said:

    This person also argues that Lizzie’s program follows the exact same logic.

    Forget for now whether or not Lizzie’s program follows the same logic. Is that even what Dembski says?

    Dembski:

    Even so, for such after-the-event patterns, some additional restrictions needed to be placed on the patterns to ensure that they would convincingly eliminate chance.

  389. toronto:

    If I have dFSCI above an agreed-upon UPB, I can safely say that the string containing that dFSCI, exhibits CSI, and that’s according to what I have read from gpuccio, and with different terminology, KF.

    All that is true. However Lizzie did not generate dFSCI- there isn’t any function, no meaning, nothing.

  390. Zachriel:

    We just recently described a simple evolutionary algorithm that includes no selection whatsoever.

    Then what makes it an evolutionary algorithm?

    It shows how diverging descent with modification leads to a nested hierarchy.

    Doubtful. And you still haven’t demonstrated any understanding of nested hierarchies.

  391. Patrick (aka MathGrrl) on March 18, 2012 at 12:41 am said:

    It turns out that the number of generations required is non-linearly dependent on the population size.

    With a population of 1000, I only got to 9.2x10e59 after 300,000 generations. Bumping the population size up to 10,000 reliably generates a solution over 10e60 in around 700 generations. My code and results are available here.

    I do hope the ID proponents appreciate that crossover (SEX!) generates CSI even faster than asexual evolutionary mechanisms.

    For certain we appreciate how intelligent tweaking can lead to results not otherwise simply achievable by random changes.

    I just love how they brag about how their intelligent actions can lead to unguided results of low probability.

    I’ll be looking at his/her code to see if I can find where he/she calculates the amount of CSI generated.

    The web page has a title: Evolving CSI

    haha.

  392. olegt on March 19, 2012 at 11:34 am said:

    In the same vein, Elizabeth is investigating whether CSI can arise through natural selection without asking where the fitness landscape came from. I think this is an entirely reasonable approach. For some reason, however, IDers dismiss such studies as trivial. Well, they aren’t.

    Of course you don’t ask. You know where it came from.

    That’s why we think it’s trivial. doh.

  393. Patrick: “Run the GA engine against the PROBLEM until the fitness is maximized.”

    heh

    No design here. Move along.

  394. Patrick: “Return a fitness comparator function that takes two genomes and returns T if the first is more fit according to the characteristics of the PROBLEM.”

    heh.

    No design here. Move along.

  395. Patrick: “Determine the number of bits required for a genome to solve the specified coin product problem.”

    heh.

    No design here. Move along.

  396. ;; “Imagine a coin-tossing game. On each turn, players toss a fair coin
    ;; 500 times. As they do so, they record all runs of heads, so that if
    ;; they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3,
    ;; 1, 4, representing the number of heads in each run.
    ;;
    ;; At the end of each round, each player computes the product of their
    ;; runs-of-heads. The person with the highest product wins.
    ;;
    ;; In addition, there is a House jackpot. Any person whose product
    ;; exceeds 10^60 wins the House jackpot.
    ;;

    Imagine a huckster who only pretends to toss a fair coin 500 times each run.
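[Ed.: the quoted game is straightforward to simulate. A minimal sketch (function name is mine) that plays honest rounds, useful for seeing how far below the jackpot blind tossing typically lands:]

```python
import random

def runs_of_heads_product(tosses=500, seed=None):
    """One round of the quoted game: toss a fair coin `tosses` times and
    return the product of the lengths of all runs of heads."""
    rng = random.Random(seed)
    product, run = 1, 0
    for _ in range(tosses):
        if rng.random() < 0.5:   # heads: extend the current run
            run += 1
        else:                    # tails: close out the run, if any
            if run:
                product *= run
            run = 0
    if run:                      # a run may end at the final toss
        product *= run
    return product

# Honest play essentially never reaches the 10^60 jackpot:
best = max(runs_of_heads_product() for _ in range(1_000))
print(best, best > 10**60)
```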


  397. All that is true. However Lizzie did not generate dFSCI- there isn’t any function, no meaning, nothing. “

    Toronto:

    Yes, there is “specific functionality” and that is a product of values in the string that result in a number larger than “1.0e60”.

    Umm that is not functionality…


  398. : And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description

    Zachriel:

    No, that is not correct.

    You are wrong.

    Random sequences are generally incompressible, but Shakespeare is quite compressible, one of many simple tests of randomness.

    Then please write an algorithm that can generate the works of Shakespeare. Oh shut up, because you are talking about the wrong type of compression.


    And you still haven’t demonstrated any understanding of nested hierarchies

    Zachriel:

    You don’t seem inclined even to define sets, much less a nested hierarchy.

    That’s you, in a nutshell-> afraid to define your sets and very afraid to define your nested hierarchy. OTOH I have presented you with examples in which everything was well defined.

  399. Mung @389 re: Zachriel:

    Again, this is the key (reiterating #351): a Shannon calculation does not measure the amount of information in a string. It is simply a statistical measure of potential carrying capacity (or, on the other side of the coin, if we have a pre-existing string of information, compressibility (i.e., how much capacity is required for a given string)).

    People can randomize all they want and can no doubt come up with some increasing amount of pipeline capacity (based on some “fitness” function) and it is entirely irrelevant to the generation of CSI. The whole ‘GA-generates-Shannon-Information’ discussion as it relates to CSI is a red herring, a rabbit hole, a dead end, a distraction, an irrelevancy.

  400. Eric:

    Again, this is the key (reiterating #351): a Shannon calculation does not measure the amount of information in a string.

    Sure it does! People are just confused about what that information is about. They are confused about the meaning of Shannon Information.

  401. Zachriel:

    We just recently described a simple evolutionary algorithm that includes no selection whatsoever. It shows how diverging descent with modification leads to a nested hierarchy.

    I missed it. Where was it posted?

    A no selection model would not favor the preservation of any particular trait. Agreed?

    Not just an assertion, but an algorithm that anyone can follow to verify the assertion, even recreating the algorithm independently.

    So we can toss the dictionary and re-run your program?

    As we said, we are using the word vertical to refer to a common population diverging and climbing separate peaks, rather than a population traversing laterally from one peak to another.

    No, you didn’t say that.

    How close together are these separate peaks?

    Why isn’t the population evolving together?

    As we said above, and as your citation supports, the statement that fitness can never decrease only applies to infinite populations (which don’t exist, but provide a useful limit) *and* when natural selection drives all evolution (which it doesn’t). When a population is finite then fitness can decrease even if natural selection drives all evolution (which it doesn’t).

    Show us the runs from your program along with the mean fitness.

  402. Joe:

    Lizzie did not generate dFSCI- there isn’t any function, no meaning, nothing.

    Toronto:

    Yes, there is “specific functionality” and that is a product of values in the string that result in a number larger than “1.0e60”.

    See this line in the program: #define FITNESS_THRESHOLD 1.0e60

    sigh. and just when I was starting to like you.

    Lizzie’s program is written in MatLab. There is no #define FITNESS_THRESHOLD 1.0e60.

    Here is Lizzie’s code:

    while MaxProducts<1.00e+58

  403. Zachriel on October 14, 2012 at 2:32 am said:

    Keep in mind that Shannon Information is the theoretical basis of all modern digital communications, including the Internet.

    Digital communication was taking place long before Shannon.

    Why would a random sequence have more Shannon Information?

    I’m asking you.

    I have a randomly generated string. I ‘randomize’ my randomly generated string. According to you, I’ve generated “Shannon Information.”

    How and why?

    Mung: If I take a 504 bit string and “randomize” it, I’ve generated Shannon Information?

    Zachriel:

    Yes, that is correct. Do you understand why?

    No. I don’t.

    Say my string has a method that calculates the amount of Shannon Information:

    v1 = str.si
    puts v1

    Now I “randomize” it:

    str.randomize!

    Now I recalculate the amount of Shannon Information:

    v2 = str.si
    puts v2

    You say that the second value will always be greater than the first. Is that what you are saying?
    __________
    nb: Real codes have some degree of redundancy, so do not have the calculated maximum of capacity that is worked out for a flat random distribution. Which ideal value is irrelevant to the real world absent demo of a code that conveys meaningful, specifically functional, linguistic coded messages and has that distribution. KF

  404. Mung @406:

    OK, sure, whatever. Just like counting the number of pages in a book measures the amount of information in the book. Right.

    I’m a drive-by commenter on this thread anyway, so I’ll go back to lurking. Was just trying to inject some reason into the discussion. Sigh . . .

    Well, carry on with the discussion. It is quite obvious that it is possible to randomly generate strings with more or less carrying capacity (SI). So good luck talking sense into anyone as long as you are willing to grant that they are measuring the amount of actual, real “information” that is contained in the string . . .

  405. Zachriel:

    Random sequences are generally incompressible, but Shakespeare is quite compressible, one of many simple tests of randomness.

    Random sequences of what?

    I propose a test:

    Identify a text of Shakespeare. Randomly select n characters from that text. Randomly select a portion of contiguous text with length equal to n.

    Compress each using the same compression algorithm.

    Decompress and compare to the original to validate that the compression was lossless.

    Run multiple times and store averages.

    Display results
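    The proposed test is easy to run with any off-the-shelf lossless compressor. A simplified Ruby sketch using the standard zlib library (the sample passage, run length, and use of a full shuffle in place of random character selection are my own choices):

```ruby
require 'zlib'

# Compare the same lossless compressor on real English prose versus the
# identical characters scrambled into a random order.
text = ("To be, or not to be, that is the question: " \
        "Whether 'tis nobler in the mind to suffer " \
        "The slings and arrows of outrageous fortune. ") * 20

scrambled = text.chars.shuffle.join   # same symbols, structure destroyed

sizes = [text, scrambled].map do |s|
  packed = Zlib::Deflate.deflate(s)
  # Verify the compression was lossless, as the protocol requires.
  raise 'not lossless' unless Zlib::Inflate.inflate(packed) == s
  packed.bytesize
end

puts "English: #{sizes[0]} bytes, scrambled: #{sizes[1]} bytes"
```

    On runs of this kind the English passage compresses markedly better than its scrambled counterpart, which is the regularity the test is probing.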

  406. Eric,

    Please don’t assume your input isn’t appreciated, it is. I am truly interested in what you have to say.

    OK, sure, whatever. Just like counting the number of pages in a book measures the amount of information in the book. Right.

    No! You know better than this.

    Counting the pages in a book has no relationship to information carrying capacity.

    That’s a seriously flawed analogy.

    It is quite obvious that it is possible to randomly generate strings with more or less carrying capacity (SI).

    Honestly, this makes no sense to me. What’s the encoding?

    If the information source can only generate two symbols, 0 and 1, and either symbol can be generated with equal probability, 1/2, how does one string of length 504 contain more Shannon Information than any other string of length 504 from the same information source?
    ________
    Real codes are not flat random in distribution of symbols, where meaningful, coded messages are conveyed. This implies that the info carrying bit bucket cannot in practice be filled to the brim. KF
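    Mung's arithmetic here is straightforward to check. For a memoryless source emitting 0 and 1 with equal probability, every particular n-symbol string has the same probability, (1/2)^n, and therefore the same self-information; no 504-bit output of such a source carries more than any other. A one-liner sketch:

```ruby
# Every specific n-symbol string from a fair binary source has
# probability (1/2)^n, hence self-information -log2((1/2)^n) = n bits.
n = 504
p_any_string = 0.5**n
self_info = -Math.log2(p_any_string)
puts self_info   # => 504.0, identical for every 504-bit string
```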

  407. Toronto:

    So we don’t actually “calculate” CSI, we “calculate” “dFSCI” which is compared to a “threshold”.

    How do you calculate dFSCI and what is the threshold you compare it to?

  408. Toronto:

    keiths has posted a great comment with his “bucket of CSI” analogy.

    An IDist has a bucket of things containing CSI that have no known “deterministic mechanism” explaining their existence.

    Help me out here. What’s in the bucket and why is whatever is in the bucket in the bucket?

    As soon as he finds a reason for a thing’s existence, he takes it out of the bucket.

    So when the IDist finds out that the object really is designed, he takes it out of the bucket?

    ok. that makes sense, sort of.

    What’s left in the bucket?

    Stuff for which we have a reason to infer design?

    All the things he can’t explain! :)

    Well, no. What’s in the bucket and why is whatever is in the bucket in the bucket?

    What does he do next?

    Who cares? Design is objective.

    He attributes their existence to an “intelligent designer” that he can’t explain.

    The stuff still in the bucket, you mean?

    Why is it in the bucket?

    Please tell me you’re over 18. For some reason I feel like I’m beating up on children.

  409. Toronto:

    “dFSCI” is not just the fact that it is in this case a 500 bit string, but the “specific functionality” of the 500 bit pattern, which in this case is the information that results in a “product of terms embedded in the pattern, that exceeds THRESHOLD”.

    Assume that I know that a 504 bit string does not exhibit dFSCI merely because it is a 504 bit string.

    When would such a string exhibit dFSCI?

    …the information that results in…

    Where is this information? In the string?

  410. // total population size
    #define POPULATION_SIZE 2

    population size of 2? really? why?

  411. Mung:

    How does the existence of _A_ change the probability that RV will throw up another functional element _B_? If it doesn’t, then aren’t the probabilities independent and therefore multiplicative?

    No, they are not. This is not an easy point, but it is important.

    Probabilities are independent and multiplicative as long as the two events have to happen independently in the same individual or clone of the original population.

    So, let’s say that in a population of 10^15 bacteria, and in a certain time span, event A has a probability of, say, 10^-9 (a complexity of 9 bits), and event B too. The total probability of having A and B in any individual clone of the population is then multiplicative, therefore 10^-18, 18 bits.

    But if A (or B), after one of them happens, expands to the whole population in a short time through a deterministic effect like NS, then the scenario changes. The probabilistic resources for the second event are multiplied by 10^9.

    I have wondered how such a scenario could be evaluated probabilistically, and I have offered what I believe is a good approximation to the problem, in the posts many times linked, which, making extreme assumptions in favour of the NS mechanism (a perfect intermediate, perfect and quick expansion to the whole population), uses the binomial distribution to compute the probability of having two events of similar probability in a certain time span.

    The results show clearly that NS can indeed lower the probabilistic barrier, in a significant degree.

    That’s why I have always admitted that NS can in principle help explain biological information. The problem is not: it can’t. The problem is: how much can it help?

    The real reason why NS completely fails is that complex functions are not deconstructible into simpler intermediates, each of them naturally selectable. We have to stick to real reasons, and not to imagination.

    NS can do really very little, because very few new arrangements generated by RV are naturally selectable, and those that are are simply variations of the existing information and of the existing functions, and in no way constitute steps towards new, not yet existing functions. Indeed, all cases of NS observed are cases of microevolution, one or two bits, function conserved or slightly changed.

    The most classical examples of NS (antibiotic resistance, expansion of Hb S due to malaria) are indeed examples of protection from extreme environmental attacks by means of a minimal loss of existing information, as Behe explains very well (the “burning the bridges” argument). In those cases, not even a true new biochemical function is created, and the survival advantage is merely due to a loss of functions (or structures) that already existed.

    None of that helps in generating new complex sequences for new biochemical functions that did not exist before. Therefore, NS is a myth where macroevolution is concerned.

    Neo darwinists have been dreaming for decades that macroevolution is a sum of naturally selectable microevolutionary events. That is simply not true. They don’t find the intermediates they are looking for, not because they have been cancelled by their theory, but because they simply do not exist.

    That’s also the reason why all neodarwinist arguments are made in terms of generic traits that would confer reproductive advantage. They hate to reason in terms of what I call “local functions”. A local function is the true biochemical function that makes a protein functional. The local function of an enzyme is to accelerate a biochemical reaction. That, in itself, has nothing to do with survival or reproduction. Darwinists never ask themselves: how did this local function come into existence? They reason in terms of abstractions, because in any other way their reasonings would appear for what they are: wishful thinking.

    One of the best papers that IMO support the ID views is the famous “rugged landscape” paper. In a context extremely favourable to true NS (an existing function, altered artificially, that must be retrieved, and a viral setting) the authors conclude:

    In practice, the maximum library size that can be prepared is about 10^13. Even with a huge library size, adaptive walking could increase the fitness, ~W, up to only 0.55. The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms.

    Darwinists should seriously reflect on this empirical evidence, before fantasizing about what true NS can really do.

  412. F/N: It seems the objectors, again, need to bone up on the design inference explanatory filter (also cf here in the ID foundations series at UD and nos. 29 & 30 in the weak argument correctives). They have made so many strawman caricatures that they are confusing themselves. In particular, design is only inferred on tested, empirically reliable signs, e.g. digitally coded functionally specific complex info such as text strings in this thread or strings in functional programs. The D/RNA strings that produce proteins are coded, are specifically functional, are known to come in deeply isolated fold domains in potential AA chaining space, and are known to be complex, quite often well beyond any reasonable threshold for exhausting blind search resources. Step one: the relevant aspect is examined, and if it can be explained on observed mechanical necessity it is assigned to law; proteins are highly contingent. Step two: can chance-based statistical distributions explain it? No, as we are not drawing from the bulk, but from special, functional zones isolated to 1 in 10^60 or more. The reasonable explanation, then, is design. KF

    PS: GP, a decimal digit has 10 possibilities and can store up to 3.32 bits of info on avg. (And, that is a Shannon, info capacity metric.)

  413. Mung (387):

    English text is not random.

    Joe (392):

    And the works of Shakespeare would appear, to any algorithm, to be random, as they have no short and neat description

    Joe, if the works of Shakespeare are not random then, as Dr Dembski says, they are more compressible than random text strings because, to some extent, they are predictable. For example, in English ‘q’ is generally followed by a ‘u’. So that two-letter combination can be compressed. The paper I linked to some ways back discussed such a scheme. Common letter combinations and words can be compressed. N fct, jst lvng t th vwls s knd f cmprssn. Not a good one but a compression nonetheless.
    ________

    Correct, as I noted earlier. What that means is that, paradoxically, nonsense random strings score higher than actual code-bearing ones on the Shannon info capacity metric. The bit bucket cannot be filled to the brim in practical cases. The import is that ORDERED patterns, where a unit cell replicates say n times, are most compressible, but cannot carry real messages beyond what may be in the unit. Flat random strings likewise cannot carry info of any general practical use. Real message-carrying strings lie in the middle, and have organisation based on functionality requisites. They are neither simply ordered nor flat random. But then, Wicken and Orgel were talking about such and defining functionally specific complex information and organisation by direct implication as an upshot of OOL studies in the 1970′s, as the IOSE discusses here on — try Midori browser BTW as a nice, tight fast running secondary. KF
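    The three regimes in that note — simple order, flat randomness, and organised text in between — show up directly in compression ratios. A small Ruby sketch with the standard zlib library (the samples and lengths are my own choices):

```ruby
require 'zlib'

# Compression ratio: compressed size over original size.
ratio = ->(s) { Zlib::Deflate.deflate(s).bytesize.to_f / s.bytesize }

ordered = "AB" * 600                   # a unit cell repeated n times
english = "When in disgrace with fortune and men's eyes " \
          "I all alone beweep my outcast state, " \
          "And trouble deaf heaven with my bootless cries, " \
          "And look upon myself and curse my fate."
random  = Random.new(42).bytes(1200)   # flat-random bytes

puts ratio[ordered]   # tiny: simple order squeezes almost completely
puts ratio[english]   # intermediate: redundancy, but not mere repetition
puts ratio[random]    # about 1 or above: nothing left to squeeze
```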

  414. Mung (387):

    Over in the other thread CentralScrutinizer posted results from a simple Huffman encoding. In your opinion, is this the same sort of algorithmic complexity Dembski has in mind in his paper? Is that what Dembski means by algorithmically compressible?

    You read Dr Dembski’s paper, what do you think? Did he discuss Huffman encoding? What other things in the field have you read? Do the research, figure it out!!

  415. Joe (404):

    Then please write an algorithm that can generate the works of Shakespeare. Oh shut up, because you are talking about the wrong type of compression.

    No one is talking about generating Shakespeare. Compressing is like .jpg files as opposed to .bmp files. The information is all there (for lossless compression anyway) but in a condensed form. Or .zip files. You need the code to ‘uncompress’, the compressed version is not necessarily ‘readable’ on its own.

    What type of compression are you talking about?

  416. KF (419):

    Correct, as I noted earlier. What that means is that, paradoxically, nonsense random strings score higher than actual code-bearing ones on the Shannon info capacity metric. The bit bucket cannot be filled to the brim in practical cases. The import is that ORDERED patterns, where a unit cell replicates say n times, are most compressible, but cannot carry real messages beyond what may be in the unit. Flat random strings likewise cannot carry info of any general practical use. Real message-carrying strings lie in the middle, and have organisation based on functionality requisites. They are neither simply ordered nor flat random. But then, Wicken and Orgel were talking about such and defining functionally specific complex information and organisation by direct implication as an upshot of OOL studies in the 1970′s, as the IOSE discusses here on — try Midori browser BTW as a nice, tight fast running secondary.

    I’ll check out your IOSE discussion when I’ve got some time, promise. You’ve pointed out why Shannon information is not really the point of trying to find complex, functional, specified information. Which is why Dr Dembski came up with a similar but different definition.

    Midori doesn’t run on Macs. I’ll try Chrome for a bit, see if that is better. This forum looks different . . . different type-face for one. And some different formatting. Browsers, you’d think they’d all work the same. If Chrome doesn’t do it I’ll try Opera and then Firefox. I don’t generally like Firefox but Opera is pretty fast.
    ________
    I’ll vouch for Opera. Firefox too. Chrome and derivatives have a problem of the missing menu bar for me, and no the wrench plus stories about saved screen real estate do not hack it, I want those push-buttons where I can reach them pronto. Deal breaker. BTW, I also outright despise the MS Office 2007 Ribbon. KF

  417. KF:

    Thank you for the correction. You are obviously right. It was just writing in a hurry!
    ______
    Hear you, and you obviously underestimated bit capacity by about 2/3, but we deal with those who would pounce on anything to make a counter talking point with intent to deride and dismiss. KF

  418. Jerad: Algorithmic compressibility speaks to the possibility of squeezing out redundancy. As I noted, order is highly compressible, and that is one way of looking at laws of necessity such as F = m*a or the like or specify unit cell, repeat n times etc. Truly random sequences basically have to be quoted outright. Functionally organised coded ones will be somewhat compressible but nowhere near as much as simple order. WmAD’s discussion was general. This begins to be a side track from the pivotal issues and the challenge in this thread. KF

  419. F/N: Folks, day 19, still those crickets are chirping; no offers to submit a 6,000 word essay that warrants, blind watchmaker thesis molecules to Mozart per empirically grounded argument. KF

  420. Jerad:

    No one is talking about generating Shakespeare.

    I am- that is what I have been talking about. So please do TRY to follow along.

  421. Toronto:

    ” Yes, there is “specific functionality” and that is a product of values in the string that result in a number larger than “1.0e60”.”

    Joe: “Umm that is not functionality…”

    It is as functional as a “string” of DNA that is “code” for a functioning human.

    No, not even close.

    If a string of DNA contains “information” then so does Lizzie’s.

    Just cuz YOU say so?

    BWAAAAAAAAAAHAAAAAAAAAAHAAAAAAA

  422. Toronto:

    keiths has posted a great comment with his “bucket of CSI” analogy.

    keiths just erects strawman after strawman. Just because you are too clueless to recognize that doesn’t mean anything to us.


  423. Then what makes it an evolutionary algorithm?

    Zachriel:

    Because the genomes change over time. However, there is no adaptation without selection, of course.

    That is not all it takes to be an EA, Zachriel.

    What is the problem it is trying to solve?

  424. I see we are back to talking about latching-

    hey keiths, if there isn’t any latching then there wouldn’t be any nested hierarchy as a result.

  425. Joe Felsenstein is confused:

    The reason for its success (compared to pure random search) was … selection.

    Artificial selection towards a goal, Joe. Evolutionism doesn’t have such a mechanism.

  426. Joe (426):

    No one is talking about generating Shakespeare.

    I am- that is what I have been talking about. So please do TRY to follow along.

    The title of the section in Dr Dembski’s paper (on pages 9 – 12) is “Specifications via Compressibility” so I thought that’s what we were talking about.

    Shakespeare’s work was not randomly generated and does not appear random, therefore it’s more compressible than a random text string.

  427. Jerad,

    Obviously you have reading comprehension issues, as Dembski says compressibility = description. He makes it quite clear.

  428. As for Dawkins’ “weasel” and latching:

    The program was supposed to demonstrate CUMULATIVE selection. And you cannot have cumulative selection if the proper mutations do not latch. Otherwise it would be called back and forth and sometimes cumulative selection.

  429. Being algorithmically compressible means that you can produce it with an algorithm.

  430. Zachriel:

    There are many complex biological structures for which we can trace the history. A common example is the mammalian middle ear, where each step is selectable, while the final result is irreducibly complex.

    Reference please. And what is the testable hypothesis that accumulations of random mutations didit?

    Of course. It’s well-established that recombination is essential for traversing rugged landscapes.

    However it is not established that recombination is a blind watchmaker mechanism.

    Yes, apparently natural selection is capable of evolving quite adequate proteins

    And it is very noticeable you didn’t provide any evidence to support your bald assertion.

  431. “Artificial selection towards a goal, Joe. Evolutionism doesn’t have such a mechanism. “

    toronto:

    A question for all UDists!

    Here is a “mechanism”: “AREA = Len*Width”.

    In what way is that a mechanism?

  432. and petrushka’s daily nonsense:

    Joe has stumbled upon the same silliness that pervades gpuccio’s hypothesis: that a designer would focus exclusively on maximizing some single parameter of function, such as catalysis,

    Strange, I never said, thought nor implied such a thing.

    ignoring the hundreds or thousands of interrelated parameters that can be seen by natural selection.

    Umm natural selection is BLIND, so it doesn’t see anything, meaning nothing can be seen by natural selection.


  433. Joe: And what is the testable hypothesis that accumulations of random mutations didit?

    In the past, you’ve rejected any experiment showing mutation is random with respect to fitness, such as Lederberg & Lederberg, Replica Plating and Indirect Selection of Bacterial Mutants, Journal of Bacteriology 1952.

    In the past I have explained to you why random wrt fitness is meaningless gibberish because it does not mean that the mutations were not directed by an internal algorithm. Also the mutations allow for fitness- ie successful reproduction, so it would appear to be an example of built-in responses to environmental cues.

    So why do you insist on being so obtuse?

  434. Not to be out done, toronto shares its nonsense-


    Umm natural selection is BLIND, so it doesn’t see anything, meaning nothing can be seen by natural selection.”

    toronto:

    This is why Judge Jones had it so easy determining that ID is not science.

    1- Jonsey still isn’t in any position to say what is and isn’t science as he is still clueless on the subject.

    2- What I said has nothing to do with ID

    3- natural selection still doesn’t see anything.

    IDists constantly “misuse and extend” metaphors in a way that real scientists don’t.

    Just because you can say so that doesn’t make it so. And natural selection still doesn’t see anything. It is still a RESULT that doesn’t do anything.

  435. OMTWO:

    I asked Joe recently if he thought there was such a thing as a fair die. Or set of dice even.

    He said that there was such a thing.

    I wonder how he can possibly know that?

    What testable hypothesis did he test to determine that some dice are random I wonder.

    And I also wonder why that method, whatever it is, can’t be extended out to other systems.

    What say you Joe? How did you determine that your dice are fair and why can’t anybody else do a similar thing according to you?

    Are you special Joe? Only you can arbitrate chance/not chance?

    OMTWO, I have constantly asked you to support the claims of your position and, like the coward you are, you have always refused to do so. And instead always tried to push the onus back on me, as cowards always do.

    And cowards always throw in false accusations for good measure, just as you have done, again. You are just a pathetic imp and apparently proud of it.

    To see if a die is fair, you would weigh and measure it. You would check its balance, its edges and corners, and finally you would roll it to see what type of distribution you got.
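    The last step — rolling and inspecting the distribution — is conventionally formalised as a chi-square goodness-of-fit test against the uniform expectation. A sketch with hypothetical tallies (the counts below are made up for illustration):

```ruby
# Chi-square goodness-of-fit statistic against a uniform expectation.
def chi_square(counts)
  expected = counts.sum.to_f / counts.size
  counts.sum { |c| (c - expected)**2 / expected }
end

even_counts   = [100, 100, 100, 100, 100, 100]  # 600 rolls, perfectly even
loaded_counts = [150, 90, 90, 90, 90, 90]       # one face over-represented

puts chi_square(even_counts)    # => 0.0
puts chi_square(loaded_counts)  # => 30.0, far past the ~11.07 cutoff for
                                #    5 degrees of freedom at the 5% level
```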

  436. Mung @412:

    Thanks, Mung, for your kind words. Perhaps I can provide a couple of additional thoughts.

    No! You know better than this.

    Counting the pages in a book has no relationship to information carrying capacity.

    That’s a seriously flawed analogy.

    I am not being facetious, and I think in fact it is a decent analogy. A Shannon calculation does not tell us anything about the substance of the underlying information. It just tells us how much underlying information could be in the string. In the same way, if I count pages in a book (or the words in the book if you prefer), I have ascertained how much information could in principle be contained in the book. And that page count or word count is itself a piece of information, analogous to your “real” Shannon information. I’m using a physical example of the same principle so we can see clearly what is going on. People who are enamored with GA’s tend to get off in the weeds when they talk about strings and bits and fancy math, so I am using a simple physical example to highlight the issue.

    Honestly, this makes no sense to me. What’s the encoding?

    If the information source can only generate two symbols, 0 and 1, and either symbol can be generated with equal probability, 1/2, how does one string of length 504 contain more Shannon Information than any other string of length 504 from the same information source?

    Well, I haven’t looked at Lizzie’s program, so perhaps I should just keep quiet, but I’ll charge ahead anyway. :)

    There are at least three ways we can program a GA to easily generate more Shannon “information” through random changes. First, we can lengthen the string (the old accidental-extra-copy-of-a-gene kind of idea). Second, even if we keep the string length the same, we can introduce a previously unavailable character into the string. Third, we can change the relative distribution of the characters (i.e., change the probability of occurrence).

    I agree with you that if we: (i) keep the string length the same, (ii) establish beforehand a fixed, exclusive character set that cannot be changed, and (iii) establish beforehand that each character has an identical probability of occurrence, then, yes, the Shannon entropy calculation should be identical, regardless of whether we shuffle the characters around or not.
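    Those three knobs can be made concrete with a small capacity calculation (my own sketch; note that starting from an equal-probability source, changing the distribution can only lower the per-symbol entropy, so route (3) moves the computed value downward):

```ruby
# Shannon capacity of a string: length times the per-symbol entropy of
# the source distribution (the distributions below are hypothetical).
def capacity(length, probs)
  length * probs.sum { |p| p.zero? ? 0.0 : -p * Math.log2(p) }
end

base     = capacity(504, [0.5, 0.5])    # fair binary source
longer   = capacity(600, [0.5, 0.5])    # (1) lengthen the string
new_char = capacity(504, [1 / 3.0] * 3) # (2) admit a third symbol
skewed   = capacity(504, [0.9, 0.1])    # (3) change the distribution

puts base      # => 504.0
puts longer    # => 600.0
puts new_char  # ~798.8 (504 * log2 3)
puts skewed    # ~236.4 (skewing away from uniform lowers it)
```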

    I have no idea what Lizzie and company are claiming to have done. My suspicion, however, is that they have incorporated in their GA one of the three things I mentioned above could be done. That seems to be the only possible source of the confusion on the calculation. Otherwise, if they kept all the variables as you have proposed them (same length, identical character set, pre-set probability), then it should just be a question of math and there should quickly be agreement on the calculation. That there is an ongoing back and forth and disagreement suggests to me that they have (perhaps inadvertently) slipped in one of the three changes I mentioned.

    Anyway, I think you and I are on the same page w/r/t the calculation.

    —–

    Again, at a higher level though, I think the whole Shannon discussion in the present context of GA’s as an avenue for demonstrating evolution’s ability to generate new information is largely an exercise in irrelevance. This is because even if we have a fixed character set (say, ATCG), and even if we assume equal probability of occurrence, we still know for a fact that the string can lengthen or shorten in biology. So however we cut it, we can get more or less Shannon “information” through random changes to the string. Big deal. All we’ve done is increase our pipeline, our available resources, our number of available pages or words. It tells us nothing about the underlying information and is singularly unhelpful in determining whether we have CSI.

  437. toronto:

    Why don’t IDists like kairosfocus submit their theory to the same level of testing they do for the competition?

    Our “competition’s” position isn’t very amenable to testing. They always cry about too much time being required.

    For instance, if the designer can’t see the future environment, how does he know what his new designs should look like?

    How is that relevant? Please make your case.

  438. Joe (435):

    Being algorithmically compressible means that you can produce it with an algorithm.

    Well, I’ve found a few different definitions:

    From http://kwelos.tripod.com/algor.....ession.htm

    “If a computer program or algorithm is simpler than the system it describes, or the data set that it generates, then the system or data set is said to be ‘algorithmically compressible’.”

    So in this definition it seems like algorithmic compressibility could refer to the system or the data and it could refer to the data being generated.

    But, as I said, there are other definitions:

    From Theories of Everything: The Quest for Ultimate Explanation, p. 14-15 by Barrow:

    “The goal of science is to make sense of the diversity of Nature. It is not based upon observation alone. It employs observation to gather information about the world and to test predictions about how the world will react to new circumstances, but in between these two procedures lies the heart of the scientific process. This is nothing more than the transformation of lists of observational data into abbreviated form by the recognition of patterns. The recognition of such a pattern allows the information content of the observed sequence of events to be replaced by a shorthand formula which possesses the same, or almost the same, information content. … On this view, we recognize science to be the search for algorithmic compressions. … Without the development of algorithmic compressions of data all science would be replaced by mindless stamp collection – the indiscriminate accumulation of every available fact.”

    From A Modest Proposal (by a Somewhat Modest Engineer)

    “A pattern’s algorithmic compressibility can be an objective measurement and all we have to do is make sure we are comparing measurements from the same programming language”

    Which seems to imply that algorithmic compressibility is a measurement or number.

    The paper Empirical Data Sets are Algorithmically Compressible: Reply to McAllister by Twardy and Gardner definitely uses algorithmically compressible to mean compressible by an algorithm.

    The paper is available as a pdf and discusses several real world data sets including DNA and might be worth some time.

  439. Context Jerad- I have explained my position. Sure you can ignore that and prattle on regardless. But I don’t care.

  440. To see if a die is fair, you would weigh and measure it. You would check its balance, its edges and corners and finally you would roll it to see what type of distributation you got.

    OMTWO spews:

    And that rules out it’s distribution being the product of an algorithm internal to the dice how exactly?

    The manufacturing process rules out any internal algorithm, duh.

    Perhaps you’d like to apply your “design detection” skills to the question I posed in this comment, in this very thread?

    No, you are obviously a loser with nothing to say. Not only that, you don’t seem to understand anything beyond misrepresentation and strawmen.

  441. Mike Elzinga chimes in with more substance-free drivel:

    Joe G appears to have adopted and nurtured every possible characteristic that makes a person loathsome to other people.

    I will never be as loathsome as you are, Mikey. And, thankfully, I will never be as dishonest and despicable as you either.

    Now go melt some water, loser.
    _______

    Joe, kindly restrain yourself on tone. You are liable to fall off the wagon if you allow yourself to fall into intemperate language and personalities. KF

  442. The manufacturing process rules out any internal algorithm, duh.

    OMTWO:

    What’s that Joe?

    The way it is made and what it is made up of. That is the manufacturing process, duh.

    What do you know about that for any given die?

    Well there are ways we can tell the properties of any given die. Are you that ignorant of technology? Really?

    And in any case, nothing at all rules out an advanced alien species controlling outcome of a roll of dice via an internal algorithm.

    Physics.

  443. Also the mutations allow for fitness- ie successful reproduction, so it would appear to be an example of built-in responses to environmental cues.

    Zachriel:

    That’s exactly what the Lederbergs showed wasn’t the case. The mutations were not due to environmental cues. You could also look at the Luria–Delbrück experiment.

    We have already been over this Zachriel. Apparently you chose to be willfully ignorant. And that is not a good place to argue from.

    Decades ago the Lederbergs conducted an experiment using bacteria.

    This experiment demonstrated that the resistance to anti-biotics was already in the population when the anti-biotics were introduced (put on the plate).

    IOW the resistance did not come in response to the exposure.

    This was supposed to demonstrate that mutations are random with respect to fitness.

    However that “conclusion” was reached before we knew that bacteria communicate:

    Communicating bacteria

    More communicating bacteria

    Quorum sensing

    The point is the Lederbergs didn’t know about this communication.

    IOW for all they knew, the bacteria were communicating with each other and that communication sparked the variation that afforded the anti-biotic resistance.

    That would mean the mutations are not genetic accidents but part of some “built-in response to environmental cues”.

    However Zachriel cannot grasp that and instead blathers on and on about “random with respect to fitness”.

  444. The way it is made and what it is made up of. That is the manufacturing process, duh.

    OMTWO:

    No Joe. That’s the end result. The manufacturing process is the bit before that.

    That is incorrect and demonstrates ignorance. The die is the result. The manufacturing process is the way it is made and what it is made up of.

    Well there are ways we can tell the properties of any given die. Are you that ignorant of technology? Really?

    Yet nobody, never mind people decades ago, can properly design such an experiment except Joe.

    What? Blathering like a moron doesn’t help you make your case.

    And you say that “physics” rules out an advanced alien species controlling the outcome of a dice game?

    It rules out an internal algorithm. Algorithms need a means of being carried out and a die does not offer any.

    You can’t, just as nobody can rule out a “designer” controlling mutations or programming responses for later use as you say.

    Then it is strange that I have said how to do so. IOW you really think that your ignorance means something and that is hilarious.

  445. However that “conclusion” was reached before we knew that bacteria communicate

    Zachriel:

    That doesn’t change that the mutations were random with respect to the environment.

    Yes, it does. Obviously you have no idea how bacteria communicate- they alter the environment, Zach. They communicate with chemical signals- chemicals that enter the environment.

    But feel free to explain how intercellular communication explains the Lederbergs experiment. Please be specific.

    I have already done so, Zach. Why do you want me to keep repeating all the stuff you have already ignored?

  446. I communicate with some people and tell each one to bring something specific to a party. And when they all show up with what they were told to bring, by Zach’s “logic” it was all just random.

  447. Zachriel:

    Please be more specific. What is communicated? How is each bacterium to know what to bring?

    According to the reigning paradigm, specifics are not a requirement. However what is communicated would be 1- what everyone has and 2- what is required to ensure the survival of at least one in the population. And variation is what is required.

  448. Eric:

    A Shannon calculation does not tell us anything about the substance of the underlying information.

    Whatever else I may write, understand I absolutely agree with you on this point. :)
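    Eric’s point can be illustrated with a short sketch (my own illustrative Python, not anything from the thread): a Shannon calculation sees only symbol frequencies, so a meaningful line and the same line reversed into gibberish score identically.

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Estimated Shannon entropy in bits per symbol, from symbol frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * log2(c / n) for c in counts.values())

meaningful = "to be or not to be that is the question"
gibberish = meaningful[::-1]   # same symbols, same frequencies, no meaning

# Both strings receive exactly the same Shannon score.
print(shannon_entropy(meaningful), shannon_entropy(gibberish))
```

    The measure quantifies statistical surprisal of the symbol stream, nothing about whether the stream means anything.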

  449. Jerad:

    Well, I’ve found a few different definitions:
    From http://kwelos.tripod.com/algor…..ession.htm

    hehe. I came across that same site. But do you understand now why I posed my question in @387?

  450. OMTWO:

    Joe won’t help me out and use his “design detection skills” with my problem.

    Your problem is that you are a [snip]. So there isn’t anything my design detection skills can do for you.
    ______
    TONE, cf 443. KF

  451. OMTWO:

    If replication is imperfect Joe then variation is a given.

    No duh. However your position cannot explain replication.

    And no, the Lederbergs didn’t know about bacterial communication. Please at least try to stay focused.

  452. OMTWO:

    In fact, you could take a quick glance at the two documents I have and the problem I posed.

    No, I am not going over to the UK to appease some [snip]. And that is what I would have to do in order to conduct a proper investigation. But you wouldn’t know anything about how to investigate, properly or not.

    Also my design detection skills have already alerted me to the fact there wasn’t a tornado in the UK. Which means you are just lying, again, as usual.

  453. OMTWO barfs:

    Looks like Joe will just keep repeating the hollow refutation he can’t support.

    No duh. However your position cannot explain replication. And no, the Lederbergs didn’t know about bacterial communication. Please at least try to stay focused.

    It is supported by the fact that your position cannot explain replication and the fact that the Lederbergs didn’t know about bacterial communication, which wasn’t elucidated until many years AFTER that experiment.

    IOW far from being something I cannot support what I said is something you will never be able to refute.

  454. However your position cannot explain replication.

    OMTWO:

    Joe, falling back to that already?

    No fallback. Just the facts. And I know facts bother you because the fact is your position has nothing.

    So don’t blame me because you spew bald assertions.

  455. OMTWO:

    But if not then I guess your principles are more important to you than making me look a fool.

    You don’t need my help. You look like a fool regardless of what I say or do.

  456. OMTWO:

    And before you get all excited, Joe, with a quote mine: I know they communicate, but they don’t broadcast the contents of their genome, which is what you are saying with your “what they have” idiocy.

    Please do tell how YOU know what bacteria communicate. Chemical signals, or the lack of specific chemical signals could definitely communicate what each has at the ready. What do you think genomes are made of? Chemicals! What do you think does the chemical communicating? The chemically formed genomes!

    But anyways do tell how you know what is and isn’t communicated- show your work. And then tell us how it was determined that all mutations are undirected, chance events.

  457. And an ignorant OMTWO tries to tell me how to conduct an investigation:

    No, you can just look at the data.

    The data says there wasn’t a tornado in the UK which means you are a liar. And that means you are not to be trusted, which means I have to go there to look over everything.

    But thanks for proving that you are clueless.

  458. OMTWO:

    Joe, the only person bothered that “my position” cannot explain replication is you.

    Nope, it doesn’t bother me that you are a coward and cannot support the claims of your position.

    And the more we find out the less it looks like an intelligent designer’s services were called upon.

    That is a lie.

    Did this method of diversion ever actually work?

    YOU are the diversion. And no, evo diversions never work. But that doesn’t stop you from trying, and trying, and trying.

  459. OMTWO:

    And so? What Zach said.

    But Zach didn’t say anything.

  460. Please do tell how YOU know what bacteria communicate. Chemical signals, or the lack of specific chemical signals could definitely communicate what each has at the ready. What do you think genomes are made of? Chemicals! What do you think does the chemical communicating? The chemically formed genomes!

    OMTOO[snip]:

    Ah, but Joe. There’s more to the genome than just chemicals! If you recreated a genome with just the chemicals in it, it would not work!

    Nice [snip] non-response. And Venter created a genome with just chemicals and it worked, [snip].

    Zach:

    That doesn’t change that the mutations were random with respect to the environment. But feel free to explain how intercellular communication explains the Lederbergs experiment. Please be specific.

    I already explained it above. Stop blaming me for your problems.

    Falling off the wagon . . . KF

  461. Zachriel:

    That’s exactly what the Lederbergs showed wasn’t the case. The mutations were not due to environmental cues. You could also look at the Luria–Delbrück experiment.

    Once AGAIN- the chemicals bacteria release alter the environment- IOW I have already answered this, OMTWO- what is your malfunction?

  462. OMTWO

    Really? I guess it’s not surprising you don’t realise when one claim you make undermines another you’ve made.

    Just because you can spew false accusations doesn’t mean anything to me.

    It is very noticeable that you can’t make a case…

  463. And OMTWO proves it is ignorant as can be:

    My statements are in bold-

    Why is it that artificial ribosomes do NOT function? If their functionality was the result of their physical and chemical components then artificial ribosomes should function just as the ribosomes found inside living organisms.

    Artificial ribosomes are lacking the programming required by compilers to function.

    Joe,
    Quote
    Nice [snip] non-response. And Venter created a genome with just chemicals and it worked, [snip]

    OMTWO:

    Make your mind up, Joe!

    The two are NOT the same. One refers to RIBOSOMEs and the other to genomes. A ribosome is NOT a genome.

    Shameless and ignorant: that is the anonymous coward’s way.

  464. Deep breaths, Joe. Deep breaths.

    Serenity now . . .

    :)

  465. Think nested sets.

  466. To Zachriel (at TSZ):

    Or due to neutral drift. Nor does it have to go all the way to fixation, but just a significant number.

    Wrong. Neutral drift does not change the scenario in any way. It is just a form of RV, and RV is already accounted for in the scenario.

    But you are obviously right that the expansion of the positively selected arrangement need not be complete. Indeed, it would probably be only partial in most cases. That has two important consequences:

    a) The effect of NS in reality would be much lower than what I have hypothesized in my model.

    b) Functional intermediates should absolutely leave traces in the existing genomes.

    In my model, I have made a few assumptions that are absolutely in favour of NS (a perfect intermediate, complete expansion of the selected intermediate, and so on). Therefore, my model can be considered an upper threshold of what NS can do.

    About the missing intermediates, we have been there many times, and I will not repeat myself.

    There are many complex biological structures for which we can trace the history. A common example is the mammalian middle ear, where each step is selectable, while the final result is irreducibly complex.

    This is interesting. Each time you are pressed for real examples of your theory, you shift to macroscopic phenotypic effects (indeed, to that single example). If I remember correctly, Petrushka does the same.

    But you must know very well that we have absolutely no idea of what genotypic modifications are the basis for those phenotypic changes. Therefore, it is completely impossible to analyze those “sequences” in terms of genomic information. Therefore, they are irrelevant to the ID-neodarwinism debate.

    And yet, you darwinists go on shifting to that kind of non-arguments, which could be understandable 100 years ago, when we knew nothing of molecular biology, but are senseless today.

    Why? The answer seems rather simple: you have no arguments at the level of molecular biology, and so you resort to the only things you have left.

    That’s really interesting.

    Of course. It’s well-established that recombination is essential for traversing rugged landscapes.

    It’s well established that something is essential for traversing rugged landscapes. That recombination can do that in the biological context does not appear so well established, IMHO. Could you give references, please?

    And anyway, the experiment in that paper was dealing with a complete, and very favourable, biological setting for phages, where any natural mechanism was free to act. So, why was the rugged landscape not traversed?

    Yes, apparently natural selection is capable of evolving quite adequate proteins — even with one hand tied behind its back!

    I will not comment on that. I usually respect religious faith, in all its forms.

  467. To Petrushka (at TSZ):

    Along with making unwarranted extrapolations of the number of “required” steps, gp ignores recent research indicating that protein domains are themselves modular.

    I am ignoring nothing.

    Being modular is one thing.

    Being deconstructable into naturally selectable modules is quite another thing.

    You need functional naturally selectable intermediates for your theory, not just “modules”.

  468. To OMTWO (at TSZ):

    About your problem. It’s easy.

    I cannot attempt a design inference for either of the documents unless I can recognize and define a function for one of them, or both.

    Just by looking at them, I cannot say if they are functional or not, and therefore I will make no design inference for either of the two.

    But I can suggest a few ways to investigate that problem, if your limited funds allow that.

    The simplest way would be to assume that they are sequences of protein coding genes, and compare them with existing databases.

    The second way would be to decode them into AA sequences, and compare them with existing databases (that’s essentially, but not exactly, the same as in the previous step).

    A third way would be to synthesize the proteins themselves, and test them for structure and biological function.

    Unless and until some definite biochemical function is found, I will not make any design inference for sequences like those ones.

    If either of the sequences is found to correspond to a functional protein, I will make a design inference for it: we are speaking of hundreds of AAs here, and length is in our favour.

    Just to be fastidious, we could also infer design for the simple function of being sheets of paper with characters printed on them. That could probably warrant a design inference for both, but in a completely different sense: the printed sheet of paper is certainly designed, but the printed sequence could still be random.

  469. H’mm:

    Skimming through I see Toronto tries a turnabout as clipped at 443 above.

    Doubtless, he is stinging from the fact that sunrise will mark a full three weeks since I offered to host at UD a 6,000-word essay on the positive, empirically warranted case for blind-watchmaker molecules-to-man OOL and body-plan-level evolution, without any serious response from the many objectors to design thought, who are ever so eager to get back to their favourite tactic of objecting, objecting, objecting while assuming their own position as a default, per implicitly imposed materialism a la Lewontin et al:

    Why don’t IDists like kairosfocus submit their theory to the same level of testing they do for the competition? . . .

    Here Toronto is being willfully misleading, hoping to profit by his misrepresentation being perceived as truth.

    How do I know this?

    Simple: first, he knows or full well should know that all along there has been a briefing note linked from every post I have ever made at UD.

    Secondly, he knows that right from the point of making the hosting offer, I have linked the IOSE, which (albeit a course-reader-length document, cf. here on) does lay out the matter of origins science at length, and in so doing lays out the issues of warranting scientific claims, including on origins, and the warranting case of modern design theory at an introductory level. Not to mention, third, this very post is a part of the UD ID foundations series that I have sustained since Jan 16, 2011 at UD.

    In the IOSE and at UD, as Toronto full well knows or should know, the key issue is that in origins science we seek to reconstruct the remote past, beyond our ability to observe it or to have written accounts from direct observers. Indeed, here is how the linked page at IOSE begins:

    FOCUS: The scientific study of our origins, and that of our world, are both highly important and quite controversial. The recent imposition of a priori materialism on origins science, through implications of methodological naturalism is a key aspect of that. The recent rise of the design inference on empirically reliable signs of intentionally directed configuration (i.e. design) is another. So, in this summary for the IOSE course these pivotal issues are documented and explored in some detail. Then, a step by step summary of the main topics of the full course is presented, for those who want to see the overall structure of the course in a nutshell. Some points for discussion give a flavour for what is to follow in detailed units, and serves as a stimulus for one’s own independent thought . . .

    The problem is that there has been a latterday imposition of a priori materialism, in the guise of a mere methodological constraint, which effectively decides the issues before facts can speak. (In the page, there are FIVE illustrative examples of the problem. These include the US NAS and NSTA, so one cannot say this is not a problem at institutional levels.)

    By contrast, I and many others have argued that the main focus of design theory is the question of inductively warranted inference to best explanation of the past of origins, per empirically tested and credible signs. This is no novelty; for almost a decade the leading ID scientist, Wm A Dembski, has been on record:

    intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? . . . Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence.

    In short, from deer-track, we reasonably infer deer. Similarly, from FSCO/I, per inductive investigation and billions of cases in point, we infer to design as empirically reliable cause. Actually, the inference is stronger as we have cases of deer tracks being imitated or the like, so the inference is on an unless there is reason to think otherwise basis. To date, there are no significant exceptions to the premise that where we see FSCO/I and we have access to direct checks on the cause, design plays a significant role. There have been many attempted counter examples, but once we rise above the level of rhetorical talking points and dismissals, we can see that uniformly, purposeful design by intelligent action is critically involved, genetic and similar algorithms being an excellent case in point.

    (These invariably start within an island of function, T, and/or move to such a target zone via a warmer/colder oracle, i.e. some form of generalised hill climbing. The use of fitness functions rewarding degree of success is diagnostic. That, BTW, is why the essay offer insists on starting with OOL from a plausible pre-life environment: the self-replication mechanism, pivotal to the tendency to appeal to the almost magical claimed powers of “natural selection”, distracts from the key question, the origin of FSCO/I. In short, the performance beyond random chance is consistently explainable on intelligently inserted active information.)

    Going beyond this, the number of peer-reviewed pro-design papers and the like in the technical literature is now about 50, despite a pervasive climate of deep ideological hostility.

    So, Toronto is being irresponsible and distractive.

    Sadly, no surprise.

    Back on track.

    Now, therefore: after almost three weeks, is there anyone willing to step up to the plate and offer an essay?

    We are waiting, and it is clear that there are many silent onlookers who are taking note. (The snips from Wiki give us an idea of why there is such a strong aversion to, say, posting the same essay at TSZ while allowing it to be hosted here at UD for a parallel discussion. I must make one honourable exception: Jerad did try a discussion, but unfortunately he did not start from OOL, and later said that this is an area where there is no well grounded dominant account. Unfortunately, as has been shown by court cases, one dares not say this to students in school lest they be led to doubt the “gospel” according to St Charles.)

    Quite revealing . . .

    KF

    F/N: This clip from TSZ, at 448, shows just how little design theory, in particular the explanatory filter, is understood by objectors:

    nothing at all rules out an advanced alien species controlling outcome of a roll of dice via an internal algorithm.

    First, if there is design at work, but the pattern shown is one that exhibits the statistics of a chance based random process, i.e. a probability distribution, the filter will infer to chance contingency. That is a false negative, and is part of the price paid to make sure that inferences to design are morally certain.

    Second, if there is a pattern that is not consistent with being flat random (e.g. there is a known code and the die rolls out the equivalent of the first 72 letters of this post), only when the evident pattern shows not only functional specificity but also enough complexity in the same aspect that it is unreasonable to infer that chance could hit on such a special zone T will there be an inference to design. (And yes, if Vegas houses operated by ID filter rules, they would go broke; they are pursuing a very different goal.)

    Recall, the 500-bit solar system resources limit is effectively the same as saying: set up a cubical haystack 1,000 LY across (about as thick as our galaxy), and then take a blind random sample of one straw-sized object. Sampling theory tells us strongly that, by overwhelming likelihood, the sample will be straw. This is the needle-in-the-haystack challenge on steroids. The 1,000-bit limit based on the resources of the observed cosmos is far more stringent than this.
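    The arithmetic behind that sampling claim can be checked in a few lines. The sample-budget figures below are illustrative round numbers of my own choosing; the argument in the thread only turns on orders of magnitude.

```python
from math import log2

# Configuration space for 500 bits of possibilities
search_space = 2 ** 500                # about 3.27e150 configs

# Generous toy budget: 10^57 atoms, 10^14 tries per second, 10^17 seconds
samples = 10 ** (57 + 14 + 17)         # 10^88 blind one-off samples

fraction_sampled = samples / search_space     # roughly 3e-63 of the space
shortfall_bits = 500 - (57 + 14 + 17) * log2(10)

print(f"fraction of space sampled: {fraction_sampled:.2e}")
print(f"shortfall: {shortfall_bits:.0f} bits")
```

    Even under this deliberately generous budget, the sampled fraction is negligible, which is the point of the one-straw-from-a-1,000-LY-haystack picture.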

    So, the talking point tilts at a strawman [oops, a hay stack, 1,000 LY across], and imagines it has disposed of the real case.

    Yet another illustration of the sad irresponsibility of too many of the objectors we are dealing with.

    KF

    PS: Does this objector understand that Vegas houses stipulate transparent plastic dice (presumably of accepted manufacture) tossed against a wall and allowed to bounce to the table, the wall having a grid of projections, all of which is meant to assure that there will be clashing uncorrelated causal chains, sensitive to initial and intervening circumstances, leading to effective randomness? The basic design of such a die of course includes eight corners and twelve edges, which makes the system sensitive to the butterfly effect. I have also long since repeatedly pointed out how my dad used to use phone books to get effectively random numbers as loop codes, since names and numbers c. 1960 are usually uncorrelated.
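    The “clashing uncorrelated causal chains” point is just sensitivity to initial conditions, which a one-line chaotic map can illustrate. This is an illustrative sketch of the butterfly effect, not a physical model of a die.

```python
def logistic(x, r=4.0):
    """Logistic map at r=4: fully chaotic, small errors roughly double each step."""
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-12      # two 'throws' differing by one part in a trillion
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
# after a few dozen iterations the two trajectories are effectively unrelated
```

    A trillionth-sized difference in the starting condition is amplified to order one within a few dozen steps, which is why tiny variations in a toss make the outcome effectively random.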

  471. Joe: Remember, I cannot spend a lot of time policing and cleaning up threads. Kindly, restrain yourself. Namecalling and personal attacks are patently counter-productive: answer a fool according to his folly, and you will be as him, down in the mud of a fever swamp wrestling amidst the filth, where he can probably beat you on experience. Yes, I can see where a well warranted negative conclusion where someone has gone to the point where his/her behaviour goes to character is appropriate; but even in those cases, remember you are dealing with a human being and should not say anything you would not wish said about you in polite company. Thank you. KF

  472. My apologies, kairosfocus. I lost my focus yesterday.

  473. OMTWO:

    So given that the Lederbergs showed that the mutations were not due to environmental cues (if you actually read the paper this is obvious) they were not built-in responses.

    Except they didn’t do that.

    Put simply, if they were built in responses that mechanism is not working very well because the mutations happen regardless of the environment.

    You don’t know that.

    So yes, if all you have to do is baldly assert something, which is all you ever do, then evolutionism wins. However science requires evidence and you don’t have any.

  474. To OMTWO (at TSZ):

    You cannot make a design inference unless you can determine the function it was designed to provide? Really?

    Really! Why are you surprised? That’s clearly stated in my definition and procedure for dFSCI evaluation. I need a specification, and in my specific definition (dFSCI) the specification must be functional.

    The same could be said for any string. If you happen not to know the function then all strings look the same, right?

    Right.

    So Hamlet is designed because you can read and understand it but if you lack that you are stuck?

    Yes. That is one of the main causes of false negatives.

    Does ID not have more robust design detection mechanisms than that?

    No. And it does not need them. For most biological strings, especially proteins and protein coding genes, the function is well known and measurable. We are quite satisfied with that.

    The problem is I only have enough money to do that for one of the documents. If only ID could provide a way to determine which of those documents I should study.

    Get more money.

    Again, the same problem, which document to choose?

    Try tossing a coin. Or some form of divination.

    Again, the same problem.

    Again, the same answer.

    So you determine design by taking the blueprint and building something from it?

    No. I recognize, define and measure the function, and then I must assess the target space/search space ratio. It’s all explained in my detailed description of the procedure to assess dFSCI.

    By definition blueprints refer to designed objects.

    I don’t know what you mean by “blueprints”. I have spoken of functions. I can define a function for a stone, as a paperweight, but that does not mean that the stone is designed. So, your statement is simply wrong.

    And your claim is that all proteins are designed, so if a protein is the end product then design is a given?

    Where have you been while we were discussing things? My claim is that if a protein exhibits enough functional complexity (let’s say more than 150 bits), and no credible neo-darwinist path is known for its emergence, I infer design for it. I agree that I would infer design for many proteins, or more precisely protein superfamilies.
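    The arithmetic behind that threshold can be sketched as follows. This is my own illustrative Python; the functional fraction used is a made-up placeholder, since estimating it for real proteins is precisely the contested step.

```python
from math import log2

AA_BITS = log2(20)   # ~4.32 bits per amino-acid position

def search_space_bits(length):
    """log2 of the 20**length sequence space for a protein of this length."""
    return length * AA_BITS

def dfsci_bits(length, functional_fraction):
    """Functional complexity as -log2(target/search), i.e. search bits
    minus target bits, per the definition discussed in the thread."""
    target_bits = search_space_bits(length) + log2(functional_fraction)
    return search_space_bits(length) - target_bits

# Placeholder numbers: a 150-AA protein where 1 in 2**160 sequences were
# functional would carry 160 bits of functional complexity, above 150.
```

    The design inference then reduces to whether the computed bits for a defined function exceed the chosen cutoff, given an estimate of the functional fraction.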

    Is that the only possible way that ID can come to a design inference for long strings of data like this?

    Yes. ID is not divination. It is scientific, and science has its limits.

    What if I told you it was a signal from space. Would it automatically become design then?

    No. The same requirements would apply.

    Or would we still have to examine proteins?

    If our working hypothesis is that the strings code for DNA sequences coding for proteins, then certainly yes. If we have other possible functional meanings for the strings, we can certainly pursue them too.

    Sigh. Then why don’t you start there? I’ve already made it clear that it was originally on paper, but that is irrelevant; the data is what is important.

    If the data is important, I will analyze the data. The way I explained.

    And if all ID can say about this situation is “well, those sheets of paper with printing on, they are designed they are!” then forgive me for being singularly unimpressed.

    You are making up things. I never said that. I said that a design inference for the sheets is obvious, because they are printed sheets with characters, and those things don’t usually come out in natural settings, without any designed intervention.

    That has nothing to do with the question of whether the data is designed. I answered that question in detail, and showed how ID can give a very definite answer, making a design inference in some cases, and not making it in others. It requires, obviously, some work and some reasoning. If you are not interested in doing the work, you will not get any answer. In the absence of any recognized function, no design inference can be made.

    I am not so interested in impressing you. If ID cannot solve your problem because you do not have the money to use ID to solve it, well, I can survive. I am quite satisfied that ID can solve the problem of the origin of biological information, which is frankly more interesting to us all than your personal (imagined) misadventures.

  475. Thanks Eric- I know, I know but some days I just get all caught up in the stupidity and cannot just back off.

    It’s a limbic issue, I am told….

  476. So Hamlet is designed because you can read and understand it but if you lack that you are stuck?

    I would say that if you cannot read then conducting a scientific investigation would be out of the question.

  477. And Allan Miller sez:

    And … if artificial ribosomes don’t function, how come one can Google numerous papers on functional artificial ribosomes? http://www.technologyreview.co.....m-scratch/

    Allan, the only part they synthesized, ie the only part that is artificial, is the ribosomal RNA:

    Using the bacteria E. coli, Church and Research Fellow Michael Jewett extracted the bacteria’s natural ribosomes, broke them down into their constituent parts, removed the key ribosomal RNA and then synthesized the ribosomal RNA anew from molecules.

    And even then, the ribosome now only produces one polypeptide, albeit a polypeptide that was not present in the bacteria the ribosome came from.

    That said, how was it determined that all mutations are random in any sense of the word? Please, do tell.

  478. To Zachriel (at TSZ):

    It’s not “wrong”. It may be superfluous, as you said effects “like NS”. We were clarifying that point. As Lenski demonstrated, drift can be important in adaptation.

    The point is: drift does not change the probabilistic scenario. That is the simple truth, whatever you say.

    Not sure we’ve seen your math.

    I have linked it many times.

    Oh? Why is that? Indeed, natural selection should tend to purge the extraneous over time.

    The same NS that, according to major darwinist thinkers, leaves more than 95% junk DNA in our genome? Really strange…

    In any case, small changes to certain genes can be shown to cause relevant changes to the mammalian middle ear.

    I have read the paper you linked. I can’t find there any molecular information about the evolution of the middle ear, although there is a lot of interesting information about the complex molecular control of the development of that structure, based mainly on gene inactivation experiments. Interesting, certainly, but not relevant to your argument.

    That recombination is important in traversing rugged landscapes is a mathematical result. Try running a few evolutionary algorithms.

    Like yours?

    So, my statement remains true:

    “That recombination can do that in the biological context does not appear so well established, IMHO.”

    (emphasis added).

    Because simple point mutation algorithms will climb the nearest peak and stop. If there are billions of peaks, then you have to start with billions of initial sequences in order to have a decent chance of finding the highest peak. Recombination can largely overcome this problem.
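    The mathematical claim quoted here, that strict point-mutation hill climbing stalls on a local peak where recombination need not, can be shown on a toy two-block landscape. This is my own illustrative sketch of the general point, not a model of the phage experiment under discussion.

```python
import random

def fitness(bits):
    """Rugged toy landscape: only complete all-ones blocks score."""
    return int(all(bits[:4])) + int(all(bits[4:]))

def hill_climb(bits, steps=1000, rng=random):
    """Point-mutation climber that accepts only strict improvements."""
    bits = list(bits)
    for _ in range(steps):
        trial = bits[:]
        trial[rng.randrange(len(bits))] ^= 1   # flip one random bit
        if fitness(trial) > fitness(bits):
            bits = trial
    return bits

def crossover(a, b, point=4):
    """Single-point recombination of two parents."""
    return a[:point] + b[point:]

a = [1, 1, 1, 1, 0, 0, 0, 0]   # local peak: first block complete
b = [0, 0, 0, 0, 1, 1, 1, 1]   # local peak: second block complete
# no single flip improves either parent, so the climber is stuck,
# while one crossover joins the two finished blocks
child = crossover(a, b)
```

    Whether biological recombination plays this role in real protein landscapes, the question gpuccio presses below, is a separate empirical matter the toy model does not address.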

    I ask again: why didn’t recombination act in that experiment?
    Where is an experiment where recombination “largely overcomes” the problem? Facts, not words.

    It’s an empirical statement. Adequate proteins evolved, even without recombination.

    Should I laugh?

    You, who have accused me (without reason) of “circularity”, come out with such a statement?

    Have you forgotten what we are debating here? Do you believe we write here just to spend our idle time?

    Are you aware that the problem is: what is the cause of protein emergence, RV+NS or design?

    So, you just say: proteins exist; they must have evolved; therefore they evolved.

    And you call that “an empirical statement”?

    My compliments!

  479. OMTWO:

    Here are two documents. I’m heavily implying that ID should be able to tell us something about each of them.

    BWAAAAAAHAAAAAAHAAAAA- design detection tells us if agency involvement was required. From there we investigate further.

  480. To OMTWO (at TSZ):

    My understanding is that both documents are functional in some way. I just don’t know what that function is.

    Your “understanding” is very strange. How do you know that both documents are “functional”?

    Can you give me an example of just one “biological string” and what it’s “function” is and how you determined that function is “the” function.

    Yes:

    Please look at protein P11413 (G6PD_HUMAN) in Uniprot database. It’s 515 AAs long (I don’t paste here the sequence, to avoid problems).

    Its function? It’s clearly defined in the database:

    Function: Produces pentose sugars for nucleic acid synthesis and main producer of NADPH reducing power.

    Catalytic activity: D-glucose 6-phosphate + NADP+ = 6-phospho-D-glucono-1,5-lactone + NADPH.

    There is no need to “determine that function is ‘the’ function”. If you had read my definition of dFSCI, you would know that we can define any function for the observed object, and that the computation of dFSI will be made for the function we have defined.

    It’s a thought experiment. It’s abstract. I don’t really have an office. There was really no tornado. That was all for Joe “literal” G’s benefit.

    I should have known that irony is wasted with some people…

    So when ID is presented with a set of unknown strings and is asked to choose which is the more interesting with no further data we have to “toss a coin”?

    Or, like any serious investigator would do, analyze all the strings. You asked to decide which string we should analyze without analyzing them. That is divination, not science.

    As before, what is the function of HIV and what is its dFSCI?

    HIV is a virus, and it synthesizes a few well defined proteins, with well defined biological functions.

    The whole virus can be described as a virus having the ability to infect specific cells, and to reproduce itself through that process.

    For the evaluation of dFSCI, it is better to consider individual proteins and their local biochemical function. That makes the computation much easier.

    Whole organisms, even if relatively simple like the HIV virus, are much more intractable to a detailed analysis.

    Then what is the function of HIV?

    Do you specially like to ask questions twice?

    Credible to who? You? Let me rephrase the question. Does either of those two documents have “functional complexity”? If so, how much.

    Let me rephrase the answer: please read again my post #474, or just read here:

    Answer: I don’t know.

    Or, if you prefer, go on making a fool of yourself.

    So ID can only be applied in the specific case of DNA sequences by building proteins and seeing if they are “functional”?

    You may not know, but that kind of research has been done for decades. That’s why huge databases exist, like Uniprot, that list known proteins and their functions, and their coding genes.

    This is quite different from the version of ID usually given.

    I am sorry if I have disappointed you.

    As yet we’re not at that point. The point we’re at is “Can ID do any better than tossing a coin when determining which of these two sequences is worth investigating, given that only one can be, in this example”. The answer, so far, is no.

    The answer is definitely no. ID cannot say “which of these two sequences are worth investigating” without investigating them. If the sequences were in English, it would be rather easy, even for a darwinist like you, to understand at first sight which makes sense and which does not. But how do you believe that I, or anyone else, can decide “at first sight” if a nucleotide sequence corresponds to a functional protein, without making any attempt at studying the sequence? Tossing a coin remains the best option.

    You make strange requests indeed.

    If you can explain to me how to “do the work” then I’ll happily do it.

    Please, read again my post #474 (well, that’s becoming boring). OK, I paste it again here:

    “But I can suggest a few ways to investigate that problem, if your limited funds allow that.

    The simplest way would be to assume that they are sequences of protein coding genes, and compare them with existing databases.

    The second way would be to decode them into AA sequences, and compare them with existing databases (that’s essentially, but not exactly, the same as in the previous step).

    A third way would be to synthesize the proteins themselves, and test them for structure and biological function.”

    But so far the choice is simple – which of the two documents would *you*, given your information/design expertise, choose to examine in detail, and why?

    As my “information/design expertise” does not make of me a prophet, I say: both. If I can only investigate one, I will toss a coin. And infer design (or not) for the one I have investigated.

    It’s a thought experiment. I would have thought that did not need to be explained.

    :) (see my previous note on wasted irony; I would have thought that did not need to be explained)

    It’s a test. Here are two documents. I’m heavily implying that ID should be able to tell us something about each of them. So far it’s all been excuses.

    This is half funny and half sad…

    If you don’t want to play, that’s fine, but simply saying “well ID can’t do anything of practical use but personally I’m satisfied that it explains the origin of life” is not even trying.

    I have really no reason to play with you. I choose my playmates very accurately.

    I give you a final answer: with what I know at present, I cannot make a design inference about your two strings. That’s all. Sorry for you (in many senses).

  481. gpuccio,

    From what I am learning about our opposition is that they do not seem to understand that functionality (a function) is something that is observed.

    Science says we observe phenomena and then try to figure out what is causing it, what it is, what’s it all about.

    We observe the specified complexity of living organisms. We observe complex multi-protein configurations doing something. Stuff doing something catches our eye. So we investigate and try to answer science’s three basic questions.

    OK guys- the function is an observation. Intelligent Design is NOT about looking at things and trying to guess their function. We look for signs of agency involvement because we know if an agency was involved that changes the investigation and opens up new questions. Which means the design inference is not a dead end, but a new beginning.

  482. OMTWO:

    Then the question is: Was agency involvement required in the creation of either of those data sets?

    What data sets? And who cares about your infantile parlour game?

  483. OMTWO:

    Then which, if any, of those data sets had agency involvement in their creation?

    Well, context is important. And context is missing. I know those letters didn’t appear via nature, operating freely. So I would say the existence of those letters on the intertubes was the result of some agency.

    That said, if the data represents DNA sequences, there isn’t any evidence that blind and undirected processes could produce either of those. So that is a start- we know your position’s mechanisms didn’t do it.

  484. Zachriel:

    That’s fine, but if you didn’t know the origin of nylonase, you would still conclude design.

    Knowing the origin of nylonase helps us infer design. :razz:

  485. gpuccio (472):

    b) Functional intermediates should absolutely leave traces in the existing genomes.

    Not only that, but given an existing population’s genome, we should be able to re-construct a phylogenetic tree of genes and how they are related to other genes in the same genome by descent with modification.

    So OMTWO (aka keiths?) seems to finally be catching on, but as soon as that happened decided to go off the rails. We’ll have to see if any of that new found knowledge sinks in.

    One issue I have with Lizzie’s program is that her ‘function’ is read off her strings. In yours, the specification ‘function’ is independent.

  486. Knowing the origin of nylonase helps us infer design.

    Even Jerad acknowledges that.

    But I want to know how the nylon got into the cell in the first place.

  487. For posterity.

    Zachriel:

    Because simple point mutation algorithms will climb the nearest peak and stop. If there are billions of peaks, then you have to start with billions of initial sequences in order to have a decent chance of finding the highest peak. Recombination can largely overcome this problem.

    You mean, with a little intelligent tweaking we can increase the odds of success of our intelligently designed program?

    You have to just love how they appeal to recombination when they feel they need to, but at other times it seems they think it totally irrelevant.

    It’s like they have this smorgasbord that they get to pick and choose from as needed to make an argument unfalsifiable. Talk about ad-hoccery extremis.

    What about Lizzie’s program. Is that a rugged landscape or not, and why?

    And think back to my earlier arguments about how there is a reason for randomizing the genome at the start of a run, and how that is very unlike natural populations.
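    As an aside, the effect Zachriel appeals to can be sketched with a toy program. Everything below (the names, the made-up fitness function, the parameters) is purely illustrative; it is a minimal picture of the idea, not the cited mathematical results. A point-mutation climber accepts only improving single-bit flips, so it stalls on the nearest local peak; crossover between two stalled climbers can combine material from different peaks:

```python
import random

rng = random.Random(42)

def fitness(genome):
    """Toy rugged fitness: a smooth 'count the ones' slope plus a
    deterministic per-genotype bump, which creates many local peaks."""
    bump = random.Random(hash(tuple(genome))).random()
    return sum(genome) + 3 * bump

def point_mutation_climb(genome, steps=300):
    """Accept only improving single-bit flips; stalls on the nearest peak."""
    for _ in range(steps):
        i = rng.randrange(len(genome))
        trial = genome[:i] + [1 - genome[i]] + genome[i + 1:]
        if fitness(trial) > fitness(genome):
            genome = trial
    return genome

def crossover(a, b):
    """One-point recombination: the offspring mixes material from two
    parents that may be stuck on different local peaks."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Two climbers started from random genomes generally stall on different
# local peaks; their recombinant can land somewhere neither could reach.
parents = [point_mutation_climb([rng.randint(0, 1) for _ in range(24)])
           for _ in range(2)]
child = crossover(*parents)
```

    The recombinant offspring is a candidate that no sequence of single accepted flips from either parent could produce, which is the claimed advantage on rugged landscapes.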

  488. From the Uniprot main page:

    Nothing is perfect. And nature is no exception. This said, we should be grateful for nature’s imperfections because, were it not for them, we would not be here…

  489. To OMTWO (at TSZ):

    Your comments are frankly stupid and irritating. Enough is enough.

    I should not have done that, because you don’t deserve any serious attention, but I blasted your two sequences and found no similarities, wasting so 5 minutes of my time. So, I maintain that I have absolutely no reason to infer design for those two strings.

    Gpuccio had a go, which was great. He thinks the data represents DNA and as such we need to instantiate it, see what it does, and that’ll determine “design or not”. Once instantiated, if there is any function at all, then the original data was designed, as function is so rare in the total space that finding any function at all is a strong indicator of design. So far this is the best idea, with at least an outcome that either indicates design or not. So it’s doable.

    Is that an admission?

  490. hi gpuccio,

    Do you have a ‘favorite’ protein domain or family you like to refer to in your arguments?

  491. OMTWO seems to think that if you can’t infer design based upon his sequences you therefore have no warrant to ever make a design inference.

  492. To Zachriel (at TSZ):

    That would have been a good place to put the link.

    Here it is:

    http://www.uncommondescent.com.....selection/

    The long discussion with Lizzie starts more or less at post #62, but you could look mainly at the last posts, especially #216.

    I will sum up here the general idea. The model refers to a transition from an unrelated state to a functional protein, with a defined dFSCI.

    So, if the whole transition happens by RV, the dFSI of the transition is the same as the dFSI of the protein.

    The question is, how does a selectable intermediate change the scenario?

    I assume a perfect selectable intermediate which is equidistant (at sequence level) from the starting state and the final state. I am not interested here in what its function may be, just in the fact that it is naturally selectable. I also assume that in a negligible time span, the selectable trait is expanded to the whole original population. Those are, as already discussed, extremely generous assumptions in favour of the mechanism of NS.

    I assume that the intermediate splits in two the dFSI of the final protein, and therefore the transition.

    Then I compute the probability of the protein arising in a certain time span, given certain probabilistic resources, by pure RV.

    Then I compute the probability of the protein arising in the same time span, if the intermediate is generated at about half the time span and expands to the whole population. For that, I assume that the final probability is more or less equal to the probability of having two events of the same probability in that time span, where the probability of each event is computed from the dFSI of each transition that still has to happen by RV (the transition from the starting state to the intermediate, and then the transition from the intermediate to the final state).

    I compute the final probability using the cumulative probability for two events with the same probability in a binomial distribution.

    Then I compare this new probability with the original one, and that is the reduction in improbability given by the selectable intermediate in that particular scenario.

    That is only a gross proposal, and it can certainly be wrong in many points, but it is an attempt to compute a quantitative model for RV + NS. It is not about population genetics, but about the logical interaction of the random part of the algorithm (RV) and the deterministic part (NS). I am interested in any serious contribution to the model.
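    A minimal sketch of that arithmetic, with made-up probabilistic resources (the 1e40 trial count below is purely illustrative): for rare events, the cumulative binomial probability of at least two successes is dominated by the two-success term, C(N,2)·p² ≈ (N·p)²/2, which is easiest to handle in log2 (bit) form:

```python
import math

def log2_prob_at_least_two(log2_p, log2_n):
    """log2 of P(X >= 2) for X ~ Binomial(N, p) when N * p << 1.

    In that regime the sum is dominated by the two-success term:
    P(X >= 2) ~= C(N, 2) * p**2 ~= (N * p)**2 / 2.
    """
    return 2 * (log2_n + log2_p) - 1

log2_trials = math.log2(1e40)        # made-up probabilistic resources

# Direct route: all 300 bits must arise by RV in one unbroken search,
# so P(at least one success) ~= N * p, i.e. 2**(log2_trials - 300).
log2_direct = log2_trials - 300

# With an expanded midpoint intermediate: two independent 150-bit events.
log2_with_intermediate = log2_prob_at_least_two(-150, log2_trials)
```

    With these toy numbers the intermediate raises the log2 probability from roughly -167 to roughly -35: a large reduction in improbability, but not a collapse to the cost of a single 150-bit event, which is the point of the comparison.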

    Your nomenclature is poor. Darwin identified the existence of vestigial structures. Darwin would be, presumably, a darwinist. Generally, darwinists (those who think natural selection is the primary mechanism of evolution) have resisted the idea that the genome is mostly junk. However, polyploidal genomes, and some amoebae with genomes far larger than human genomes, tend to indicate that some genomes contain a lot of redundancy.

    I appreciate that you don’t agree with the ideas of people like Moran, Myers and similar about normal genomes. I am as interested as you are in huge genomes, but I have found no detailed information about them. If you have something on the matter, I would appreciate it if you could share.

    So we have an almost unbelievable prediction from embryology, that the irreducibly complex structure of the mammalian middle ear evolved from reptilian jaw bones. Astoundingly, we find fossils of intermediate structures buried in the rocks. And we even have evidence that small changes to genes directly affect the related structures.

    OK. And that would be evidence of common descent (no problem for me, as you know). And that minor molecular changes can have big phenotypic effects (no problem with that either, if it is really true).

    But minor molecular changes are not complex functional information…

    Your claim was that recombination was “wishful thinking”, when we know from mathematical studies that recombination is effective in rugged landscapes. You reject a plausible mechanism without evidence.

    I don’t know if I have expressed my claim with sufficient clarity (I am too lazy to check), but my claim is that affirming that “recombination can help solving the problem of rugged landscapes in the biological context” is not supported by any evidence. Again, I am not “rejecting a plausible mechanism without evidence”. The opposite is true: I don’t accept any mechanism, plausible or not, if it is not supported by any evidence. Accepting mechanisms without any empirical support is, for me, wishful thinking.

    Anyway, I will look at the papers you linked (not now. I have not the time).

    But it doesn’t address the point that even lacking one of the primary mechanisms of evolutionary novelty the experiment still resulted in adequate function. This is expected when exploring a rugged landscape.

    Yes, the experiment resulted in adequate function, but it was an experiment of function retrieval, where the function was not suppressed, but was still present from the beginning. We could discuss that more in detail, if you want, but now I have not the time.

    That’s fine, but if you didn’t know the origin of nylonase, you would still conclude design.

    Correctly, as I have explained. Because I would infer design for the whole structure of nylonase (and I would be right), and not for the transition from penicillinase to nylonase.

  493. Mung:

    No. I often use those studied by Durston, because for them I can give a detailed value of dFSI. Other times I just pick some protein important enough, and long enough, whose function is well known.

    We have thousands of examples. We can choose.

  494. Zachriel:

    gpuccio: “recombination can help solving the problem of rugged landscapes in the biological context” is not supported by any evidence.

    Yes, it’s supported by studies of evolutionary algorithms and how they work on rugged landscapes.

    lol. We have studies of evolutionary algorithms and how they work on rugged landscapes. This is evidence that rugged landscapes exist in a biological context and that recombination in a biological context can help solve the problem of rugged landscapes in a biological context.

    Zachriel:

    Handwaving isn’t an argument.

    Is so.

  495. Zachriel to gpuccio:

    Your claims nearly always are general claims about the evolution of complexity.

    What utter unmitigated bs. What a liar.

    Why do you bother, gpuccio, with people so obviously disconnected from reality? The fact is that you have repeatedly attempted to have them focus on one issue, and they just keep making every attempt they can think of to take the discussion off in some other direction.

  496. Zachriel:

    Adequate proteins evolved, even without recombination.

    Then why do you keep feeling the need to appeal to recombination? I am sure gpuccio would love to discuss some of those proteins. So please, post some.

  497. OMTWO:

    Both strings are designed. They both have a length of 2001.

    Ain’t that funny. Can we move on now?

  498. Mung: And think back to my earlier arguments about how there is a reason for randomizing the genome at the start of a run, and how that is very unlike natural populations.

    Zachriel: That’s irrelevant with typical rugged landscapes. Randomized genomes will quickly climb local peaks.

    Zachriel:

    If there are billions of peaks, then you have to start with billions of initial sequences in order to have a decent chance of finding the highest peak.

    What does a “randomized genome” look like in a natural population?

  499. OMTWO:

    The simple fact is that you are wrong with your opinions about protein domains and the probability of their origin etc.

    At least one person at TSZ is paying attention, even if they do disagree.

  500. To Zachriel (at TSZ):

    I am afraid we are at a point where communication is becoming difficult, and not so constructive. Interesting ideas have been expressed up to now, IMO. But we cannot go on forever with the same arguments.

    So, I will not comment on the points about which we have probably said all we had to say.

    I will comment, instead, on my calculation, and the clarifications you ask for.

    Confused on this. If the transition from A to A1 is naturally selected, then why is the probability 1:2^150? In a large population, beneficial mutations will reach fixation with probability of about 2s, where s is the selection coefficient.

    No. A1 is naturally selected. A1 is the midpoint selectable intermediate. The transition from A to A1 happens by RV, and its dFSI is 150 bits (it splits in two the original dFSI of 300 bits).

    Similarly, the transition from A1 (expanded to the whole population) to the final state has a dFSI of 150 bits, and probabilistic resources comparable to those of the first transition (because A1 has been expanded).

    If A1 did not expand, the probabilistic resources for the second transition would be much lower, because the second transition could occur only in one clone of the original population. Under those circumstances, the probabilities are multiplicative, and the whole complexity of the final event would be 300 bits.

    The expansion of A1 changes this scenario radically. Now the second transition has more or less the same probability as the first one. That’s why I use the binomial distribution to compute the probability of having two events like that in the time span.

    That has nothing to do with the probability of fixation. Here fixation is assumed to happen, in a deterministic and complete way.

    I hope that clarifies some aspect of my reasoning.


  501. So I would say the existence of those letters on the the intertubes was the result of some agency.

    OMTWO

    Yes, that’s right.

    Yeah baby, I win!

    But that’s not the point.

    Yes, it is.

    By definition all letters printed in a book or on a screen are there via some agency. But none of this speaks to the content of the data itself.

    That is a separate question. We do NOT have to know the content to infer design.

    If those letters were scratched on a monolith on the dark side of the moon, the fact that they were put there by “an agency” would be the least interesting thing about them.

    Perhaps to you. But then again you think a ribosome is a genome.

    What they mean would be far more interesting.

    They may not mean anything. And without a “Rosetta Stone” or an endless supply of funds, we would most likely never figure it out. However, just its existence would tell us more in the short term. And there would be no reason to look for any meaning without first determining design.

    Yet it seems you would be happy to leave it at that.

    In your case, absolutely. In some real world case, it would all depend.

  502. Zachriel on nylonase:

    The new function wasn’t designed. It evolved.

    Evolved by design via “built-in responses to environmental cues”.

  503. OMTWO:

    I’m simply asking can ID tell us anything at all about the strings in question.

    But that doesn’t have anything to do with ID. And it doesn’t have anything to do with evolutionism.

    So what is your point, besides proving that you are a clueless strawman designer? Or is that what you are shooting for?

  504. Zachriel:

    Randomized genomes will quickly climb local peaks.

    Please show us an organism with a randomized genome capable of replication.

    Good luck with that, mouth…

  505. Eric,

    David L. Abel puts it like this:

    The number of binary decision nodes is measured in “bits.” Note that bits never measure binary choices. Bits measure only the number of binary decision nodes. Bits are a measure of binary choice opportunities, not the specific binary choices themselves…

  506. OMTWO:

    But if Seti were ever to post a signal they want the world to help decode it’ll be quite clear what’ll happen at UD with regards to it.

    It all depends on what they are paying.

  507. And so you are 1 out. It really is only 2000 characters.

    2000 without the newline character.

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable.

    I’d say designed. So yeah, lump me with Joe.

  508. dr who:
    What is the difference between 4^10^7 choices and an infinite number?

    One is finite and the other isn’t?

    One is a number and the other isn’t?

    http://scienceblogs.com/goodma.....-a-number/

    http://mathforum.org/library/d.....62486.html

  509. So, PaV. I guess dr. who was just quoting you?

  510. Zachriel:

    But we can see how the complex structure evolved in incremental, selectable steps. There’s no barrier.

    So complex stuff that already existed shifted around and you say this is proof that the complex stuff evolved in incremental steps?

    So by analogy, since recombination moves complex stuff around, recombination is proof that the complex stuff evolved?

  511. Allan Miller:

    For my part, I never shut up about recombination. It is a very important force. And it has clearly been of great historic significance, as witness the many recurring sequences, in both sense and antisense orientations, in functionally unrelated parts of the genome. If one is lukewarm about common descent, of course, one will argue that these are all the same or similar due to common design. But ‘lateral’ within-genome duplication makes exactly the same prediction as whole-genome duplication in descent: a nested hierarchy of markers. The same techniques of phylogenetic tree-building yield the same very strong support for either:

    Yes, I have pointed this out a number of times and asked for evidence that it is in fact the case. Where are the within-genome phylogenetic trees? I’m sure gpuccio would love to see some that show how protein domain superfamilies are related by descent with modification from a common ancestor.

    http://supfam.cs.bris.ac.uk/SUPERFAMILY/

    As for recombination and rugged landscapes, do you suppose the evolution of recombination itself took place on a nice smooth landscape?

    http://en.wikipedia.org/wiki/Recombinase

  512. Mung:

    I’m sure gpuccio would love to see some that show how protein domain superfamilies are related by descent with modification from a common ancestor.

    I definitely would! How did you guess? :)

  513. :)

  514. To OMTWO (at TSZ):

    So what I’m asking, in essence, is that you test or evaluate my documents in the same manner as scientists daily test for design in other sciences. So it seems that nobody is able to recognize patterns arranged by an intelligent cause for a purpose, if those documents indeed contain such a pattern. Just knowing that one did and one did not, for example, would essentially solve my problem, but it seems, despite this being the self-proclaimed reason for ID’s existence, nobody can actually do it!

    Don’t lie!

    I have answered very clearly that no design inference can be made for either string. That should solve your problem. Neither string is designed.


  515. That is a separate question. We do NOT have to know the content to infer design.

    OMTWO:

    But you are not inferring design at all.

    I have determined agency involvement was required. That is all I have to do.

    You think that ribosomes have a non-physical component but can’t prove it.

    Science is not about proving things. I infer there is a non-physical component because artificial ribosomes do NOT work- and artificial ribosomes have the SAME physical components as the real thing.

    You’ve established design in both my documents (they are on the internet!) but have failed to look for “meaning”.

    As I said content is irrelevant, just like you.

  516. And OM, as I have already said, biological information refers to function. We OBSERVE the functionality. We do NOT try to guess what the function, if any, is.

    Just because you are a scientifically illiterate dullard doesn’t mean your trope refutes ID.

  517. Zachriel:

    The reptilian middle ear is much less complex than the mammalian middle ear.

    Perhaps, but your position cannot account for either of them. Nor can it account for reptiles or mammals.

    So what do you have besides your misunderstandings, equivocations and bald assertions?

  518. gpuccio:
    The “known causes” have nothing to do with the assessment of dFSCI.

    onlooker:

    Yes, they do.

    No, they do NOT. Ya see, just because you can misinterpret what has been said doesn’t mean it is true. And acting like a little child when clarification is offered proves what I have said all along- you are nothing but a loser.

    But I am sure that you are impressed with yourself.

  519. Mung @511:

    I hadn’t seen that quote before. Thanks for vindicating me!

  520. Hi Eric,

    I’ve had Abel’s book The First Gene for almost a year now and have finally decided to have a serious go at completing it.

    I like how he puts things a lot of the time.

  521. To Zachriel (at TSZ):

    See Keith’s description

    No, thank you. Already did, and it made my views about human nature even worse than they already were.

    I leave Keith’s masterpieces to you, who seem to appreciate them.

    You are always welcome to comment on more serious issues, as you can do.

  522. I’m still interested in how recombination itself evolved on a smooth fitness landscape.

  523. Zachriel:

    Also, we’re still left with your leaky bucket explanation. See keiths’ description.

    In spite of its utter stupidity we addressed it. We’re still awaiting his response.

  524. Mung (528):

    I’m still interested in how recombination itself evolved on a smooth fitness landscape.

    Look it up!! Here’s something I found quite easily:

    http://www.ncbi.nlm.nih.gov/pm.....MC1208206/

  525. To Keiths (at TSZ):

    You go on stating nonsense. What is your problem?

    I consider that a string exhibits dFSCI only if both these criteria are satisfied:

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Then I infer design.

    Your rantings have nothing to do with that. Again, what is your problem?

  526. To Zachriel (at TSZ):

    Can’t seem to resolve the apparent contradiction between the first statement and b).

    The contradiction is not even apparent. It just does not exist.

    What would it be?

    The “first statement”, as far as I understand, would be:

    “The “known causes” have nothing to do with the assessment of dFSCI.”

    And it’s perfectly true. First of all, “known causes” does not seem to be something that I have said. I have checked this thread, and it only appears in Mung’s post #357, where he quotes you. So, it is your concept, and your words.

    My concept and words you can find in my many times quoted statement #5:

    #5) Any object whose origin is known that exhibits dFSCI is designed (without exception)

    And you can find a detailed explanation of that point in my post #341. I paste it here again:

    Just to be more clear. We define a property (dFSCI) and how to assess it in objects.

    Then we assess that property blindly in any number of strings of which we may know the true origin. For instance, we mix any number of meaningful strings designed by humans with any number of randomly generated strings, all of them long enough to be beyond the threshold of 500 bits. And then we ask independent observers to tell us which are the meaningful strings designed by humans and which are those that do not allow a design inference.

    IOWs we are empirically testing the specificity of the dFSCI property when it is used to infer design in a set of objects where the true origin can be known for certain.

    It is an empirical testing, and an empirical observation. Not “a conclusion”.

    And you can find an even more detailed explanation in my post #362. I paste it here again:

    I don’t understand your reference to known causes. Either you misunderstand, or you don’t read what I write with even minimal attention.

    The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    The “known causes” enter the scene only when we want to test the procedure against real examples. So, someone takes n strings of sufficient length whose origin he knows because he was responsible for their collection. Let’s say that 5 strings are taken from books, of which we know the author. 5 strings are generated by a random generator.

    Then another person, who does not know the origin of the 10 strings, evaluates dFSCI in them. He will correctly attribute dFSCI to the first 5, and infer design. Take for example the paragraph about Shannon’s biography from Wikipedia. The questions are:

    a) Is the dFSI of the string high? Answer: Yes.

    b) Do we know a necessity mechanism that can output that paragraph? Answer: No.

    So, we infer that the piece was written by a designer. And we are right. The first person, who collected the strings, knows that it was written by someone, and can confirm that the inference is correct.

    For the 5 randomly generated strings, I will not be able to recognize any function (meaning) in them, and I will not infer design. Correctly. The first person will confirm that they were generated randomly, without any intelligent design.

    So, where does the necessity mechanism come into action?

    Suppose that one of the strings is a series of aaaaaa, of the same length as the Shannon biography. Will I infer design? No. Because such a string could have been produced by a necessity mechanism, such as the tossing of a coin which has the “a” symbol on both sides. Even if I did consider the string specified (for example, because it is compressible), I would not consider it complex (for the same reason: because it is highly compressible, its Kolmogorov complexity is very low). Even if the string was designed, that would be a false negative.

    Three different kinds of strings. Three different empirical assessments of dFSCI. Three independent confirmations from the person who knows the origin of the strings. No false positives. Maybe a false negative.

    100% specificity.
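The blind-test protocol described in this comment (one person collects strings of known origin; a second, blind to the origins, classifies them) can be sketched as a toy simulation. The `looks_meaningful` classifier below is a hypothetical stand-in for a real dFSCI assessment, not gpuccio's procedure; it merely checks whether a string is mostly dictionary words:

```python
import random
import string

# Toy stand-in for a dFSCI assessment: a string "passes" if most of its
# tokens are recognizable English words. A real assessment would also
# require high functional information and no known necessity mechanism.
WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog",
         "information", "theory", "was", "founded", "by", "claude", "shannon"}

def looks_meaningful(s):
    tokens = s.lower().split()
    hits = sum(1 for t in tokens if t in WORDS)
    return len(tokens) > 0 and hits / len(tokens) > 0.8

random.seed(0)

# Person 1 prepares strings of known origin: 5 designed (English text),
# 5 generated by a random character source.
designed = [
    "the quick brown fox jumps over the lazy dog",
    "information theory was founded by claude shannon",
    "the lazy dog jumps over the quick brown fox",
    "shannon jumps over the brown fox",
    "the dog was lazy over information theory",
]
rand_strs = ["".join(random.choice(string.ascii_lowercase + " ")
                     for _ in range(45)) for _ in range(5)]

# Person 2, blind to the origins, classifies each string.
false_pos = sum(looks_meaningful(s) for s in rand_strs)
true_neg = len(rand_strs) - false_pos
specificity = true_neg / (true_neg + false_pos)
print(specificity)  # no random string passes, so specificity is 1.0
```

The point of the exercise matches the comment: specificity is measured only against the strings whose non-designed origin is independently known.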

    It’s simple, but you will probably not understand, or pretend that you don’t understand. I really don’t know; I have lost all hope of having a constructive discussion with you all.

    IOWs, knowing the “origin” of an object is an empirical, historical datum that is useful when we want to blind-test our procedure.

    My point b), instead, says:

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    I am speaking, as everyone can see, of a “known necessity mechanism that can explain the string”. That obviously has nothing to do with historically knowing its origin.

    A known explanatory mechanism is one thing. A cause is another thing. An empirically known historical origin is something else again.

    Do you have problems with words? I had never noticed that. :)

  527. To OMTWO:

    You said:

    So what I’m asking, in essence, is that you test or evaluate my documents in the same manner as scientists daily test for design in other sciences. So it seems that nobody is able to recognize patterns arranged by an intelligent cause for a purpose, if those documents indeed contain such a pattern. Just knowing that one did and one did not, for example, would essentially solve my problem, but it seems that, despite this being the self-proclaimed reason for ID’s existence, nobody can actually do it!

    Emphasis added.

    Now you say:

    Joe and Mung say it’s designed.

    Gpuccio says it’s not.

    So, you are definitely lying.

    We answered your question. Maybe one of us is wrong. Maybe we considered different questions.

    I have clearly stated that we could infer design for both sheets with strings printed. If instead we consider the strings themselves, we cannot infer design.

    That is in perfect accord with the definition of dFSCI and of design inference. I challenge you to demonstrate the contrary.

    So, in the end, you are simply lying.

  528. 534

    Hats off to you GP.

    You have done a great job of patiently restating your points with clarity against an unyielding and unreasonable pack of ideologues.

    I hope Liz is happy with the anti-intellectual subterfuge she’s handed off the keys to.

  529. To OMTWO (at TSZ):

    You are a liar just the same.

    I did not infer design for either string.

    Even if one of them, or both, have a function that I did not recognize, I have given one or two false negatives.

    Which is exactly what can be expected in a design inference.

    If I had given one or two false positives, I would have failed.

    But not so.

    You don’t understand the ID theory, do you?

    Or you are just a liar.

  530. I hope Liz is happy with the anti-intellectual subterfuge she’s handed off the keys to.

    Got to give her credit for one thing at least, showing how easy it is to confuse natural selection with intelligent selection and thinking one can produce the same effects as the other.

  531. You don’t understand the ID theory, do you?

    Bet on it.

  532. To Zachriel (at TSZ):

    Okay. So we’re working with a trichotomy. It’s really just another restatement of the Explanatory Filter.

    It definitely is. Whoever said anything different? The important point, however, is that it is an empirical trichotomy, not a logical one.

    The specific problem is that evolution has both random and deterministic aspects

    Sure.

    Gpuccio will argue that evolution alternates the two mechanisms, therefore is excluded.

    No. I simply argue that neo-Darwinian evolution has to offer explicit paths for what it tries to explain; that the deterministic effects in those paths must be verified and demonstrated; and that what remains for RV to do must be in the range of what RV can do (IOW, dFSCI can never be reached by RV alone). My modeling of RV and NS had exactly this purpose.

    That argument doesn’t work, though, because the test for “highly functional information” only precludes completely random sequences,

    The argument does work, because, as you can verify in my examples, it is applied only to the RV part (each random transition). My model also allows us to compute what remains to happen by RV in a system after we have considered the deterministic effects of NS, and to compute a global probability for the whole system.

    not incremental increases in functional complexity.

    You must be distracted. I have always admitted, many times to you directly, that incremental selectable increases in functional complexity could do the job. But they simply do not exist in biology.

    So please, show those incremental increases in functional complexity, each of them of low complexity, each of them naturally selectable in respect to what was there before, for most basic protein domains (you can just start with one, then we will see).

  533. I just love it when people find out for themselves what I have, unfortunately, known for years. Thank you to the regulars of TSZ for helping me prove my point, again.

  534. They still think that natural selection can create the appearance of design. NS spreads alleles around. That’s it.

  535. To OMTWO:

    In the meanwhile you may continue to call me a liar

    Definitely.

  536. Petrushka (at TSZ):

    Thank you for offering again the usual, old, trivial non-arguments. I am in some way fond of them, as you may know.

    And they are a refreshing shower of relative sense and integrity, after the experiences with Keiths and OMTWO.

    So, thank you!

  537. Jerad (530):

    Mung: I’m still interested in how recombination itself evolved on a smooth fitness landscape.

    Look it up!! Here’s something I found quite easily:

    http://www.ncbi.nlm.nih.gov/pm…..MC1208206/

    Did you even bother reading it? Just what is it you think you found? A title that makes it sound like the paper might actually explain the appearance of the mechanisms by which recombination occurs?

  538. Zachriel:

    The specific problem is that evolution has both random and deterministic aspects.

    The specific problem is that only the RV part can throw up anything novel, and it may or may not even be functional when it does. All the ‘deterministic’ part can do is spread it through the population once it has arisen.

    And I even question the deterministic aspect. Even given a new function that confers some change in fitness there is still a huge chance component.

  539. OMTWO:

    So frankly, your opinion of what I do and do not understand with regard to “ID Theory” is irrelevant until and unless you can prove that you can actually do something with “ID Theory” that does not revolve around your misunderstandings of evolution.

    I take it you didn’t even bother to follow my links.

    And you even described the steps that I took myself, and could obviously do more if I really cared to, so you can’t even be honest.

    For example, I looked at frequencies of the various letters and looked for patterns that would indicate that the two strings were somehow related. But I frankly think it’s a waste of my time because it won’t prove diddly.

  540. Zachriel:

    That’s right. It requires that incremental steps connect the various “islands of function”.

    That’s what gpuccio has been saying.

    That diverging descent with modification leads to a nested hierarchy is important evidence.

    Evidence of what? That intermediate steps once existed? I’m not sure how it does that, but if you care to say more we’ll consider it.

    We have all sorts of examples of intermediates in nature.

    Intermediates between protein domain superfamilies?

    gpuccio has expressed quite some interest in seeing those, if you could just ever be bothered enough to post them.

    Intermediates between the major phyla which first appear in the Cambrian? Show us.

  541. Zachriel:

    That’s right. It requires that incremental steps connect the various “islands of function”.

    Some mutation magically appears, a single ‘step towards’ some island of function which at least has the potential of resulting in some lucky organism leaving more offspring. What is the chance that the mutation will be lost due to chance alone, rather being spread through the population by this ‘deterministic’ force you mention?

    Assume another mutation magically appears which at least has the potential of resulting in some lucky organism leaving more offspring. What is the chance that B can be added to A as a coordinated ‘step’ towards the same ‘island’ as A? What is the chance that B even takes place in an organism carrying A?
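The chance-of-loss question raised here has a classical answer (a beneficial mutation with small advantage s fixes with probability roughly 2s, so it is usually lost to drift), which a toy haploid Wright-Fisher simulation can illustrate. The population size, selection coefficient, and trial count below are arbitrary illustrative choices:

```python
import random

def survives(pop=200, s=0.05, max_gen=5000, rnd=random):
    """Track one new beneficial allele (relative fitness 1+s) in a haploid
    Wright-Fisher population of size pop. Returns True if it fixes."""
    count = 1  # a single initial copy of the mutant
    for _ in range(max_gen):
        p = count / pop
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
        count = sum(rnd.random() < p_sel for _ in range(pop))  # drift step
        if count == 0:
            return False   # lost despite being beneficial
        if count == pop:
            return True    # fixed
    return True  # still segregating after max_gen (rare): count as surviving

random.seed(1)
trials = 500
fixation_rate = sum(survives() for _ in range(trials)) / trials
print(fixation_rate)  # typically near the classic 2s = 0.1 approximation
```

Most runs end in loss within a handful of generations, even though every copy carries a 5% fitness advantage, which is the "huge chance component" the comment points to.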

  542. keiths:

    No, Dembski explicitly states that P(T|H) represents the probability of producing the object in question via “Darwinian and other material mechanisms.” If that probability is low enough, then natural selection and other “material mechanisms” are ruled out.

    I don’t see how the LCCSI, even if it were correct, would solve the circularity problem, since P(T|H) is right there in the CSI equation, and H includes “Darwinian and other material mechanisms.” You have to know that something couldn’t have evolved before you attribute CSI to it.

    You’re either ignorant, a liar, or both.

    Assigning a probability that something evolved by Darwinian and other material mechanisms isn’t the same as knowing that it couldn’t have evolved.

    And your claim that if the probability is low enough the probability is ruled out is just stupid. The probability is what it is. It doesn’t change because of how high or how low it is. And the probability isn’t ruled out, it’s taken into account. That’s what P(T|H) means.

    liar.

  543. keiths:

    You have to know that something couldn’t have evolved before you attribute CSI to it.

    Mung:

    Assigning a probability that something evolved by Darwinian and other material mechanisms isn’t the same as knowing that it couldn’t have evolved.

    keiths:

    I didn’t say it was. I said “If that probability is low enough, then natural selection and other “material mechanisms” are ruled out.”

    keiths:

    You have to know that something couldn’t have evolved before you attribute CSI to it.

    Looks to me like you said it was.

  544. Mung (543):

    Did you even bother reading it? Just what is it you think you found? A title that makes it sound like the paper might actually explain the appearance of the mechanisms by which recombination occurs?

    Did you read it? You asked a question, I found something that addresses that question. What’s the problem?

    Why did you ask the question if you didn’t want to find out?

  545. Mung:

    I don’t know if Keiths and OMTWO are the same person, as somebody has suggested, but they certainly share many moral attitudes. A tendency to lying seems to be one of them.

    They seem to believe that, if one is smart enough, one can tell any lie and people will believe it.

    Well, it may be partially true: some people, maybe many, will believe it.

    It seems that those committed to ideology are ready to believe anything. Why not a smart lie?

  546. To Joe Felsenstein (at TSZ):

    That is where gpuccio invokes the ruling-out of deterministic natural causes, and where there seems to be circularity as he does so.

    Thank you for the “seems”. At least you have avoided an explicit lie.

    There is no circularity. That’s the simple truth.

    You must have the courage to defend your theory where it can be defended: showing that RV + NS could do the job. At that level, we can constructively and honestly discuss.

    All the nonsense about circularity, smartly started by Keiths, who obviously has no moral constraints, and happily followed by many others, including Zachriel, is really degrading and cognitively infamous.

    Have I used strong words? Yes, I have.

  547. To Petrushka (at TSZ):

    I think everyone accepts the possibility that there could be insurmountable gaps. At least in principle.

    It’s never worked in any other branch of science, but there could be a first time.

    Thank you for the admission. I always said you are the best! :)

    Let’s say it could work for designed things.

  548. Mung (548):

    One of Dr Dembski’s examples from his paper:

    Next, define p = P(T|H) as the probability for the chance formation of the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure). Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.

    and further on . . .

    consider first that the product φS(T)·P(T|H) provides an upper bound on the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than T and whose probability is no more than P(T|H). The intuition here is this: think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? That’s what φS(T)·P(T|H) computes, namely, whether of all the other targets T′ for which P(T′|H) ≤ P(T|H) and φS(T′) ≤ φS(T), the probability of any of the targets being hit by chance according to H is still small. These other targets T′ are ones that, in other circumstances, S might have picked to match up an observed event with a pattern of equal or lower descriptive complexity than T. The additional requirement that these other targets have probability no more than P(T|H) ensures that S is ruling out large targets in assessing whether E happened by chance. Hitting large targets by chance is not a problem. Hitting small targets by chance can be.

    We may therefore think of φS(T)·P(T|H) as gauging the degree to which S might have been self-consciously adapting the pattern T to the observed event E rather than allowing the pattern simply to flow out of the event. Alternatively, we may think of this product as providing a measure of the artificiality of imposing the pattern T on E. For descriptively simple patterns whose corresponding target has small probability, the artificiality is minimized.

    keiths has it right: Dr Dembski is trying to find a way to rule out chance and natural processes by looking at the probability of something arising. keiths didn’t say the probability is ruled out, he said that ‘natural mechanisms’ are ruled out.

  549. Jerad:

    keiths has it right: Dr Dembski is trying to find a way to rule out chance and natural processes by looking at the probability of something arising. keiths didn’t say the probability is ruled out, he said that ‘natural mechanisms’ are ruled out.

    The simple point is: they are “ruled out” empirically, in the sense that they are not a good scientific explanation.

    The correct scientific term would be “they are rejected”. Indeed, chance + natural processes, here, are the null hypothesis. As in any other Fisherian hypothesis testing, we reject the null hypothesis if we find that the data we observe are too unlikely under the null hypothesis.
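The Fisherian logic gpuccio appeals to, rejecting the null when the data are too unlikely under it, can be made concrete with a standard significance test. The coin example and the α = 0.05 threshold are illustrative choices, not anything from the thread:

```python
from math import comb

def binomial_p_value(heads, n, p=0.5):
    """One-sided tail probability of observing >= heads in n tosses of a
    coin with per-toss heads probability p (the null hypothesis H)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(heads, n + 1))

# Null hypothesis H: the coin is fair. Observation E: 95 heads in 100 tosses.
alpha = 0.05
p_val = binomial_p_value(95, 100)
print(p_val < alpha)  # True: the data are too unlikely under H, so H is rejected
```

As in the comment, "ruling out" chance here is just rejecting a null hypothesis at a significance level, not computing a probability of exactly zero.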

  550. GP: And, the result does not so much hinge on a precise probability metric as on the results of sampling theory that show why a relatively tiny sample of a very large pop will most likely pick up the bulk of the distribution. The needle in the haystack effect. KF

  551. gpuccio & KF (555, 556):

    I was merely trying to explain to Mung why keiths was not lying, that his, keiths’, interpretation of what Dr Dembski published was essentially correct.

  552. To Keiths (at TSZ):

    I have calmly explained to you, in three separate comments (link, link, link), why the dFSCI concept (as you’ve defined it) is circular. You obviously don’t agree, but instead of responding rationally, you’ve chosen to impugn my character, as if that were somehow a rebuttal.

    No. There is nothing to rebut. Your “argument” is not about dFSCI, but about a parody of it that you have invented. That’s why I have impugned your character. There is nothing correct in your argument, either cognitively or morally.

    The truth or falsehood of my argument doesn’t rest on whether I am an angel, a devil or something in between.

    Certainly not. But the quality of your “argument” certainly shows that you are not an angel.

    If you disagree with my argument, show us precisely and explicitly where it fails.

    I still have to see what connection it has with what I say. You have never shown that. You give a reasonable and pertinent argument, and I will rebut it. I cannot rebut arguments that have no meaning.

    If you can’t rebut my argument, then you’ll have to come to grips with the fact — excruciatingly painful though it may be — that I’m right, and that your argument really is circular.

    I would definitely say that the probability of that is well beyond Dembski’s UPB.

    Yes, and you’ve made yourself look foolish and intemperate. I suspect you’ll regret it once you cool down.

    I am very cool now, and strangely I don’t regret it at all. Your suspicion (or was it an inference by analogy?) seems to be wrong.

  553. To Keiths (at TSZ) and Jerad here:

    I will show the blatant mistakes in Keiths’ post about Dembski, just because it is in a more explicit form than his “bucket” parody. I will refer, however, to my terminology and my definitions, not Dembski’s, as Keiths’ point seems to be that I have inherited the circularity from Dembski.

    Keiths’ wrong points are:

    1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.

    Wrong, or at least grossly incomplete. The main requirement to assess that a string exhibits dFSCI is that we must recognize and define a function for it. That is the specification, which is the true sign of design. But we must also assess that the string exhibits complex specification, and to do that we have to show that:

    a) The dFSI linked to the function we have defined is high.

    b) It is not compressible by a deterministic mechanism.

    2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

    Wrong.

    a) We can decide if RV alone is a good explanation for the string by evaluating the dFSI in the string itself.

    b) We can decide if some explicit necessity mechanism can explain the string just by critically analyzing and testing the proposed mechanism.

    c) We can decide if some explicit necessity mechanism can lower the improbability of obtaining the result by RV by evaluating the compressed dFSI.

    3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

    P(T|H) is exactly what I said at point c): the probability of getting the observed string by RV after having taken into account the necessity effects.

    4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.

    OK.

    5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object. We deem it to have CSI and we conclude that it was designed.

    No. Completely wrong. We just conclude that the string exhibits dFSCI.

    Because of that conclusion, and because we know from empirical tests made on human artifacts and random strings that dFSCI can detect design with 100% specificity in all cases where the true origin of the string can be independently known by other means (like historical observation), we infer design for the string.

    This is your lie. This is your intentional parody, rebutted by me many times. You would have circularity if the conclusion that the string is designed were a logical implication derived from the definition itself.

    But that’s not the case. That has never been the case. Design is not deduced from the definition. Design is inferred by analogy, because our empirical experience tells us that dFSCI is a very good marker of design, with 100% empirical specificity. There can be no circularity in an empirical observation. And you are a liar.

    6. To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed — that is, that it could not have been produced by unguided evolution or any other unintelligent process.

    A good summary of your errors.

    7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

    This is really a trivial lie. The truth is:

    We infer that something was designed because it exhibits the main sign of design (functional specification), and because it exhibits an objective property, dFSCI: the complexity necessary to express that function is too high and there is no other known necessity explanation that could have generated that functional string.

    We make that inference with good safety, because the objective marker we defined (dFSCI) has 100% specificity in detecting design in all empirical tests.

    IOWs, our marker (dFSCI), when present and correctly assessed, seems to be empirically able to distinguish between true designed specifications (those strings where the function was inputted by an intelligent designer) and the cases of pseudo-specifications (those strings where a function can be recognized and defined, but it was not inputted by an intelligent designer, but was rather a result of chance or of necessity, or of some mix of the two).
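gpuccio's dFSI is, in essence, Szostak-style functional information: the negative log2 of the fraction of the configuration space that implements the defined function. A minimal sketch, with made-up counts (both the 150-residue length and the 10^30 functional variants are illustrative assumptions, not measured values):

```python
from math import log2

def dfsi_bits(functional_count, total_count):
    # dFSI as the negative log2 of the target-space / search-space ratio
    return -log2(functional_count / total_count)

# Hypothetical 150-residue protein region: 20^150 possible sequences,
# of which we *assume* 10^30 variants retain the function.
bits = dfsi_bits(10 ** 30, 20 ** 150)
print(round(bits, 1))  # about 548.6 bits, above a 500-bit threshold
```

On this accounting, the inference hinges entirely on how large the functional fraction really is, which is exactly what the thread's protein-domain arguments dispute.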

  554. gpuccio (559):

    I know you are NOT directly working from Dr Dembski’s paper but keiths was/is. And it’s a source I have easy access to and I know where things are.

    Looking at Dr Dembski’s paper he clearly says that non-random, low probability sequences are more compressible than random ones:

    from page 12:

    . . . the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

    and from page 11:

    In other words, most sequences are random in the sense of being algorithmically incompressible. It follows that the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.

    If something is algorithmically compressible it can be expressed algorithmically in a shortened form without losing information. Purely random patterns are the least compressible because there is no algorithm to generate shorter versions. Non-random sequences have ‘patterns’ that can be abbreviated, so they are compressible following a scheme or code.
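The compressibility contrast described here can be demonstrated with an off-the-shelf compressor as a rough proxy for algorithmic compressibility (zlib is not Kolmogorov complexity, but the ordering it produces matches the argument; the three sample strings are arbitrary):

```python
import random
import zlib

random.seed(42)
n = 10_000

repetitive = b"a" * n  # one repeated symbol: highly patterned
words = ("the quick brown fox jumps over the lazy dog information "
         "theory entropy channel noise signal code message").split()
# word salad as a crude stand-in for English-like text
english = " ".join(random.choice(words) for _ in range(2500)).encode()[:n]
rand_bytes = bytes(random.randrange(256) for _ in range(n))  # no pattern

for label, data in [("repetitive", repetitive),
                    ("word salad", english),
                    ("random", rand_bytes)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(label, round(ratio, 3))
# Expected ordering: repetitive < word salad < random, with the random
# bytes barely compressing at all (ratio near 1).
```

The repetitive string collapses to a tiny fraction of its length, the patterned text compresses substantially, and the random bytes do not compress, which is the sense in which "most sequences are random".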

    I think the pertinent part of his paper that addresses the ability to detect complex, specified information without resorting to perception is found on page 24:

    To see that χ is independent of S’s context of inquiry, it is enough to note two things: (1) there is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article “The Chance of the Gaps”). (2) Even though χ depends on S’s background knowledge through φS(T), and therefore appears still to retain a subjective element, the elimination of chance only requires a single semiotic agent who has discovered the pattern in an event that unmasks its non-chance nature. Recall the Champernowne sequence discussed in sections 5 and 6 (i.e., (ψR)). It doesn’t matter if you are the only semiotic agent in the entire universe who has discovered its binary-numerical structure. That discovery is itself an objective fact about the world, and it rightly gets incorporated into χ via φS(T). Accordingly, that sequence would not rightly be attributed to chance precisely because you were the one person in the universe to appreciate its structure.

    Sorry, some of the formatting got lost. But you can see that he clearly addresses the subjectivity issue. So, I think, whether or not subjectivity has been eliminated in Dr Dembski’s derivation comes down to whether or not you accept the argument he makes in this paragraph.

    I don’t think he has eliminated subjectivity.

  555. Bravo gpuccio. Your takedowns are awe inspiring and superbly educational. I just hope you keep it coming for as long as you can muster the patience to parry the provocations and distractions. You’re rating a black belt so far.

    Truly, the smart play on their side would have been to zip it, exiting with a small loss. But we know their pedantry, conceit, and general dislike of immaterial reality will keep them digging tunnels and talking in tongues.

    But no worries, we have able tunnel detectors and interpreters like you, KF, UB, Mung, Joe, PaV, StephenB, and VJTorley to clear the nitty and get at the gritty.

    I(we) thank you deeply for your time and service.

  556. keiths:

    You have to know that something couldn’t have evolved before you attribute CSI to it.

    Only a moron would say something like that. Either a moron or someone who is dishonest.

    CSI/ biological function is an OBSERVATION. And we make the OBSERVATION BEFORE knowing what caused it.

  557. Steve:

    Thank you! Appreciated. :)

  558. Jerad:

    OK, but Keiths has been using those false points to argue for a nonexistent circularity, both in Dembski and in me.

    To be clear:

    a) I don’t agree with Dembski that a string should be compressible to be designed, if he ever said that.

    b) I don’t want to eliminate subjectivity. The foundation of all my thinking is objective subjectivity, that is, conscious representations. That’s why I use the concept of a subjective designer both in the definition of design and in the recognition/definition of the function.

  559. Steve:

    But no worries, we have able tunnel detectors and interpreters like you, KF, UB, Mung, Joe, PaV, StephenB, and VJTorley to clear the nitty and get at the gritty.

    It’s a good team! :)

  560. Law and regularity can produce compressible data.

  561. gpuccio (564):

    a) I don’t agree with Dembski that a string should be compressible to be designed, if he ever said that.

    Thank you. If I missed that declaration earlier then I apologise.

    b) I don’t want to eliminate subjectivity. The foundation of all my thinking is objective subjectivity, that is, conscious representations. That’s why I use the concept of a subjective designer both in the definition of design and in the recognition/definition of the function.

    Okay. You do differ from Dr Dembski. When talking to ID proponents on this forum I usually assume they are working from his view but clearly I should not do that.

    I’m not quite clear what you mean by objective subjectivity but I’ll try and skim through some of the previous posts on this thread before I comment on that.

  562. Jerad:

    Did you read it? You asked a question, I found something that addresses that question. What’s the problem?

    I read more than just the title. I read enough to know it doesn’t address the issue. That’s the problem. :)

    Why did you ask the question if you didn’t want to find out?

    I do want to find out. That paper just doesn’t provide any answers. Read it for yourself and make your case that it does.

    It’s about how recombination could help, not about where, when and how recombination originated. And it certainly doesn’t talk about the sort of fitness landscape that was present when recombination originated.

    The title is totally misleading. Read the paper and tell us why I am wrong.

  563. Jerad,

    You can only assert that keiths is not lying by ignoring what he wrote.

    Now on to the question of a probability calculation ruling something out. How does that happen? 1/0?

    Show me a mathematical example, if you will.

  564. gpuccio:

    I cannot rebut arguments that have no meaning.

    lol

    That’s essentially the same thing I told him about his ID is not compatible with the evidence for common descent “argument.”

  565. Mung (569):

    Now on to the question of a probability calculation ruling something out. How does that happen? 1/0?

    Show me a mathematical example, if you will.

    Dr Dembski makes such determinations throughout his paper:

    from page 3

    More formally, the problem is to justify a significance level α (always a positive real number less than one) such that whenever the sample (an event we will call E) falls within the rejection region (call it T) and the probability of the rejection region given the chance hypothesis (call it H) is less than α (i.e., P(T|H) < α), then the chance hypothesis H can be rejected as the explanation of the sample.

    That’s pretty basic hypothesis testing stuff and Dr Dembski uses that basic logic many times again, on page 12 for example:

    Suppose now that H is a chance hypothesis characterizing the tossing of a fair coin. Any output sequence v in the reference class Ω will therefore have probability 2^–N. Moreover, since the extremal set T_κ contains at most 2^(κ+1) elements of Ω, it follows that the probability of the extremal set T_κ conditional on H will be bounded as follows:
    P(T_κ|H) ≤ 2^(κ+1)/2^N = 2^(κ+1–N).
    For N large and κ small, this probability will be minuscule, and certainly smaller than any significance level we might happen to set. Consequently, for the sequences with short programs (i.e., those whose programs have length no greater than κ), Fisher’s approach applied to such rejection regions would warrant eliminating the chance hypothesis H.

    Notice the language: “would warrant eliminating the chance hypothesis” and in the previous example “then the chance hypothesis H can be rejected” based on a probability measure.
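The bound in the quoted passage is just a counting argument: there are fewer than 2^(κ+1) binary programs of length at most κ, so at most that many N-bit sequences can be compressed to κ bits or fewer. A quick numeric check (N = 100 and κ = 20 are arbitrary illustrative values):

```python
# Number of binary programs of length 0..kappa is 2^(kappa+1) - 1, so at
# most that many N-bit sequences are compressible to <= kappa bits.
N, kappa = 100, 20
compressible_max = sum(2 ** L for L in range(kappa + 1))  # 2^(kappa+1) - 1
prob_bound = 2 ** (kappa + 1 - N)                         # the quoted bound
print(compressible_max / 2 ** N <= prob_bound)  # True
print(prob_bound)  # 2^-79, about 1.65e-24: below any usual significance level
```

So under fair coin tossing, landing in the compressible set is far rarer than any conventional α, which is why the passage says Fisher's approach would warrant rejecting H.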

    I am only pulling out a few examples where the language is blatantly clear but the same logic runs throughout Dr Dembski’s paper.

    On pages 18 & 19 the use is very clear:

    The additional requirement that these other targets T′ have probability no more than P(T|H) ensures that S is ruling out large targets in assessing whether E happened by chance. Hitting large targets by chance is not a problem. Hitting small targets by chance can be.

    Some targets are ruled out based on a probability requirement.

    On page 20 Dr Dembski again uses similar reasoning:

    More formally, if a pattern T is going to be adequate for eliminating the chance occurrence of E, it is not enough just to factor in the probability of T and the specificational resources associated with T. In addition, we need to factor in what I call the replicational resources associated with T, that is, all the opportunities to bring about an event of T’s descriptive complexity and improbability by multiple agents witnessing multiple events.

    Here the probability, along with other factors, is used. And again it’s the elimination of the chance occurrence or hypothesis.

    On page 21:

    For most purposes, however, χ̃ is adequate for assessing whether T happened by chance. The crucial cut-off, here, is M·N·φS(T)·P(T|H) < 1/2: in this case, the probability of T happening according to H, given that all the relevant probabilistic resources are factored in, is strictly less than 1/2, which is equivalent to χ̃ = −log2[M·N·φS(T)·P(T|H)] being strictly greater than 1

    Some of the formatting and typesetting is lost but it’s the language used I am emphasising.
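    The cut-off quoted from page 21 can likewise be checked directly. A hedged sketch (all the numbers below are hypothetical, chosen only to show that the two equivalent forms of the criterion agree):

```python
from math import log2

# chi_tilde = -log2(M * N * phi_S(T) * P(T|H)). The quoted criterion says
# chance is eliminated when the product is strictly less than 1/2,
# equivalently when chi_tilde is strictly greater than 1.
def chi_tilde(M, N, phi_S, p_T_given_H):
    return -log2(M * N * phi_S * p_T_given_H)

# At the cut-off the two forms coincide: product = 1/2 gives chi_tilde = 1.
print(chi_tilde(1, 1, 1, 0.5))              # 1.0

# Hypothetical resources: M*N = 1e9 trials, phi_S(T) = 1e5, P(T|H) = 1e-40.
# The product is 1e-26 < 1/2, so chi_tilde > 1 and chance is rejected.
print(chi_tilde(1e4, 1e5, 1e5, 1e-40) > 1)  # True
```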

    Here is another restatement on page 23:

    In general, the bigger M·N·φS(T)·P(T|H) — and, correspondingly, the smaller its negative logarithm (i.e., χ̃) — the more plausible it is that the event denoted by T could happen by chance.

    And further down the same page:

    On the other hand, if φS(T) were on the order of 10 or less, χ̃ would be greater than 1, which would suggest that chance should be eliminated. The chance event, in this case, would be that the bank account number had been accidentally reproduced and its contents accidentally retrieved, and so the elimination of chance would amount to the inference that this had not happened accidentally.

    If you have a copy of the article I referenced in the earlier post could you send it to me? I’d like to have it for my records.

  566. Jerad, that’s a lot of words to avoid a very simple question.

    How are probabilities expressed?

    When a coin is tossed, for example, we might express the probability of a heads appearing as 1/2. When a die is tossed, for example, we might express the probability of a given number appearing as 1/6.

    The probability of T given the chance hypothesis H (which includes chance + necessity). It’s a fraction.

    Now if we rule out chance + necessity, as keiths claims we do, then the value of H is 0, is it not?

    So please express that fraction for us and explain how it makes sense. Do you really think that is what Dembski is saying?

  567. Jerad:

    If you have a copy of the article I referenced in the earlier post could you send it to me? I’d like to have it for my records.

    The one that is supposedly on The Evolution of Recombination?

    Click on the PDF link. ;)

  568. keiths: “You have to know that something couldn’t have evolved before you attribute CSI to it.”

  569. Mung, does KS understand that he is making logical impossibility his standard to reject materialistic evolution? Does he understand that no scientific theory can demand acceptance at that level, but instead needs to find more or less direct empirical support for the capacity of the causal factors it claims? This looks a lot like the a priorism that Lewontin et al assert and which Johnson excoriated, quite rightly. Gotta go. KF

  570. Mung (572):

    How are probabilities expressed?

    Generally as fractions or decimals.

    When a coin is tossed, for example, we might express the probability of a heads appearing as 1/2. When a die is tossed, for example, we might express the probability of a given number appearing as 1/6.

    Yup.

    The probability of T given the chance hypothesis H (which includes chance + necessity). It’s a fraction.

    Sure.

    Now if we rule out chance + necessity, as keiths claims we do, then the value of H is 0, is it not?

    The value of H? In Dr Dembski’s paper? H is the chance hypothesis. The value of H does not make sense. The probability of H kind of makes sense. Dr Dembski talks about the value of P(T|H), which is the probability of T given H. (Which is what you said above.) Is that what you mean? If the value of P(T|H) were zero then Dr Dembski’s measure of complexity would come out to be infinity. Which is not good.

    Generally P(T|H) is going to be very, very small, but not 0.
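    This point can be illustrated directly: the −log2 in Dembski’s complexity measure grows without bound as the probability shrinks, and is undefined at exactly zero. A quick sketch:

```python
from math import log2

# -log2(p) diverges as p -> 0, so P(T|H) = 0 would make the measure infinite.
for p in (1e-10, 1e-100, 1e-300):
    print(p, -log2(p))  # the second column grows without bound

# At exactly zero the logarithm is undefined: math.log2 raises ValueError.
try:
    log2(0.0)
except ValueError:
    print("P(T|H) = 0 is not allowed")
```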

    So please express that fraction for us and explain how it makes sense. Do you really think that is what Dembski is saying?

    I think Dr Dembski is very clear in what he is saying. You’ve read his paper. What do you think? I cannot possibly give you a value for P(T|H) without a particular example in mind. Dr Dembski goes through some particular examples and I’m happy to discuss those, or any other concrete example you wish to bring up. Just let me know.

    Mung (573):

    The one that is supposedly on The Evolution of Recombination?

    Click on the PDF link.

    Yes, but it’s a pay-per-view thing. Since you’ve read the paper I thought you must have paid for it.

  571. To Mark Frank (at TSZ):

    No. I have definite personal opinions on each of you, although I am sure you are completely wrong about circularity.

    I believe that you are sincere and I respect you for that, but at the same time I must say that you are seriously confused about many fundamental cognitive issues.

    I have respect for Zachriel’s intelligence, but sometimes he has a few opportunistic positions that are not completely enjoyable. However, he usually keeps some dignity.

    I believe that Keiths is a liar. A smart liar, but a liar just the same.

    I take full responsibility for each of these judgements, which are obviously based on a vast experience of your different behaviours, and certainly not only on your wrong ideas about circularity.

    You asked for a sincere statement, I believe, and this is it.

  572. Jerad:

    Yes, but it’s a pay-per-view thing.

    That’s strange. Maybe you only have to pay if you are in the UK. =P

    The Full Text of this article is available as a PDF (3.2M).

  573. To Zachriel (at TSZ):

    He says that everything with dFSCI for which the origin is known is designed, but that’s not the case with evolved sequences—unless you have apriori rejected the conclusions of evolutionary science. And that is the very thing he is attempting to show.

    Zachriel, be serious! I only assume that the origin of biological sequences is not historically known. I am not rejecting a priori “the conclusions of evolutionary science”. I don’t accept them “a posteriori”, for very definite reasons.

    My only assumption, however, in defining dFSCI, is that some strings have a known, unquestionable origin: human designed strings, such as language and software, and strings that have certainly been generated in a random system.

    I think that it would be simply fair that all of us assume that the origin of biological strings is controversial, given that we are debating exactly that.

    So, my point is clear. dFSCI is tested against strings whose origin we know in an uncontrovertible historical way. With those strings, its specificity is 100%. It cannot, obviously, be tested against biological strings because they are the controversial issue.

    I use, instead, dFSCI to infer design for the controversial issue, biological strings, confiding in the absolute specificity it has in all known cases.

    It is, as I have always said, an inference by analogy.

    The exclusion of NS as a credible deterministic explanation is another controversial issue. It is so controversial that we have spent a lot of time, constructive time, debating it. I have my opinions, you have yours. But I would never affirm that the issue is not controversial.

    Only darwinists have the dogmatic attitude to believe that their scientific theories are absolute truth, and that everyone has to accept them.

  574. gpuccio:

    When I see ice melting, I infer the temperature is above 32F.

    Do I have to know the exact temperature and pressure?

    Do I need to know why the temperature is what it is and what caused it to be that way? Do I even have to know why ice melts at 32F?

    These people are just boneheaded ideologues.

  575. To Zachriel (at TSZ):

    That weakens your induction considerably. You only include the uncontroverible in the class, and exclude the plausible;

    Sometimes I wonder if you really read what I write. I quote myself again:

    “So, my point is clear. dFSCI is tested against strings whose origin we know in an uncontrovertible historical way. With those strings, its specificity is 100%. It cannot, obviously, be tested against biological strings because they are the controversial issue.”

    What is not clear in the word “tested” (emphasized)?

    Testing has nothing to do with induction. I blindly test my marker of design against “strings whose origin we know in an uncontrovertible historical way”. That’s the obvious procedure for testing the specificity (and sensitivity) of a diagnostic tool. Nobody would test a diagnostic tool against “plausible” things when it can be done with “certain” things.

    So, why do you say “That weakens your induction considerably. You only include the uncontroverible in the class, and exclude the plausible;”? That means absolutely nothing. Again, I am testing the inductive power of my tool. There is nothing to weaken. I am only measuring its specificity.

    When I use the tool on objects whose origin is controversial, there I am making a true induction, which could be true or false (like all inductions). That’s how all inferential science works.

    Sometimes I wonder if you really read what I write.

    and there are good reasons to think that biological structures are inherently different than the rest of the class.

    This is just your opinion. I don’t agree. But it is true that the inference, as I have said many times, is an inference by analogy. If you can demonstrate why “biological structures are inherently different than the rest of the class”, and how that “difference” can explain the spontaneous emergence of dFSCI, that would be a point for you.

    The induction just doesn’t work.

    In your opinion. You have given me no credible argument to believe such a thing.

    You’re just back to arguing the plausibility of evolution,

    Absolutely! I have always done that.

    something strongly supported by the vast majority of biologists.

    Please, not another argument in favour of conformist thought! Not from you.

    Just to be clear:

    Do I know that I am in a minority, and that the “vast majority of biologists” disagrees with me?

    Obviously I know.

    Do I really think that the “vast majority of biologists” is completely wrong on these points?

    Yes.

    Do I feel bad for disagreeing with the “vast majority of biologists”?

    Absolutely not.

    Do I think that I am better than the “vast majority of biologists”?

    No, but I do think that what I believe is better.

  576. Mung (578):

    That worked! Thanks!!

  577. To Keiths (at TSZ):

    Sorry, gpuccio. I know you won’t like hearing that, but it’s the truth.

    Why should you be sorry? I do like hearing that. It’s the only post with some sense you have made in the last few days!

    Only, it should be expressed this way:

    “a) We examine the gene and each of us independently determines that it contains 1.4 zillion bits of dFSI, well above the threshold for dFSCI, so we move on to criterion b.

    b) We examine the known “necessity mechanisms”, including Darwinian evolution. You decide that none of them (including evolution) could have produced the gene, so you declare that it has dFSCI. I decide that the gene could have evolved, so I declare that it doesn’t have dFSCI. It’s controversial, as you said above.”

    OK. That’s fine. At last, no more silly arguments about circularity. You are essentially correct. Maybe you have understood, or you have decided that lying does not pay, in the long term. Let’s go on.

    “We thus have two kinds of dFSCI: dFSCI.ID and dFSCI.neo darwinism.”

    Excuse me if I remind you that it’s not only a question between you and me, but between two different scientific theories, and two different groups of people.

    “If something has dFSCI.ID, it means that
    1) it is too complex to have come about by pure random variation without selection; and
    2) ID, and all those who are convinced by the theory and by its criticism of the proposed neo-darwinian explanation, infer design for that something.”

    Please, remember that the whole purpose of dFSCI evaluation is to infer design (or not).

    “If something has dFSCI.neo Darwinism, it means that
    1) it is too complex to have come about by pure random variation without selection; and
    2) Neo darwinists believe that a mechanism based on RV + NS can explain that something: and they have the scientific burden to show that explicit mechanism, not only to dream of it.”

    IOWs, ID and neo Darwinism are two different explanatory theories, in full competition to explain biological information.

    Now, the point is, if at least we could agree on point 1), we could go on discussing point 2), like all civil scientific theorists should do. Instead, neo darwinists try in every way to deny point 1), or to affirm that ID is not science. IOWs, they try desperately to evade point 2). Guess why?

    “Back to our gene. ID theorists say it has dFSCI, and neo darwinists say it doesn’t. How do we break the impasse and decide whether it really has dFSCI? The only way we can resolve the dispute is to determine, once and for all, whether any explicit mechanism proposed by neo darwinists for that “something” really exists and can work. If neo darwinists cannot propose any explicit neo darwinian mechanisms, the two positions remain as follows:

    a) ID theorists, consistently with their scientific explanatory theory, attribute dFSCI to the “something”. That is absolutely necessary for them, because no explicit and verifiable explanation has been proposed by neo darwinists.

    b) Neo darwinists do not accept that (their choice), and go on dreaming.

    Please, note that I have deleted the phrase that said:

    And we have to do that before we can attribute dFSCI to it.

    This is the usual lie. The opposite is true, according to ID theory. Once we have agreed that point 1) is true, it is the burden of neo darwinists to show that their proposed mechanism can really explain what we observe. Otherwise, it is not a “known explanatory mechanism” at all. An explanatory mechanism for the object we are observing must be explicit and verifiable. So, the only way we could phrase it correctly is:

    “Neo darwinists have to propose an explicit and verifiable mechanism to explain what we are observing, otherwise design is inferred on the basis of point 1) (which we have agreed upon) and of the lack of any proposed explanatory mechanism (point 2).”

    then what good is dFSCI?

    Immense good. Once we agree on point 1), design becomes the best explanation, unless an explicit explanation is available. That is not only because no explanation of other kinds is available. IOWs, it is not only a negative fact. If that were the case, we could just say: we have no explanation for that “something”.

    But that is not the case. The great strength of dFSCI is its positive aspect, its 100% specificity in detecting design in all known cases.

    That is the true engine of the design inference. We infer design not only because no other explanation is known, but because the observed object has lots of dFSI, and that is notoriously a marker of design. That simple point is what you try each time to disguise.

    IOWs, we don’t infer design for any “something” we observe in nature, for which we have no detailed explanation. We only infer design for objects (strings) that have a function, a function that needs a lot of minimal complexity to be implemented and expressed. That is the main marker of design, the positive part of ID.

    Showing that no mechanism is known that can explain what we observe is only a corollary, a necessary safeguard. That is the “negative” part of ID theory, a comprehensive and convincing criticism of the proposed neo-darwinian explanation.

    So dFSCI answers a question that no one is asking, and it contributes nothing to answering the question we actually are asking, which is: Could this gene have evolved?

    Lies again. A pity, after a good start.

    dFSCI answers a question that we all, in ID, are asking: should we infer design for this object?

    I agree that nobody in the opposite field is asking that question: they strictly avoid even admitting that the question can be asked!

    The second part of the question, for us in ID, is point 2): is a credible mechanism known that can explain the string? The answer for us is simple: it does not exist, because no real explicit and verifiable mechanism has been proposed for this string. So, we infer design.

    Neo darwinists, instead, go on dreaming.

  578. To all here:

    You may notice how Keiths’ “argument” has apparently shifted from “dFSCI is circular” to “dFSCI is useless”. Guess why?

    Maybe the next “evolution” of his “thinking” will be “dFSCI is simply unpleasant”!

  579. Jerad:

    I’m not quite clear what you mean by objective subjectivity but I’ll try and skim through some of the previous posts on this thread before I comment on that.

    I think you deserve a little help for that :)

    a) Conscious representations are the basis of all our subjective experience. Therefore, conscious representations are facts, indeed more certain facts than anything else.

    b) Consciousness, as intuited directly by ourselves in ourselves, is a direct perception. The fact that we are conscious is the mother of all facts.

    c) Therefore, both consciousness and conscious representations can safely, indeed must, be used to build our map of reality, as purely empirical facts.

    d) Therefore, the existence of subjective experiences, and all they imply, is an objective part of our map of reality.

    IOWs, there is no need to eliminate “subjectivity” (the existence of subjects and of subjective representations) from our objective map of reality. Indeed, the opposite is true. Eliminating consciousness as an independent empirical fact has generated maps of reality that are utterly unrealistic and useless.

    I use the objective fact of consciousness in my reasoning in two different places:

    1) Design is defined by me as the process by which conscious intelligent purposeful representations are inputted into a material object, giving it some specific form. Therefore, a designer is necessarily a conscious intelligent being, and the process of design always originates in conscious representations.

    2) A conscious intelligent observer is required to recognize, and then define objectively the function for which we will calculate dFSCI. That is absolutely necessary, because function (purpose) and meaning are concepts that cannot be even defined out of conscious representations. Indeed, they are conscious representations, and nothing else.

    Any attempt at defining meaning or function apart from a conscious observer is, IMO, destined to fail.

    But there is no problem: conscious observers are part of objective reality (facts), therefore why shouldn’t we use them in our definitions and procedures?

  580. These guys are clueless. Even after it has been explained why the OoL is important, they just ignore that explanation and blather on.

    So here it is AGAIN:

    The OoL directly impacts any subsequent evolution because if the OoL was by design then the inference is organisms were designed to evolve/ evolved by design. As DAWKINS said we would be looking at a totally different type of biology (even though we would be looking at the same thing).

    That means the ONLY way the blind watchmaker has sole dominion over evolution is if the blind watchmaker had sole dominion over the OoL.

  581. OMTWO:

    Joe says knowing something was designed enables you to “look at it in a different way”.

    Not only Joe but EVERYONE who has ever conducted a proper investigation. Making a design inference changes the investigation. That is just a fact of life.

  582. Zachriel equivocates:

    Evolution can create functional complexity.

    Intelligent Design evolution, yes. Blind watchmaker evolution, no.

    If you think that is incorrect then please present one peer-reviewed paper that demonstrates blind and undirected chemical processes can create functional complexity.

    My prediction is if you do so it will be more equivocation because it will not deal with blind and undirected processes, but I challenge you to give it a go.

  583. keiths:

    6. To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process.

    Nope. CSI exists whether it arose via agency or via blind and undirected processes.

    It is just that to date, in our entire history, we have only observed CSI arising via agency involvement. This is true 100% of the time, with no exceptions.

    That said, if someone could demonstrate CSI arising from blind and undirected processes, the CSI does NOT vanish, but the design inference wrt CSI does.

    This has been made very clear since Dembski first wrote of CSI, yet the mental midget evos still cannot grasp that simple fact. Why is that?

  584. OMTWO:

    Testing is a large part of science.

    And your position fails because it cannot be tested. And that bothers you so you lash out with your ignorance at your opposition.

  585. gpuccio (585):

    I’m not quite clear what you mean by objective subjectivity but I’ll try and skim through some of the previous posts on this thread before I comment on that.

    I think you deserve a little help for that

    Thanks for the explanation. Your beginning reminds me of Descartes’ “I think therefore I am” but, not being a very good philosopher, after that I find that I’m not used to thinking about perception and consciousness enough to offer any further comment. I shall reread and rethink though!

  586. Joe:

    This has been made very clear since Dembski first wrote of CSI, yet the mental midget evos still cannot grasp that simple fact. Why is that?

    A very good summary, and a very good question, that I certainly support with all my heart!

  587. gpuccio:

    Maybe the next “evolution” of his “thinking” will be “dFSCI is simply unpleasant”!

    No. He’ll claim he never gave up on the idea that it was circular and go back to arguing that.

  588. 594

    589 and 592

    I might add… they cannot even get to the “I” in CSI.

    It is a virtually intractable problem from a materials standpoint. They know this. So it is simply taken for granted.

    (shrug)

    So much for materialism being based upon the material.

  589. Jerad:

    The value of H? In Dr Dembski’s paper? H is the chance hypothesis. The value of H does not make sense. The probability of H kind of makes sense.

    Were you sober when you posted this? Is there some mysterious mathematical meaning of value that I am not aware of?

    Jerad:

    Generally P(T|H) is going to be very, very small, but not 0.

    For the probability to be very, very small implies that we’ve divided the numerator by the denominator to arrive at the value. The numerator and denominator are both values. The value of the denominator cannot be zero.

    Therefore, either Dembski is proposing we divide by zero in his formula, or keiths is lying. I’ve opted to believe the latter.

    Why am I wrong?

    Jerad:

    Generally P(T|H) is going to be very, very small, but not 0.

    Which is precisely what I have been arguing, so why on ear