Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 11: Borel’s Infinite Monkeys analysis and the significance of the log reduced Chi metric, Chi_500 = I*S – 500


 (Series)

Emile Borel, 1932

Emile Borel (1871 – 1956) was a distinguished French mathematician who, as the son of a Protestant minister, came from France's Protestant minority, and he was a founder of measure theory in mathematics. He was also a significant contributor to modern probability theory, so Knobloch's observation on his approach is apt:

>>Borel published more than fifty papers between 1905 and 1950 on the calculus of probability. They were mainly motivated or influenced by Poincaré, Bertrand, Reichenbach, and Keynes. However, he took for the most part an opposed view because of his realistic attitude toward mathematics. He stressed the important and practical value of probability theory. He emphasized the applications to the different sociological, biological, physical, and mathematical sciences. He preferred to elucidate these applications instead of looking for an axiomatization of probability theory. Its essential peculiarities were for him unpredictability, indeterminism, and discontinuity. Nevertheless, he was interested in a clarification of the probability concept. [Emile Borel as a probabilist, in The probabilist revolution Vol 1 (Cambridge Mass., 1987), 215-233. Cited, Mac Tutor History of Mathematics Archive, Borel Biography.]>>

Among other things, he is credited as the worker who introduced a serious mathematical analysis of the so-called Infinite Monkeys theorem (more on that in a moment).

So, it is unsurprising that Abel, in his recent universal plausibility metric paper, observed that:

Emile Borel's limit of cosmic probabilistic resources [c. 1913?] was only 10^50 [[23] (pg. 28-30)]. Borel based this probability bound in part on the product of the number of observable stars (10^9) times the number of possible human observations that could be made on those stars (10^20).

This figure, of course, has since been expanded, given the breakthroughs in astronomy occasioned by the Mt Wilson 100-inch telescope under Hubble in the 1920s. However, it does underscore how centrally important the issue of available resources is in rendering a given — logically and physically strictly possible but utterly improbable — potential chance-based event reasonably observable.

We may therefore now introduce Wikipedia as a hostile witness, testifying against known ideological interest, in its article on the Infinite Monkeys theorem:

In one of the forms in which probabilists now know this theorem, with its "dactylographic" [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), it appeared in Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" (Statistical mechanics and irreversibility),[3] and in his book "Le Hasard" in 1914. His "monkeys" are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel’s image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[4]

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Let us emphasise that last part, as it is so easy to overlook in the heat of the ongoing debates over origins and the significance of the idea that we can infer to design on noticing certain empirical signs:

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Why is that?

Because of the nature of sampling from a large space of possible configurations. That is, we face a needle-in-the-haystack challenge.

For, there are only so many resources available in a realistic situation, and only so many observations can therefore be actualised in the time available. As a result, if one is confined to a blind probabilistic, random search process, s/he will soon enough run into the issue that:

a: IF there is a narrow and atypical set of possible outcomes T, that

b: may be described by some definite specification Z (that does not boil down to listing the set T or the like), and

c: which comprises a set of possibilities E1, E2, . . . En, from

d: a much larger set of possible outcomes, W, THEN:

e: IF, further, we do see some Ei from T, THEN also

f: Ei is not plausibly a chance occurrence.

The reason for this is not hard to spot: when a sufficiently small, chance-based, blind sample is taken from a set of possibilities, W — a configuration space — the likeliest outcome is that what is typical of the bulk of the possibilities will be chosen, not what is atypical. And this is the foundation-stone of the statistical form of the second law of thermodynamics.

Hence, Borel’s remark as summarised by Wikipedia:

Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

In recent months, here at UD, we have described this in terms of searching for a needle in a vast haystack [corrective u/d follows]:

Let us work back from how it takes ~ 10^30 Planck-time states for the fastest chemical reactions, and use this as a yardstick: in 10^17 s, our solar system's 10^57 atoms would undergo ~ 10^87 "chemical time" states, about as fast as anything involving atoms could happen. That is only 1 in 10^63 of the ~10^150 possibilities associated with 500 bits. So, let's do an illustrative haystack calculation:

Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up about 10^57 cubic metres. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, so the stack would be roughly 1,000 light years on a side. If we were to superpose such a notional 1,000-light-years-on-the-side haystack on the zone of space centred on the sun, and leave in all stars, planets, comets, rocks, etc., and take a random sample equal in size to one straw, by absolutely overwhelming odds we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
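For readers who want to check the arithmetic, here is a minimal Python sketch of the order-of-magnitude estimates in the clip above (the reaction timescale, atom count, time budget and straw figures are the clip's illustrative assumptions, not measured values):

```python
# Order-of-magnitude check of the needle-in-a-haystack illustration above.
# All figures are the illustrative assumptions from the clip, not measurements.
import math

PLANCK_TIME_S    = 5.4e-44   # seconds
FAST_REACTION_S  = 5.4e-14   # assumed timescale of the fastest chemical reactions
SOLAR_ATOMS      = 1e57      # assumed atoms in the solar system
TIME_BUDGET_S    = 1e17      # the 10^17 s time budget used in the clip

planck_states_per_reaction = FAST_REACTION_S / PLANCK_TIME_S        # ~ 10^30
chemical_states = SOLAR_ATOMS * TIME_BUDGET_S / FAST_REACTION_S     # ~ 10^87

print(f"Planck-time states per fast reaction: ~10^{math.log10(planck_states_per_reaction):.0f}")
print(f"Chemical-time states available:       ~10^{math.log10(chemical_states):.0f}")
print(f"Fraction of 2^500 (~10^150) sampled:  ~10^{math.log10(chemical_states / 2**500):.0f}")

# The haystack: a straw of ~1 g at roughly the density of water (1 g/cm^3)
grams_of_hay = 1e63                      # i.e. 10^57 tonnes
volume_m3 = grams_of_hay * 1e-6          # 1 g of water ~ 1 cm^3 = 1e-6 m^3
side_m = volume_m3 ** (1.0 / 3.0)        # cube side, ~1e19 m
LIGHT_YEAR_M = 9.46e15
print(f"Haystack side: ~{side_m:.1e} m = ~{side_m / LIGHT_YEAR_M:.0f} light years")
```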

With this in mind, we may now look at the Dembski Chi metric, and reduce it to a simpler, more practically applicable form:

m: In 2005, Dembski provided a fairly complex formula, which we can quote and simplify:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)], where χ is "chi" and ϕ is "phi"

n:  To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from.)

o: So, since 10^120 ~ 2^398, we may do some algebra, as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits

Chi = Ip – (398 + K2), where log2(D2) = K2

p: But since 398 + K2 tends to be at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! (if you want, 1,000 bits would be a limit for the observable cosmos)], and

q: as we can define a dummy variable for specificity, S, where S = 1 or 0 according to whether the observed configuration, E, is, on objective analysis, specific to a narrow and independently describable zone of interest, T:

Chi_500 = Ip*S – 500, in bits beyond a "complex enough" threshold

(If S = 0, Chi = – 500; and if Ip is less than 500 bits, Chi will be negative even if S = 1. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [notice the independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive.)
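To make the reduced metric concrete, here is a small Python sketch of the computation (the function name and the worked coin examples are mine, added for illustration; the 500-bit threshold and the dummy variable S are exactly as defined above):

```python
def chi_500(i_bits: float, s: int) -> float:
    """Log-reduced Chi metric: bits of functionally specific information
    beyond the 500-bit solar-system threshold (Chi_500 = Ip*S - 500)."""
    assert s in (0, 1), "S is a dummy variable: 1 if specific, 0 otherwise"
    return i_bits * s - 500

# 501 coins tossed at random: ~501 bits of raw information, but no
# independent specification, so S = 0 and the metric stays at -500.
print(chi_500(501, s=0))   # -500

# The same 501 bits arranged to spell an English message in ASCII:
# an independently describable narrow zone T, so S = 1.
print(chi_500(501, s=1))   # 1 bit beyond the threshold
```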

r: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was intelligently designed. (For instance, no-one would dream of asserting seriously that the English text of this post is a matter of chance occurrence giving rise to a lucky configuration, a point that was well-understood by that Bible-thumping redneck fundy — NOT! — Cicero in 50 BC.)

s: The metric may be directly applied to biological cases:

t: Using Durston's Fits values — functionally specific bits — from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity (giving S = 1), we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
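A quick sketch of that application in Python, using only the three examples quoted above (Durston's fits values as the information measure, and S = 1 on grounds of functional specificity):

```python
# Durston-style fits (functional bits) for the three protein families
# quoted above; functionality on specific sequences gives S = 1, so
# Chi_500 reduces to fits - 500.
durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

for protein, fits in durston_fits.items():
    chi = fits * 1 - 500          # Chi_500 = Ip*S - 500 with S = 1
    print(f"{protein:10s} {chi:+d} bits beyond the 500-bit threshold")
# RecA       +332 bits beyond the 500-bit threshold
# SecY       +188 bits beyond the 500-bit threshold
# Corona S2  +785 bits beyond the 500-bit threshold
```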

u: And this raises the controversial possibility that biological examples such as DNA — which in a living cell stores far more than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.

v: Therefore, we have at least one possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [FSCO/I].

But, but, but . . . isn't "natural selection" precisely NOT a chance-based process, so doesn't the ability to reproduce in environments, adapt to new niches and then dominate the population make nonsense of such a calculation?

NO.

Why is that?

Because of the actual claimed source of variation (which is often masked by the emphasis on "selection") and the scope of innovations required to originate functionally effective body plans, as opposed to merely varying them — starting with the very first one, i.e. Origin of Life, OOL.

But that’s Hoyle’s fallacy!

Advice: when going up against a Nobel-equivalent prize-holder whose field requires expertise in mathematics and thermodynamics, one would be well advised to examine carefully the underpinnings of what is being said, not just the rhetorical flourish about tornadoes in junkyards in Seattle assembling 747 Jumbo Jets.

More specifically, the key concept of Darwinian evolution [we need not detain ourselves too much on debates over mutations as the way variations manifest themselves], is that:

CHANCE VARIATION (CV) + NATURAL “SELECTION” (NS) –> DESCENT WITH (UNLIMITED) MODIFICATION (DWM), i.e. “EVOLUTION.”

CV + NS –> DWM, aka Evolution

If we look at NS, this boils down to differential reproductive success in environments leading to elimination of the relatively unfit.

That is, NS is a culling-out process, a subtract-er of information, not the claimed source of information.

That leaves only CV, i.e. blind chance, manifested in various ways. (And of course, in anticipation of some of the usual side-tracks, we must note that the Darwinian view, as modified through the genetic mutations concept and population genetics to describe how population fractions shift, is the dominant view in the field.)

There are of course some empirical cases in point, but in all these cases, what is observed is fairly minor variations within a given body plan, not the relevant issue: the spontaneous emergence of such a complex, functionally specific and tightly integrated body plan, which must be viable from the zygote on up.

To cover that gap, we have a well-known metaphorical image — an analogy, the Darwinian Tree of Life. This boils down to implying that there is a vast contiguous continent of functionally possible variations of life forms, so that we may see a smooth incremental development across that vast fitness landscape, once we had an original life form capable of self-replication.

What is the evidence for that?

Actually, nil.

The fossil record, the only direct empirical evidence of the remote past, is notoriously that of sudden appearances of novel forms, stasis (with some variability within the form obviously), and disappearance and/or continuation into the modern world.

If by contrast the tree of life framework were the observed reality, we would see a fossil record DOMINATED by transitional forms, not the few strained examples that are so often triumphalistically presented in textbooks and museums.

Similarly, it is notorious that fairly minor variations in the embryological development process are easily fatal. No surprise: if we have a highly complex, deeply interwoven interactive system, chance disturbances are overwhelmingly going to be disruptive.

Likewise, complex, functionally specific hardware is not designed and developed by small, chance based functional increments to an existing simple form.

Hoyle’s challenge of overwhelming improbability does not begin with the assembly of a Jumbo jet by chance, it begins with the assembly of say an indicating instrument on its cockpit instrument panel.

The D'Arsonval galvanometer movement commonly used in indicating instruments is an adaptation of a motor: it runs against a spiral spring (to give proportionality of deflection to input current across the magnetic field) and has an attached needle moving across a scale. Such an instrument, historically, was often adapted for measuring all sorts of quantities on a panel.

(Indeed, it would be utterly unlikely for a large box of mixed nuts and bolts, by chance shaking, to bring together a matching nut and bolt and screw them together tightly; the first step to assembling the instrument by chance.)

Further to this, it would be bad enough to try to get together the text strings for a Hello World program (let's leave off the implementing machinery and software that make it work) by chance. To then incrementally create an operating system from it, with each small step along the way being functional, would be a bizarre, operationally impossible super-task.
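To put a rough number on the first part of that, here is a hedged back-of-the-envelope sketch in Python (assuming, purely for illustration, 7 bits per ASCII character and counting only the program text, not the compiler, operating system or hardware underneath):

```python
# Back-of-the-envelope count for the text of a minimal C "Hello World",
# at 7 bits per ASCII character (an illustrative assumption; the machinery
# that makes it run is left out, as in the paragraph above).
hello_c = (
    '#include <stdio.h>\n'
    'int main(void) {\n'
    '    printf("Hello, world!\\n");\n'
    '    return 0;\n'
    '}\n'
)
bits = 7 * len(hello_c)
print(f"{len(hello_c)} characters -> {bits} bits of raw storage")
# Even this five-line program already exceeds the 500-bit threshold
# on a raw character count, before any supporting machinery is included.
```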

So, the real challenge is that those who have put forth the tree of life, continent-of-function type approach have got to show empirically that their step-by-step paths up the slopes of Mt Improbable are observable, at least in reasonable model cases. And they need to show that, in effect, chance variations on a Hello World will lead, with reasonable plausibility, to a stepwise development that transforms the Hello World into something fundamentally different.

In short, we have excellent reason to infer that — absent empirical demonstration otherwise — complex, specifically functional, integrated organisation arises in clusters that are atypical of the general run of the vastly larger set of physically possible configurations of components. And the strongest pointer that this is plainly so for life forms as well is the detailed, complex, step-by-step, information-controlled nature of the processes in the cell that use information stored in DNA to make proteins. Let's call Wiki as a hostile witness again, courtesy two key diagrams:

I: Overview:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA

II: Focusing on the Ribosome in action for protein synthesis:

The Ribosome, assembling a protein step by step based on the instructions in the mRNA “control tape” (the AA chain is then folded and put to work)

Clay animation video [added Dec 4]:

[youtube OEJ0GWAoSYY]

More detailed animation [added Dec 4]:

[vimeo 31830891]

This sort of elaborate, tightly controlled, instruction based step by step process is itself a strong sign that this sort of outcome is unlikely by chance variations.

(And attempts to deny the obvious, that we are looking at digital information at work in algorithmic, step-by-step processes, are themselves a sign that there is a controlling a priori at work that must lock out the very evidence before our eyes in order to succeed. The above is not intended to persuade such objectors; they are plainly not open to evidence, so we can only note how their position reduces to patent absurdity in the face of evidence, and move on.)

But, isn’t the insertion of a dummy variable S into the Chi_500 metric little more than question-begging?

Again, NO.

Let us consider a simple form of the per-aspect explanatory filter approach:

The per aspect design inference explanatory filter

 

You will observe two key decision nodes, where the first default is that the aspect of the object, phenomenon or process being studied is rooted in a natural, lawlike regularity that under similar conditions will produce similar outcomes, i.e. there is a reliable law of nature at work, leading to low contingency of outcomes. A dropped, heavy object near earth's surface will reliably fall with initial acceleration g, 9.8 m/s^2. Such lawlike behaviour with low contingency can be empirically investigated and would eliminate design as a reasonable explanation.

Second, we see some situations where there is a high degree of contingency of possible outcomes under similar initial circumstances. This is the more interesting case, and in our experience it has two candidate mechanisms: chance, or choice. The default for S under these circumstances is 0. That is, the presumption is that chance is an adequate explanation, unless there is a good — empirical and/or analytical — reason to think otherwise. In short, on investigation of the dynamics of volcanoes and our experience with them, rooted in direct observations, the complexity of a Mt Pinatubo is explained on a combination of natural laws and chance variations; there is no need to infer to choice to explain its structure.

But, if the observed configurations of highly contingent elements come from a narrow and atypical zone T not credibly reachable based on the search resources available, then we would be objectively warranted to infer to choice. For instance, a chance-based text string of length equal to this post would overwhelmingly be gibberish, so we are entitled to note the functional specificity at work in the post, and assign S = 1 here.
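A bare-bones sketch of that decision logic in Python (the function, its argument names and the worked examples are illustrative labels of my own for the two decision nodes, not a published algorithm):

```python
def per_aspect_filter(low_contingency: bool, i_bits: float, s: int,
                      threshold: float = 500) -> str:
    """Toy version of the per-aspect explanatory filter described above.

    low_contingency -- aspect reliably follows a lawlike regularity
    i_bits          -- information measure Ip for the observed configuration
    s               -- specificity dummy variable (1 = independently
                       describable narrow zone T, 0 = not specific)
    """
    if low_contingency:
        return "law (mechanical necessity)"            # first decision node
    if i_bits * s - threshold <= 0:                    # Chi_500 not positive
        return "chance (default for high contingency)"
    return "design (choice) as best explanation"       # second decision node

print(per_aspect_filter(True, 0, 0))      # dropped heavy object: law
print(per_aspect_filter(False, 501, 0))   # 501 random coins: chance
print(per_aspect_filter(False, 900, 1))   # 900-bit ASCII message: design
```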

So, the dummy variable S is not a matter of question-begging, never mind the usual dismissive talking points.

I is of course an information measure based on standard approaches, through the sort of probabilistic calculations Hartley and Shannon used, or by a direct observation of the state-structure of a system [e.g. on/off switches naturally encode one bit each].

And, where an entity is not a direct information-storing object, we may reduce it to a mesh of nodes and arcs, then investigate how much variation can be allowed while still retaining adequate function; i.e. a key and lock can be reduced to a bit measure of implied information, and a sculpture like that at Mt Rushmore can similarly be analysed, given the specificity of portraiture.
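A hedged illustration of those two styles of counting (the switch count and the lock's pin counts and depths are invented for the example, not taken from any real device):

```python
import math

# Direct state-structure counting: n independent on/off switches have
# 2**n distinguishable states, i.e. log2(2**n) = n bits.
switches = 24
print(math.log2(2 ** switches))        # 24.0 bits

# Implied information in a lock-and-key fit: 6 pins, each of which must
# sit at one of 10 distinguishable depths for the key to turn.
pins, depths = 6, 10
print(math.log2(depths ** pins))       # ~19.9 bits implied by a working key
```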

The 500 is a threshold, related to the limits of the search resources of our solar system, and if we want more, we can easily move up to the 1,000 bit threshold for our observed cosmos.

On needle-in-a-haystack grounds, or monkeys-strumming-at-keyboards grounds, if we are dealing with functionally specific, complex information beyond these thresholds, the best explanation for seeing such is design.

And, that is abundantly verified by the contents of say the Library of Congress (26 million works) or the Internet, or the product across time of the Computer programming industry.

But, what about Genetic Algorithms etc, don’t they prove that such FSCI can come about by cumulative progress based on trial and error rewarded by success?

Not really.

As a rule, such are about generalised hill-climbing within islands of function characterised by intelligently designed fitness functions with well-behaved trends, and controlled variation within equally intelligently designed search algorithms. They start within a target zone T, by design, and proceed to adapt incrementally based on built-in, designed algorithms.
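For concreteness, here is a minimal Weasel-style hill-climber (my own illustrative sketch in Python, not code from any GA package). Note that the target phrase and the fitness function are both supplied by the programmer before the run starts, which is the point being made here about starting within a designed island of function:

```python
import random

# The programmer supplies both the target zone T and the fitness function
# before any "evolution" begins: the island of function is built in.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
random.seed(0)

def fitness(s: str) -> int:
    """Designer-defined fitness: count of letters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Controlled, small random variation of an existing candidate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while phrase != TARGET:
    generation += 1
    # cumulative selection: keep the fittest of the parent plus 100 offspring
    phrase = max([phrase] + [mutate(phrase) for _ in range(100)], key=fitness)
print(generation, phrase)   # typically converges within a few hundred generations
```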

If such a GA were to emerge from a Hello World by incremental chance variations that worked as programs in their own right every step of the way, that would be a different story, but for excellent reason we can safely include GAs in the set of cases where FSCI comes about by choice, not chance.

So, we can see what the Chi_500 expression means, and how it is a reasonable and empirically supported tool for measuring complex specified information, especially where the specification is functionally based.

And, we can see the basis for what it is doing, and why one is justified to use it, despite many commonly encountered objections. END

________

F/N, Jan 22: In response to a renewed controversy tangential to another blog thread, I have redirected discussion here. As a point of reference for background information, I append a clip from the thread:

. . . [If you wish to find] basic background on info theory and similar background from serious sources, then go to the linked thread . . . And BTW, Shannon’s original 1948 paper is still a good early stop-off on this. I just did a web search and see it is surprisingly hard to get a good simple free online 101 on info theory for the non mathematically sophisticated; to my astonishment the section A of my always linked note clipped from above is by comparison a fairly useful first intro. I like this intro at the next level here, this is similar, this is nice and short while introducing notation, this is a short book in effect, this is a longer one, and I suggest the Marks lecture on evo informatics here as a useful contextualisation. Qualitative outline here. I note as well Perry Marshall’s related exchange here, to save going over long since adequately answered talking points, such as asserting that DNA in the context of genes is not coded information expressed in a string of 4-state per position G/C/A/T monomers. The one good thing is, I found the Jaynes 1957 paper online, now added to my vault, no cloud without a silver lining.

If you are genuinely puzzled on practical heuristics, I suggest a look at the geoglyphs example already linked. This genetic discussion may help on the basic ideas, but of course the issues Durston et al raised in 2007 are not delved into.

(I must note that an industry-full of complex praxis is going to be hard to reduce to an in-a-nutshell summary. However, we are quite familiar with information at work, and how we routinely measure it as in say the familiar: "this Word file is 235 k bytes." That such a file is exceedingly functionally specific can be seen by the experiment of opening one up in an inspection package that will access raw text symbols for the file. A lot of it will look like repetitive nonsense, but if you clip off such, sometimes just one header character, the file will be corrupted and will not open as a Word file. When we have a great many parts that must be right and in the right pattern for something to work in a given context like this, we are dealing with functionally specific, complex organisation and associated information, FSCO/I for short.)

The point of the main post above is that once we have this, and are past 500 bits or 1000 bits, it is not credible that such can arise by blind chance and mechanical necessity. But of course, intelligence routinely produces such, like comments in this thread. Objectors can answer all of this quite simply, by producing a case where such chance and necessity — without intelligent action by the back door — produces such FSCO/I. If they could do this, the heart would be cut out of design theory. But, year after year, thread after thread, here and elsewhere, this simple challenge is not being met. Borel, as discussed above, points out the basic reason why.

Comments
Can I take it then that you have no way of telling whether any mutation (once it has happened) was the result of a stochastic process or a directed process? I'm quite willing to discuss NS once we've come to some agreement on this.
Bydand
January 31, 2012 at 06:41 AM PDT
And I can't help it that you don't like the answers I provided.
Joe
January 31, 2012 at 06:17 AM PDT
Well, Joe, I see you're determined not to answer a straight question. I was hoping for more. Perhaps someone else could tell us how to determine what sort of process produced a given mutation?
Bydand
January 31, 2012 at 06:00 AM PDT
How does NS "direct", seeing that whatever works good enough survives to reproduce? As I remember, evos are unable to present any positive evidence for their "theory". As for how can we tell- again it all comes down to origins. Other than that we will have to wait until someone unravels the internal programming.
Joe
January 31, 2012 at 05:45 AM PDT
Ah, yes - Spetner and the NREH, where he tells us that mutations are a response to "signals" from the environment. Darwinists say that mutations are stochastic, and NS "directs" the result using information from the environment. As I remember it, Spetner was unable to present any positive evidence for his hypothesis in his book. Be that as it may, I'm telling you I do not know, post facto, how to determine whether or not any mutation was the result of a stochastic process, or of a directed one. So your representation of "my position" is incorrect. You, OTOH, seem very certain that both types of process are operating. I ask again - how do you know, and can you tell the result of one from the result of the other? For instance, all those mutations reported in Lenski's long-term experiments with E coli - were they the result of stochastics, or of directed processes? If only one cell out of a whole bunch of similar cells all enjoying the same environment exhibits a mutation, would this not count against such a mutation being the result of a signal from the environment? Don't get me wrong, Joe - I'm really interested in the science behind this.
Bydand
January 31, 2012 at 05:35 AM PDT
Well Bydand- your position sez they are all random yet no one can tell me how that was determined. But anyway "Not By Chance" by Dr Lee Spetner- it came out in 1997- he goes over this random mutation canard.
Joe
January 31, 2012 at 04:45 AM PDT
Thanks, Joe! And can we tell which genetic variation is directed and which random? If so, how?
Bydand
January 31, 2012 at 04:43 AM PDT
I will begin with the end: what you say in post 51.2.2.1 is correct, and I don't understand how you may still have doubts that I think exactly that, because it was the starting point of all my discussion. The only thing I would clarify is that there is no necessity that the variants be lethal. Neutral variants too cannot bridge those ravines. For two reasons. If they are neutral functional variants of the starting sequence, they simply cannot reach the new island. If they are neutral because they happen in some inactivated gene, then they can certainly reach any possible state, in theory, but empirically it is simply too unlikely that they may reach a functional state.
OK, but it is that last point that is precisely at issue. It is not "simply" too unlikely at all. That's exactly the claim that needs to be backed up, as does the claim that there are no selectable intermediates. In other words, once you have a duplicate, there is no ravine (no lethal loss of function incurred by deactivating variation), and you have no way of knowing whether the ravine is level, or includes upward steps. Or, if you have, you have not presented them.
Confusion again about the methodology of probabilistic inference. A t test, or any equivalent method of statistical analysis, is only a mathematical tool we apply to a definite model. Usually, the model is a two-group comparison for a difference in means, according to the scheme of Fisher's hypothesis testing.
Well, I don't know what your point was in that case. An independent samples t-test tests the null hypothesis that two samples were drawn from the same population, on the assumption that that population has a particular probability distribution. Establishing the probability distributions in your data is key to figuring out the probabilities of certain observations under the null. You can't leave that part out, and both variant generation and variant selection are stochastic processes with probability distributions that need to be ascertained if you are going to make any conclusions about the probability of your observed data under your null.
As you well know, the only role of our t test, or other statistical analysis, is to reject (or not) the null hypothesis, which gives some methodological support to our alternative hypothesis. And we reject our null if we are convinced that our data are not well explained by a random model (because they are too unlikely under that assumption).
Right until your last sentence. That's the part that I dispute - indeed, I cannot parse it. I don't know what "well explained by a random model" means, and that is precisely my objection - your null is not well characterised. Sure, we reject the null if our data are unlikely under our null, but to do that we have to know what exactly our null is. "A random model" does not tell us that. What the null you are interested in is, in fact, the null hypothesis that evolutionary processes are responsible for theh observed data. So to model that null you have to model evolutionary processes. And to do that, for any given biological feature, you have to have either far more data than you actually have, or you have to estimate the probability distributions of those data (for instance, the probability distribution of certain environments favoring certain sequences at certain intermediate times between a posited duplication event and a posited observed different protein. Those are the probability functions you don't present, and can't even accommodate in Fisherian testing - you'd need some kind of Bayesian analysis.
Yes, it was “vague” :) And what you are trying to evade is a quantitative explanation of how protein domains could have emerged.
I'm not trying to evade it at all! I don't know how protein domains emerged, although there is, I understand, some literture on the subject. As I've said, I can think of very few (if any - maybe one or two possible cases) of naturally observed biological features for which we have a quantitative, or even qualitative explanation, and may never have - there are far too many known unknowns as well as unknown unknowns. What we have instead is a theory that explains patterns in the data (those nested hierarchies), as well as mechanisms that can be shown, in the lab and in the field, to produce the predicted effects, both of adaptation, and speciation. Sure there are huge puzzles - the mechanisms of variance production, the evolution of sexual reproduction, the origin of the first Darwinian-capable self-replicators, the origins of some of the most conserved sequenes (hox genes, protein domains). But we won't solve them by saying: this is improbable, therefore it didn't evolve. We attempt to solve them by saying: if this evolved, how did it do so? In other words by treating evolutionary theory not as the null but as H1. Or, more commonly, by comparing alternative explanatory theories. ID doesn't work as H1 unless you characterise the evolutionary null, and neither Dembski nor you attempt to do this. This is because ID is not, in fact, an explanation at all. It is a default.
But I am modeling the theory. I have modeled the random system, and I have also shown how a selectable intermediate can be added to the random model, still having a quantitative evaluation. If more intermediates are known, they can be considered in the same way.
Well, I'd like to see your model, but from what you have told us, it doesn't seem to be a model of evolutionary theory! Is it implemented in code somewhere? What is it written in?
Indeed, even that single intermediate I have assumed does not exist. So, if you want to affirm that basic protein domains evolved by gradual variation bearing differential reproduction, be my guest. Show one example of such a transition. Show what the intermediates were, or could have been. Explain why they were selectable. And then, with real intermediates on the table, we will be able to apply my method to compute if the system is credible.
But why should those putative intermediates still exist? And how can we show that they were "selectable" without knowing the environment in which the population who bore them lived? As I keep saying, you can't model selection without modeling the environment, which includes not only the external environment, but the population itself (and its current allele frequencies) and the genetic environment. It's a dynamic, non-linear system, and trying to model it to explain the origins of protein domains is a bit like trying to explain the ice age by modeling the weather in Thule on some Friday several thousand years ago. In other words, it can't be done. But that doesn't justify the conclusion that "it didn't happen, therefore ID". That's why, if you want to research ID, it needs to be researched as a positive hypothesis, not simply as the default conclusion in the absence of an evolutionary one. Which was why I was interested in the front-loading thread, although I don't think Genomicus' hypothesis works. Anyway, nice to talk to you, even if we never agree on this :) Gotta run. LizzieElizabeth Liddle
January 31, 2012 at 04:06 AM PDT
I gather that you believe that genetic variation is not a stochastic process – that it is directed in some way.
At least some, if not most, but not all- random stuff still happens.
And you also believe that Sanford’s claim of looming genetic meltdown is credible, and backed by good experimental data.
Nope- his claim only pertains to stochastic processes. If living organisms are designed and evolved by design then his claim is moot.
Joe
January 31, 2012 at 03:51 AM PDT
So, Joe... I gather that you believe that genetic variation is not a stochastic process - that it is directed in some way. And you also believe that Sanford's claim of looming genetic meltdown is credible, and backed by good experimental data. I'm a bit befuddled, then, as to just what or who, in your view, is directing this genetic entropy, and why. Is it an inimical intelligent designer? Or are only beneficial mutations "directed"?
Bydand
January 31, 2012 at 01:38 AM PDT
Elizabeth: While I really feel I owe champignon really nothing, and therefore will not answer him any more, your case is completely different. You are sincere and intelligent. So I feel I owe you some final clarifications, and I will give them. But if still you believe that what I say has no sense, I would really leave it to that. I have no problem with the simple fact that you think differently. And I thank you for giving me the chance to express, and detail, my ideas. I will begin with the end: what you say in post 51.2.2.1 is correct, and I don't understand how you may still have doubts that I think exactly that, because it was the starting point of all my discussion. The only thing I would clarify is that there is no necessity that the variants be lethal. Neutral variants too cannot bridge those ravines. For two reasons. If they are neutral functional variants of the starting sequence, they simply cannot reach the new island. If they are neutral because they happen in some inactivated gene, then they can certainly reach any possible state, in theory, but emoirically it is simplly too unlikely that they may reach a functional state. Indeed, I have modeled the emergence of protein domains, specifying that no inmtermediate is known for them, and that therefore their emergence should be modeled at present as a mere effect of RV. However, I have hypothesized how the global probability of a new functional domain could be affected (at most) by the demonstration of a single, fully selectable intermediate with optimal properties. That intermediate indeed is not known, and I do believe that it does not exist, but it was very important for me to show that the existence of selectable intermediates can, if demonstrated, be including in the modeling of global probabilities of a transition, if we reason correctly on the cause effect relationship we have empirically found (or assumed). It seems a very simple, and correct, reasoning to me, but if you don't agree, no problem. Certainly, the reasons why you have said you don't agree, up to now, make no sense for me. Well, I can only disagree, and say that from where I am standing it really does seem as thought it is you who are confused. Or at any rate it is not clear what you are saying. Certainly we can draw conclusions about cause and effects from probability distributes, and we do so every time we conduct a t test. Not only that, but when we model certain causes and effects, we can model them as probability distributions. I gave a clear example: a variant sequence that leads to light colouring on a moth affects the probability distribution of deaths by predation. Confusion again about the methodology of probabilistic inference. A t test, or any equivalent method of statistical analysis, is only a mathemathical tool we apply to a definite model. Usually, the model is a two groups comparison for a difference in means, according to the scheme of Fisher's hypothesis testing. That means, as you well know, that we have a null hypothesis, and we have an alternative hypothesis. The null hypothesis is that what we observe is well explained by random factors (usually random variation due to sampling). But the alterbative hypothesis is a necessity hypothesis: we hypothesize that some specific and explicit cause is acting, with a definite logical explanatory model. As you well know, the only role of our t test, or other statistical analysis, is to reject (or not) the null hypothesis, which gives some methodological support to our alternative hypothesis. 
And we reject our null if we are convinced that our data are not well explained by a random model (because they are too unlikely under that assumption). This is very different from what you say. And it is exactly what I say. Causal relations and probabilistic modeling are two different things, that in the end contribute both to the final explanatory model we propose. And as for evading into “vague” definitions (if that’s what you mean, I’m not sure, given the typos!) well, I’m certainly not trying to evade anything. Yes, it was "vague" :) And what you are trying to evade is a quantitative explanation of how protein domains could have emerged. You may not believe that model is a good fit to the data, but modelling something different, and then showing that your model doesn’t work isn’t going to falsify the theory of evolution because you aren’t modeling the theory of evolution! But I am modeling the theory. I have modeled the random system, and I have also shown how a selectable intermediate can be added to the random modle, still having a quantitative evaluation. If more intermediates are known, they can be considered in the same way. Indeed, even that single intermediate I have assumed does not exist. So, if you want to affirm that basic protein domains evolved by gradual variation bearing differential reproduction, be my guest. Show one example of such a transition. Show what the intermediates were, or could have been. Explain why they were selectable. And then, with real intermediates on the table, we will be able to apply my method to compute if the system is credible. I gave a clear example: a variant sequence that leads to light colouring on a moth affects the probability distribution of deaths by predation. Well, if you want to mopdify my model by saying tyhat the presence of a selectable intermediate will increase the probability of reproduction of (how much? you say), and not of 100% (that was my maximal assumption), that only means that you can do the same computations, with my same method, and the difference in probability between the pure random system and the mioxed system will be less than what I have found. I have simply assumed the most favorable situation for the darwinian algorithm.gpuccio
January 31, 2012 at 01:37 AM PDT
gpuccio:
I don’t know if you are really interested in a serious discussion (champignon evidently is not).
gpuccio, I engaged your argument directly, using quotes from what you wrote, and showed that according to your own statements, dFSCI cannot tell us whether something could have evolved. If you won't stand behind what you wrote, why should the rest of us take you seriously? And if you disagree with yourself and wish to retract your earlier statements, please be honest and admit it. Show us exactly where you believe your mistakes are and how you wish to correct them. My earlier comment:
gpuccio, By your own admission, dFSCI is useless for ruling out the evolution of a biological feature and inferring design. Earlier in the thread you stressed that dFSCI applies only to purely random processes:
As repeatedly said, I use dFSCI only to model the probabilitites of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately.
But evolution is not a purely random process, as you yourself noted:
b) dFSCI, or CSI, shows me that it could not have come out as the result of pure RV. c) So, some people have proposed an explanation based on a mixed algorithm: RV + NS.
And since no one in the world claims that the eye, the ribosome, the flagellum, the blood clotting cascade, or the brain came about by “pure RV”, dFSCI tells us nothing about whether these evolved or were designed. It answers a question that no one is stupid enough to ask. ["Could these have arisen through pure chance?"] Yet elsewhere you claim that dFSCI is actually an indicator of design:
Indeed, I have shown two kinds of function for dFSCI: being an empirical marker of design, and helping to evaluate the structure function relationship of specific proteins.
That statement is wildly inconsistent with the other two. I feel exactly like eigenstate:
That’s frankly outrageous — dFSCI hardly even rises to the level of ‘prank’ if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out…
You have an obligation to make it clear in future discussions that dFSCI is utterly irrelevant to the “designed or evolved” question. In fact, since dFSCI is useless, why bring it up at all? The only function it seems to serve is as pseudo-scientific window dressing.
And despite your continual references to "post 34 and following" in this thread, you make the same mistakes there as you do in this thread: 1. You assume a predefined target, which evolution never has. 2. You assume blind search, which is not how evolution operates. Your only "concession" is to model two consecutive blind searches instead of one, as if that were enough to turn your caricature into an accurate representation of evolution. 3. Even granting your assumptions, you get the math wrong, as Elizabeth pointed out to you. 4. You make other wild, unsupported assumptions and you pick the numbers used in your example so that - surprise! - the conclusion is design. I hope you'll take up eigenstate's challenge:
I claim you have not, cannot, and will not provide a mathematical model for “RV” in your dFSCI argument that captures the key, essential dynamic of the “RV” in “RV + NS” — incremental variations across iterations with feedback integrations accumulating across those same iterations. This is what makes “Methinks it is like a weasel” impossible per your contrived probabilities and producible in just dozens of iterations with a cumulative feedback loop.
Given your past behavior, I'm not holding my breath.
champignon
January 30, 2012 at 11:05 PM PDT
According to the AVIDA data (small genomes, asexually reproducing population) virtual organisms that can perform complex logic operations evolve from starter organisms that do no more than reproduce (no logic functions).
Given totally unrealistic parameters, mutation rates and everything else. The Sanford paper demonstrates what happens when reality hits.
Joe
January 30, 2012 at 05:29 PM PDT
All human kids get 1/2 from dad and 1/2 from mom. I am pretty sure that is par for the course wrt sexual reproduction.
Yes, but that doesn't mean you are "throwing out" half of each genome. It means that kids get half of each parent genotype (which have a lot in common to start with). And if there's more than one kid, more than half the parental genetic material will get passed into the next generation. And in any case, that material will be found in other members of the population. Only rarely will sequences be completely lost.
But anyway starting with an asexually reproducing population (with a small genome) stochastic processes cannot construct anything more complex-> that is according to the data.
No. According to the AVIDA data (small genomes, asexually reproducing population) virtual organisms that can perform complex logic operations evolve from starter organisms that do no more than reproduce (no logic functions). So yes, those stochastic processes do exactly what you say they can not - enable a population to evolve from "can do no logic functions" to "can do complex logic functions".
Elizabeth Liddle
January 30, 2012 at 04:02 PM PDT
And when you add sexual reproduction you have to throw out 1/2 of each genome- do you do that?
No, you don’t throw out half of each genome.
All human kids get 1/2 from dad and 1/2 from mom. I am pretty sure that is par for the course wrt sexual reproduction. But anyway starting with an asexually reproducing population (with a small genome) stochastic processes cannot construct anything more complex-> that is according to the data.
Joe
January 30, 2012 at 03:50 PM PDT
But the first organisms would have been asexually reproducing with small genomes.
yes.
And when you add sexual reproduction you have to throw out 1/2 of each genome- do you do that?
No, you don't throw out half of each genome. Nor do you even throw out half of each genotype, unless each parental couple only produce one offspring. Obviously that's not how you set it up, otherwise your population would go extinct. And even if some couples do only produce one offspring, you still have samples of their un-passed on genetic sections all over the population. That's why evolution works much faster in sexually reproducing populations (I make mine hermaphroditic though, as it saves time).
Elizabeth Liddle
January 30, 2012 at 03:42 PM PDT
But the first organisms would have been asexually reproducing with small genomes. And when you add sexual reproduction you have to throw out 1/2 of each genome- do you do that?
Joe
January 30, 2012 at 03:34 PM PDT
@gpuccio#50,
I don’t know if you are really interested in a serious discussion (champignon evidently is not). If you are, I invite you too to read my posts here: https://uncommondescent.com.....selection/ (post 34 and following) and comment on them, instead of just saying things that have no meaning.
I've read through that section, more than once now, thank you. That, combined with the key insights gained from comments made by Dr. Liddle and petrushka were the catalyst for "getting it", in terms of your views on dFSCI. I'm terribly disappointed in what I came to realize was the substance of your argument/metric, but I don't think it's a matter of not devoting time to understand it. The disappointment comes from understanding. I was much more interested and hopeful you were on to something when I was confused by what you were saying.
That is both wrong and unfair. I have addressed evolutionary processes in great detail. Read and comment, if you like. And yes, English is not my primary language at all, but I don't believe there is any mystery or misunderstanding in the way I use RV. It means random variation, exactly as you thought.
No, it can't mean "random variation" as I thought, because random variation as I thought spreads those variations ACROSS GENERATIONS. That means the variation spans iterations, and because that variation spans iterations, it can (and does) incorporate accumulative feedback from the environment. As soon as you regard "random variation" as variation across generations in populations that incorporate feedback loops, your probabilistic math goes right out the window. Totally useless. This is precisely what Dawkins was reacting to with the Weasel example. From his book:
I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence
Given the setup of the thought experiment, the "one shot" odds of a random typing on the 26-char keyboard are greater that 10^39 against "Methinks it is like a weasel". But in just a few dozen iterations, based on the VARIATION ACROSS GENERATIONS WITH A CUMULATIVE FEEDBACK LOOP incorporated into it, the target string is produced. It's unimaginable that you are not familiare with Dawkins' Weasel argument for the importance of cumulative feedback loops, and yet, your dFSCI NOWHERE accounts for the effects of feedback loops interstitial with random variations. Every single appeal you've made in dFSCI (and I've read a LOT from you on this now) is "single shuffle" random sampling. If I'm wrong, I stand to be corrected, and invite you to point me to just ONE PLACE where you've applied your probability calculations in a way that incorporates the accumulative feedback across many iterations as those variations take place. If you are unable to do so, then I suggest I've been more than fair, I've been a chump hoodwinked by obtuse arguments here. Shame on me for being a chump, if so, but in that case, you've no basis for complaining about unfair treatment, and have been the benefactor of generous charity in reading your polemics that you in no way have earned, given what they actually entail. Show me one place where you've applied your math across generations where those generations each have their own random changes, and incorporate feedback, and I think we will again have something relevant to evolution and/or design to discuss. Barring that, you're just wasting bandwidth here in committing the tornado-in-a-junkyard fallacy, obscured by vague and cryptic and inchoate prose surrounding it.
Excuse me, in English, I believe, "random variation" means just what it means: variation caused by random events.
Variation across generations with feedback loops -- which is what is entailed by evolutionary models -- produces totally different calculations than "random samples". A single configuration pulled at random from a phase space will easily produce vanishingly small probabilities for that configuration. An ensemble of configurations that iterate and vary over generations with accumulating feedback from the environment can (and does) "climb hills", reducing over those iterations the probabilities to favor and even probable (or inevitable) odds. Everything depends on the inclusion of iterations with feedback, coincident with those variations. It doesn't matter if you think that Joe Q. Public on the street supposes "random variation" is just fine as a term for a "random one-time sample". That isn't cognizant of the evolutionary dynamics, and as such, is just tornado-in-a-junkyard thinking.
The only mechanism not included in RV is NS. As you can verify if you read my posts, I have modeled both RV (Using the concept of dFSCI and NS. Whatever you folks may like to repeat, dFSCI is very useful in modeling the neo darwinian algorithm.
I do not know of any supporter of Darwinian theory that would recognize evolutionary theory AT ALL in your calculations. You don't include any math for variations ACROSS GENERATIONS. You say "I have modeled both RV...", but you HAVEN'T modeled RV as is denoted by "RV + NS". The "RV" in "RV + NS" spreads incremental variances across generations, with feedback also accumulating across those generations. I claim you have not, cannot, and will not provide a mathematical model for "RV" in your dFSCI argument that captures the key, essential dynamic of the "RV" in "RV + NS" -- incremental variations across iterations with feedback integrations accumulating across those same iterations. This is what makes "Methinks it is like a weasel" impossible per your contrived probabilities and producible in just dozens of iterations with a cumulative feedback loop. The math will tell the tale here. We great thing about this is we don't need to rely on polemics or bluster. All we have to do to succeed in our arguments is show our math so everyone can see it, test it, and judge the results for themselves. If you aren't just wasting my time and the time of so many others here, please let me invite you to SHOW YOUR MATH, and demonstrate with an applied example how you calculate the probabilistic resources you use for your "RV" and the probabilities that actualy obtain from the numerators and denominators you identify. This can and should be worked out, agreeably and objectively, by just having each of us support what we say with the applied maths.eigenstate
January 30, 2012 at 03:21 PM PDT
I understand too, that these “creatures” replicate asexually.
Yes, that's my understanding. I was quite surprised. I usually mate my critters, because they evolve much faster that way :)
Elizabeth Liddle
January 30, 2012 at 03:18 PM PDT
And BTW, asexually reproducing organisms with small genomes is allegedly how the diversity started.
Joe
January 30, 2012 at 03:11 PM PDT
Whatever, I am not convinced by anything evos say- way too many problems with their "models". So if you have anything, anything at all that supports the claim that stochastic processes can construct new, useful multi-protein configurations I'd be very glad to hear of it!
Joe
January 30, 2012 at 03:10 PM PDT
I'll have a go at trying to pinpoint it: I think what you may be saying is that new protein domains are too "brittle" to have evolved - that, in fitness landscape terms, each is separated from its nearest possible relative by a lethal ravine? And that evolutionists are at a loss to explain how incremental, non-lethal variants could have bridged those ravines?
Elizabeth Liddle
January 30, 2012 at 03:01 PM PDT
@49.1.1.1.2 Well, Joe, as I understand it, the "organisms" in AVIDA have very much smaller genomes than in "real life", and possibly the population sizes are somewhat limited. I understand too, that these "creatures" replicate asexually. My reading indicates that population geneticists would be very unsurprised if there was a build-up of deleterious mutations in a small population of asexually reproducing organisms with small genomes. Is not Sanford a YEC, maintaining that all life was created a few thousand years ago and is even now heading for genetic meltdown? Did he not, in a book he wrote about genetic entropy, use the biblically-reported long lives of Biblical patriarchs as evidence that the human genome was deteriorating? Would he not expect that faster-reproducing species would be teetering on the brink of extinction by now? Yet bacteria, baboons, and blue whales are, as far as I am aware, genetically healthy. You'd think, too, that all those thousands upon thousands of generations of E. coli that Lenski's lab bred in their long-term experiments would have revealed some evidence of genetic entropy if it was such a problem. So far as I am aware, no such thing appeared. No, I'm not convinced by the work you cite; there are too many problems with the model. But if you have any more evidence for your stance (and surely this must be a productive area for peer-reviewed ID science), I'd be very glad to hear of it!Bydand
January 30, 2012 at 02:27 PM PDT
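The expectation Bydand mentions, that deleterious mutations build up in a small asexual population, is the textbook "Muller's ratchet". The toy simulation below is not drawn from AVIDA, Sanford's work, or Lenski's data; the population size, per-generation mutation probability, and selection coefficient are invented purely to show the ratchet turning.

```python
import random

def mullers_ratchet(pop_size=50, mut_prob=0.5, s=0.02, generations=500):
    """Toy model of deleterious-mutation build-up in a small asexual population.

    Each individual is just a count of deleterious mutations; fitness is
    (1 - s) ** count.  Parents are drawn in proportion to fitness, and each
    offspring gains one new mutation with probability mut_prob (a crude
    stand-in for a Poisson mutation process).  No recombination occurs.
    """
    population = [0] * pop_size  # everyone starts mutation-free
    for gen in range(1, generations + 1):
        weights = [(1 - s) ** k for k in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        population = [k + (1 if random.random() < mut_prob else 0) for k in parents]
        if gen % 100 == 0:
            # Once the least-loaded class is lost, asexual reproduction cannot restore it.
            print(f"generation {gen}: least-loaded individual carries {min(population)} mutations")

mullers_ratchet()
```

With these (assumed) settings the least-loaded class is lost again and again, which is why small asexual populations with high per-genome mutation rates are the standard case where such decay is expected.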
OK, but tbh I think it is you who are confused:
It's not the same with NS. In NS, as I have told you many times, there is a specific necessity relation between the function of the varied sequence and replication. Now, forget for a moment your adaptational scenarios of traditional darwinism,
Why should I "forget" the very system we are discussing?! Weirdly, I've pointed out several times that you are "forgetting" the environment-phenotype relationship, and you say you aren't - then you tell me to "forget" it! But OK, for the sake of discussion, I will put it to one side....
and try to think a little in molecular terms and in terms of biochemical activity, exactly what darwinism cannot explain. In terms of molecular activity of an enzyme, you can in most cases trace a specific necessity relationship between that activity and replication. For instance, if DNA polymerase does not work, the cell cannot replicate. If coagulation does not work, the individual often dies. And so on.
Sure. Clearly if a variant sequence is incompatible with life or reproduction, the phenotype dies without issue. Only variants that are compatible with life and reproduction ever make it beyond one individual.
There is absolutely no reason to "draw" a "cause-and-effect" relation from a probability distribution. You are really confused here. Probability distributions describe situations where the cause and effect relation is not known, or not quantitatively analyzable. A definite cause-effect relation will give some specific "structure" to data which is not explained by the probability distribution. That's a good way, as you know, to detect causal effects in data.
Well, I can only disagree, and say that from where I am standing it really does seem as though it is you who are confused. Or at any rate it is not clear what you are saying. Certainly we can draw conclusions about causes and effects from probability distributions, and we do so every time we conduct a t test. Not only that, but when we model certain causes and effects, we can model them as probability distributions. I gave a clear example: a variant sequence that leads to light colouring on a moth affects the probability distribution of deaths by predation. So you will have to clarify what you are saying, because what you have written, on the face of it, makes no sense. And I just don't find the rest of your post clarifies it any further. You seem to have constructed a model that doesn't reflect what anyone thinks actually happens. And as for evading into "vague" definitions (if that's what you mean, I'm not sure, given the typos!), well, I'm certainly not trying to evade anything. I'm trying to tie down those definitions as tightly as I can! And the fact is (and it is a fact) that the theory of evolution posits a model in which stochastic processes result in genetic variations with differential probabilities (again, stochastic) of reproductive success. You may not believe that model is a good fit to the data, but modelling something different, and then showing that your model doesn't work, isn't going to falsify the theory of evolution, because you aren't modelling the theory of evolution! And no, the theory of evolution is not a "useless scientific object". From it we derive testable hypotheses that are tested daily and deliver important findings that have real benefits, as well as increasing our understanding of the amazing world we live in. But really, gpuccio, we are not communicating at all here. I know it is frustrating for you, but your posts simply are not making sense to me. I can't actually parse what you are saying. And what you seem to be saying seems to me to be demonstrably not true. Natural selection cannot be other than a stochastic process, except, I guess, in the extreme case of variants that are incompatible with life, and those don't get passed on, so are irrelevant to the process. What is it that I'm not seeing? What is it that you are not seeing?Elizabeth Liddle
January 30, 2012 at 01:37 PM PDT
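Elizabeth Liddle's moth example can be made concrete with a short simulation: a variant that only changes the probability of being eaten still shifts allele frequencies over generations, even though every individual outcome is stochastic. The survival probabilities and population size below are assumptions for illustration only, not measured values.

```python
import random

def moth_generation(freq_light, pop_size=1000,
                    survive_light=0.6, survive_dark=0.4):
    """One round of differential predation on light vs. dark moths.

    Each moth survives with a probability set by its colour; survivors
    then repopulate, so the function returns the new light-variant frequency.
    """
    light = sum(random.random() < freq_light for _ in range(pop_size))
    dark = pop_size - light
    light_survivors = sum(random.random() < survive_light for _ in range(light))
    dark_survivors = sum(random.random() < survive_dark for _ in range(dark))
    survivors = light_survivors + dark_survivors
    return light_survivors / survivors if survivors else 0.0

freq = 0.1  # the light variant starts rare
for generation in range(30):
    freq = moth_generation(freq)
print(f"light-variant frequency after 30 generations: {freq:.2f}")
```

The run-to-run numbers differ, but the direction of change does not: this is what "modelling a cause as a probability distribution" looks like in practice.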
There is absolutely no reason to "draw" a "cause-and-effect" relation from a probability distribution.
Quantum effects come to mind. Boyle's Law. Maybe others.Petrushka
January 30, 2012 at 01:27 PM PDT
Elizabeth: I am afraid we will never agree on that. I do think you are confused. The "necessity" created by the laws of chemistry is exactly the kind of necessity that is at the base of random systems, like the tossing of a die: necessity it is, but we can describe it only probabilistically. It's not the same with NS. In NS, as I have told you many times, there is a specific necessity relation between the function of the varied sequence and replication. Now, forget for a moment your adaptational scenarios of traditional darwinism, and try to think a little in molecular terms and in terms of biochemical activity, exactly what darwinism cannot explain. In terms of molecular activity of an enzyme, you can in most cases trace a specific necessity relationship between that activity and replication. For instance, if DNA polymerase does not work, the cell cannot replicate. If coagulation does not work, the individual often dies. And so on. Here it is not so much the case of evading a predator, but of having the fundamental functions by which the cell, or the multicellular being, survives. Those functions are incredibly sophisticated at the molecular level. And we have to explain them. There is absolutely no reason to "draw" a "cause-and-effect" relation from a probability distribution. You are really confused here. Probability distributions describe situations where the cause and effect relation is not known, or not quantitatively analyzable. A definite cause-effect relation will give some specific "structure" to data which is not explained by the probability distribution. That's a good way, as you know, to detect causal effects in data. Causal effects that can be described (you can say what the cause is, how it acts, and trace the connection in explicit terms) are not "drawn" from a probability distribution. They just "modify" the observed data. If you superimpose a causal relationship (like a sequence that improves survival), you can still see the final effect as probabilistic if it is mixed with other random effects that you cannot know explicitly. So, I agree with you that, if a sequence has a positive effect on survival, and is selectable, it will tend to propagate in the population in a way that is not completely predictable, because too many other unknown factors contribute to the final effect. But that does not make the known effect of the sequence similar to the other unknown factors. Because we know it, we understand it, we can reason on it, and we can model it deterministically. When I assume that a selectable gene is optimally selected, conserved and propagated, I am not making an error: I am simply considering the "best case" for evolution through NS. Although that perfect scenario will never happen, it is however a threshold of what can happen. In any other, more realistic, scenario, the effect of NS will be weaker. That is a perfectly reasonable procedure. It allows us to compute the effect of NS if it were the strongest possible, and to have quantitative predictions for the behaviour of the system under those conditions. And I am not ignoring other sources of variance. I have considered all possible sources of variance in my probabilistic modeling of RV. Then I have considered the maximum beneficial effect that NS can have in improving the result: you would say, with your strange terminology, "biasing" it in favour of the functional result. So, I really don't understand your objections. I believe that my reasoning is correct.
Your objections are not offering any better way to model your proposed mechanism: as darwinists often do, you evade again into vague definitions, and the result is that you fight against any quantitative analysis, because darwinism itself fears quantitative analysis. If you really think that my reasoning is wrong, you should propose how to model a specific system, and how to verify that it can do what you say it has done. We cannot go on believing in the fairy tale of neo-darwinism only out of faith and vague definitions. If there is no way to verify or falsify what your so-called stochastic system can do or not do in reality, it is a completely useless scientific object.gpuccio
January 30, 2012 at 01:14 PM PDT
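One way to see what gpuccio's "best case" assumption amounts to in numbers is to compare it with a standard stochastic treatment. The sketch below is neither gpuccio's model nor a full evolutionary model; it simply contrasts the assumption that every selectable variant is retained (probability 1) with a minimal haploid Wright-Fisher simulation, where a single new beneficial mutation of advantage s fixes with probability of roughly 2s (Haldane's classical approximation). The population size and selection coefficient are assumed values.

```python
import numpy as np

def fixation_rate(s=0.02, pop_size=1000, trials=5000, seed=0):
    """Fraction of new beneficial mutations (single copy, advantage s) that
    reach fixation in a haploid Wright-Fisher population."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        count = 1
        while 0 < count < pop_size:
            # Expected post-selection frequency, then binomial sampling drift.
            p = count * (1 + s) / (count * (1 + s) + (pop_size - count))
            count = rng.binomial(pop_size, p)
        fixed += count == pop_size
    return fixed / trials

# The "best case" assumption treats every selectable variant as retained
# with probability 1; the stochastic model gives roughly 2*s instead.
print("simulated fixation probability:", fixation_rate())
print("Haldane's approximation (2s):  ", 2 * 0.02)
```

Whether a fixation probability of 1 is a fair "upper bound" or an overwhelming overestimate (here, by a factor of about 25) is exactly the point under dispute in this thread.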
But I don’t see why, even if the origins of life were non-stochastic, subsequent happenings couldn’t be entirely stochastic.
So someone/something went through all the trouble to design living organisms and a place for them, and then left it all up to stochastic processes? As I said, that would be like saying the car is designed but motors around via stochastic processes. See also: Chase W. Nelson and John C. Sanford, "The effects of low-impact mutations in digital organisms," Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9
Abstract: Background: Avida is a computer program that performs evolution experiments with digital organisms. Previous work has used the program to study the evolutionary origin of complex features, namely logic operations, but has consistently used extremely large mutational fitness effects. The present study uses Avida to better understand the role of low-impact mutations in evolution. Results: When mutational fitness effects were approximately 0.075 or less, no new logic operations evolved, and those that had previously evolved were lost. When fitness effects were approximately 0.2, only half of the operations evolved, reflecting a threshold for selection breakdown. In contrast, when Avida's default fitness effects were used, all operations routinely evolved to high frequencies and fitness increased by an average of 20 million in only 10,000 generations. Conclusions: Avidian organisms evolve new logic operations only when mutations producing them are assigned high-impact fitness effects. Furthermore, purifying selection cannot protect operations with low-impact benefits from mutational deterioration. These results suggest that selection breaks down for low-impact mutations below a certain fitness effect, the selection threshold. Experiments using biologically relevant parameter settings show the tendency for increasing genetic load to lead to loss of biological functionality. An understanding of such genetic deterioration is relevant to human disease, and may be applicable to the control of pathogens by use of lethal mutagenesis.
IOW stochastic processes just don't measure up to the task.Joe
January 30, 2012 at 12:56 PM PDT
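The "selection threshold" described in the Nelson and Sanford abstract corresponds to a standard population-genetics effect: selection cannot distinguish mutations whose fitness cost is small compared with 1/N from neutral ones, so they can drift to fixation anyway. The sketch below is not a reproduction of the Avida experiments; it reuses the minimal Wright-Fisher setup from the earlier sketch with an assumed population size of 200 and illustrative selection coefficients.

```python
import numpy as np

def deleterious_fixation_rate(s, pop_size=200, trials=20000, seed=1):
    """Fraction of new deleterious mutations (fitness 1 - s) that drift to
    fixation in a haploid Wright-Fisher population of the given size."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        count = 1
        while 0 < count < pop_size:
            p = count * (1 - s) / (count * (1 - s) + (pop_size - count))
            count = rng.binomial(pop_size, p)
        fixed += count == pop_size
    return fixed / trials

print("neutral expectation (1/N):", 1 / 200)
for s in (0.0005, 0.005, 0.05):  # costs well below, near, and well above 1/N
    print(f"s = {s}: simulated fixation rate = {deleterious_fixation_rate(s):.4f}")
```

Mutations with costs far below 1/N fix at close to the neutral rate, while those well above it essentially never do; where that threshold sits, and how much damage sub-threshold mutations actually do, is the substance of the disagreement here.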
Oops! sorry for typo - please substitute "living" for "lieing"Bydand
January 30, 2012 at 11:48 AM PDT
It was as good an answer as I can give - I simply don't know how, or whether, a determination of the stochastic nature of these processes was made. But I don't see why, even if the origins of life were non-stochastic, subsequent happenings couldn't be entirely stochastic. So have you any evidence or data to support your inference; or can you say why that inference is justified? There are, I believe, those who think that although a designer gave life its start, life was then pretty much left to get on with lieing. Do you think this incorrect? Why?Bydand
January 30, 2012 at 11:46 AM PDT
Umm, that doesn't answer my question. However, I do have a reason to doubt gene duplications are stochastic - the OoL: the only reason to infer gene duplication is a stochastic process is if living organisms arose from non-living matter via stochastic processes. IOW, as we have been saying all along, the origin is what counts. We do not say cars are designed but the way they get around is entirely stochastic.Joe
January 30, 2012 at 10:15 AM PDT