Uncommon Descent | Serving The Intelligent Design Community

Axe on specific barriers to macro-level Darwinian Evolution due to protein formation (and linked islands of specific function)


A week ago, VJT put up a useful set of excerpts from Axe’s 2010 paper on proteins and the barriers they pose to Darwinian (blind watchmaker) evolution. During onward discussion, it proved useful to focus on some excerpts where Axe spoke to numerical considerations and the linked idea of islands of specific function deeply isolated in amino acid sequence and protein fold domain space, though he did not use those exact terms.

I think it worth while to headline the clips for reference, instead of leaving them deep in a discussion thread:

_________________

ABSTRACT: >> Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a minuscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem—the sampling problem—was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence . . . >>

Pp 5 – 6: >> . . . we need to quantify a boundary value for m, meaning a value which, if exceeded, would solve the whole sampling problem. To get this we begin by estimating the maximum number of opportunities for spontaneous mutations to produce any new species-wide trait, meaning a trait that is fixed within the population through natural selection (i.e., selective sweep). Bacterial species are most conducive to this because of their large effective population sizes.³ So let us assume, generously, that an ancient bacterial species sustained an effective population size of 10^10 individuals [26] while passing through 10^4 generations per year. After five billion years, such a species would produce a total of 5 × 10^23 (= 5 × 10^9 × 10^4 × 10^10) cells that happen (by chance) to avoid the small-scale extinction events that kill most cells irrespective of fitness. These 5 × 10^23 ‘lucky survivors’ are the cells available for spontaneous mutations to accomplish whatever will be accomplished in the species. This number, then, sets the maximum probabilistic resources that can be expended on a single adaptive step. Or, to put this another way, any adaptive step that is unlikely to appear spontaneously in that number of cells is unlikely to have evolved in the entire history of the species.

In real bacterial populations, spontaneous mutations occur in only a small fraction of the lucky survivors (roughly one in 300 [27]). As a generous upper limit, we will assume that all lucky survivors happen to receive mutations in portions of the genome that are not constrained by existing functions,⁴ making them free to evolve new ones. At most, then, the number of different viable genotypes that could appear within the lucky survivors is equal to their number, which is 5 × 10^23. And again, since many of the genotype differences would not cause distinctly new proteins to be produced, this serves as an upper bound on the number of new protein sequences that a bacterial species may have sampled in search of an adaptive new protein structure.

Let us suppose for a moment, then, that protein sequences that produce new functions by means of new folds are common enough for success to be likely within that number of sampled sequences. Taking a new 300-residue structure as a basis for calculation (I show this to be modest below), we are effectively supposing that the multiplicity factor m introduced in the previous section can be as large as 20^300 / 5×10^23 ~ 10^366. In other words, we are supposing that particular functions requiring a 300-residue structure are realizable through something like 10^366 distinct amino acid sequences. If that were so, what degree of sequence degeneracy would be implied? More specifically, if 1 in 5×10^23 full-length sequences are supposed capable of performing the function in question, then what proportion of the twenty amino acids would have to be suitable on average at any given position? The answer is calculated as the 300th root of (5×10^23)^-1, which amounts to about 83%, or 17 of the 20 amino acids. That is, by the current assumption proteins would have to provide the function in question by merely avoiding three or so unacceptable amino acids at each position along their lengths.
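[–> As a quick cross-check of the arithmetic in this paragraph, here is a minimal Python sketch; the variable names are illustrative, not from the paper:

```python
# Axe's back-of-envelope check: if 1 in 5e23 full-length 300-residue
# sequences performed the function, what average per-position tolerance
# would that imply?
N_SAMPLES = 5e23              # upper bound on sequences sampled ("lucky survivors")
L = 300                       # chain length used in the paper
f = N_SAMPLES ** (-1.0 / L)   # the 300th root of (5e23)^-1
print(f"{f:.3f}")             # -> 0.834, i.e. about 83%
print(round(20 * f))          # -> 17 of the 20 amino acids per position
```

This reproduces the paper's "about 83%, or 17 of the 20 amino acids" figure.]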

No study of real protein functions suggests anything like this degree of indifference to sequence. In evaluating this, keep in mind that the indifference referred to here would have to characterize the whole protein rather than a small fraction of it. Natural proteins commonly tolerate some sequence change without complete loss of function, with some sites showing more substitutional freedom than others. But this does not imply that most mutations are harmless. Rather, it merely implies that complete inactivation with a single amino acid substitution is atypical when the starting point is a highly functional wild-type sequence (e.g., 5% of single substitutions were completely inactivating in one study [28]). This is readily explained by the capacity of well-formed structures to sustain moderate damage without complete loss of function (a phenomenon that has been termed the buffering effect [25]). Conditional tolerance of that kind does not extend to whole proteins, though, for the simple reason that there are strict limits to the amount of damage that can be sustained.

A study of the cumulative effects of conservative amino acid substitutions, where the replaced amino acids are chemically similar to their replacements, has demonstrated this [23]. Two unrelated bacterial enzymes, a ribonuclease and a beta-lactamase, were both found to suffer complete loss of function in vivo at or near the point of 10% substitution, despite the conservative nature of the changes. Since most substitutions would be more disruptive than these conservative ones, it is clear that these protein functions place much more stringent demands on amino acid sequences than the above supposition requires.

Two experimental studies provide reliable data for estimating the proportion of protein sequences that perform specified functions [–> note the terms]. One study focused on the AroQ-type chorismate mutase, which is formed by the symmetrical association of two identical 93-residue chains [24]. These relatively small chains form a very simple folded structure (Figure 5A). The other study examined a 153-residue section of a 263-residue beta-lactamase [25]. That section forms a compact structural component known as a domain within the folded structure of the whole beta-lactamase (Figure 5B). Compared to the chorismate mutase, this beta-lactamase domain has both larger size and a more complex fold structure.

In both studies, large sets of extensively mutated genes were produced and tested. By placing suitable restrictions on the allowed mutations and counting the proportion of working genes that result, it was possible to estimate the expected prevalence of working sequences for the hypothetical case where those restrictions are lifted. In that way, prevalence values far too low to be measured directly were estimated with reasonable confidence.

The results allow the average fraction of sampled amino acid substitutions that are functionally acceptable at a single amino acid position to be calculated. By raising this fraction to the power l, it is possible to estimate the overall fraction of working sequences expected when l positions are simultaneously substituted (see reference 25 for details). Applying this approach to the data from the chorismate mutase and the beta-lactamase experiments gives a range of values (bracketed by the two cases) for the prevalence of protein sequences that perform a specified function. The reported range [25] is one in 10^77 (based on data from the more complex beta-lactamase fold; l = 153) to one in 10^53 (based on the data from the simpler chorismate mutase fold, adjusted to the same length: l = 153). As remarkable as these figures are, particularly when interpreted as probabilities, they were not without precedent when reported [21, 22]. Rather, they strengthened an existing case for thinking that even very simple protein folds can place very severe constraints on sequence. [–> Islands of function issue.]

Rescaling the figures to reflect a more typical chain length of 300 residues gives a prevalence range of one in 10^151 to one in 10^104. On the one hand, this range confirms the very highly many-to-one mapping of sequences to functions. The corresponding range of m values is 10^239 (= 20^300 / 10^151) to 10^286 (= 20^300 / 10^104), meaning that vast numbers of viable sequence possibilities exist for each protein function. But on the other hand it appears that these functional sequences are nowhere near as common as they would have to be in order for the sampling problem to be dismissed. The shortfall is itself a staggering figure—some 80 to 127 orders of magnitude (comparing the above prevalence range to the cutoff value of 1 in 5×10^23). So it appears that even when m is taken into account, protein sequences that perform particular functions are far too rare to be found by random sampling. >>
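[–> The rescaling and the 80-to-127-orders-of-magnitude shortfall can be reproduced from the two reported prevalence exponents; a hedged Python sketch, working in log10 to avoid overflow (names are illustrative):

```python
import math

log20 = math.log10(20)              # log10 of per-residue possibilities
cutoff = math.log10(5e23)           # ~23.7: the probabilistic-resources bound

# reported prevalence exponents at l = 153 (one in 10^77 and one in 10^53)
for name, exp153 in (("beta-lactamase", -77), ("chorismate mutase", -53)):
    exp300 = exp153 * 300 / 153     # rescale to a 300-residue chain
    m = 300 * log20 + exp300        # log10 of the multiplicity factor m
    shortfall = -exp300 - cutoff    # orders of magnitude short of the cutoff
    print(name, round(exp300), round(m), round(shortfall))
# -> beta-lactamase:    prevalence ~10^-151, m ~10^239, shortfall ~127 orders
# -> chorismate mutase: prevalence ~10^-104, m ~10^286, shortfall ~80 orders
```

The rounded outputs match the paper's 10^151/10^104 prevalence range, the 10^239 to 10^286 range of m, and the 80-to-127-orders shortfall.]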

Pp 9 – 11: >> . . . If aligned but non-matching residues are part-for-part equivalents, then we should be able to substitute freely among these equivalent pairs without impairment. Yet when protein sequences were even partially scrambled in this way, such that the hybrids were about 90% identical to one of the parents, none of them had detectable function. Considering the sensitivity of the functional test, this implies the hybrids had less than 0.1% of normal activity [23]. So part-for-part equivalence is not borne out at the level of amino acid side chains.

In view of the dominant role of side chains in forming the binding interfaces for higher levels of structure, it is hard to see how those levels can fare any better. Recognizing the non-generic [–> that is, specific and context sensitive] nature of side chain interactions, Voigt and co-workers developed an algorithm that identifies portions of a protein structure that are most nearly self-contained in the sense of having the fewest side-chain contacts with the rest of the fold [49]. Using that algorithm, Meyer and co-workers constructed and tested 553 chimeric proteins that borrow carefully chosen blocks of sequence (putative modules) from any of three natural beta lactamases [50]. They found numerous functional chimeras within this set, which clearly supports their assumption that modules have to have few side chain contacts with exterior structure if they are to be transportable.

At the same time, though, their results underscore the limitations of structural modularity. Most plainly, the kind of modularity they demonstrated is not the robust kind that would be needed to explain new protein folds. The relatively high sequence similarity (34–42% identity [50]) and very high structural similarity of the parent proteins (Figure 8) favors successful shuffling of modules by conserving much of the overall structural context. Such conservative transfer of modules does not establish the robust transportability that would be needed to make new folds. Rather, in view of the favorable circumstances, it is striking how low the success rate was. After careful identification of splice sites that optimize modularity, four out of five tested chimeras were found to be completely non-functional, with only one in nine being comparable in activity to the parent enzymes [50]. In other words, module-like transportability is unreliable even under extraordinarily favorable circumstances [–> these are not, generally speaking, standard bricks that will freely fit together in any plug-in compatible pattern to assemble a new structure] . . . .

Graziano and co-workers have tested robust modularity directly by using amino acid sequences from natural alpha helices, beta strands, and loops (which connect helices and/or strands) to construct a large library of gene segments that provide these basic structural elements in their natural genetic contexts [52]. For those elements to work as robust modules, their structures would have to be effectively context-independent, allowing them to be combined in any number of ways to form new folds. A vast number of combinations was made by random ligation of the gene segments, but a search through 10^8 variants for properties that may be indicative of folded structure ultimately failed to identify any folded proteins. After a definitive demonstration that the most promising candidates were not properly folded, the authors concluded that “the selected clones should therefore not be viewed as ‘native-like’ proteins but rather ‘molten-globule-like’” [52], by which they mean that secondary structure is present only transiently, flickering in and out of existence along a compact but mobile chain. This contrasts with native-like structure, where secondary structure is locked in to form a well-defined and stable tertiary fold . . . .

With no discernible shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins. The final thing to consider is how pervasive this problem is . . . Continuing to use protein domains as the basis of analysis, we find that domains tend to be about half the size of complete protein chains (compare Figure 10 to Figure 1), implying that two domains per protein chain is roughly typical. This of course means that the space of sequence possibilities for an average domain, while vast, is nowhere near as vast as the space for an average chain. But as discussed above, the relevant sequence space for evolutionary searches is determined by the combined length of all the new domains needed to produce a new beneficial phenotype. [–> Recall, courtesy Wiki, phenotype: “the composite of an organism’s observable characteristics or traits, such as its morphology, development, biochemical or physiological properties, phenology, behavior, and products of behavior (such as a bird’s nest). A phenotype results from the expression of an organism’s genes as well as the influence of environmental factors and the interactions between the two.”]

As a rough way of gauging how many new domains are typically required for new adaptive phenotypes, the SUPERFAMILY database [54] can be used to estimate the number of different protein domains employed in individual bacterial species, and the EcoCyc database [10] can be used to estimate the number of metabolic processes served by these domains. Based on analysis of the genomes of 447 bacterial species,¹¹ the projected number of different domain structures per species averages 991.¹² Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli,¹³ provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway.¹⁴ In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities,¹⁵ something the neo-Darwinian model falls short of by a very wide margin. >>
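[–> Footnote 15's range can be recovered by treating the measured per-domain prevalences as independent and multiplying across the three or four domains a new pathway needs. A sketch (the independence assumption is mine, following the paper's framing):

```python
# per-domain prevalence exponents from the two measured folds (l ~ 153):
# one in 10^53 for the simpler fold, one in 10^77 for the more complex one
SIMPLE, COMPLEX = -53, -77

print(991 / 263)        # ~3.8 projected domain folds per metabolic pathway
print(SIMPLE * 3)       # -> -159: 3 simple domains,  one in 10^159
print(COMPLEX * 4)      # -> -308: 4 complex domains, one in 10^308
```

Multiplying exponents this way reproduces the quoted one-in-10^159 to one-in-10^308 range exactly.]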
____________________

Those who argue for incrementalism, exaptation and fortuitous coupling, Lego brick-like modularity, or the like need to address these and similar issues. END

PS: For the objectors eager to queue up, just remember: the Darwinism support essay challenge, on actual evidence for the tree of life from the root up to the branches and twigs, is still open after over two years, with the following revealing Smithsonian Institution diagram showing the first reason why, right at the root of the tree of life:

[Image: Smithsonian Institution tree-of-life diagram (Darwin-ToL-full-size-copy)]

No root, no shoots, folks. (Where the root must include a viable explanation of gated encapsulation, protein-based metabolism and cell functions, code-based protein assembly, and the von Neumann self-replication facility keyed to reproducing the cell.)

Comments
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest.
When? I honestly believe I have provided more quotes from the book than you have. Am I wrong?
Mung
November 25, 2014 at 09:06 PM PDT
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest.
When? I honestly believe I have provided more quotes from the book than you have. Am I wrong?
Mung
November 21, 2014 at 05:09 PM PDT
ppolish: “MeThink, make a denominator gigantic and you end up with a really small fraction.”
Obviously. At higher dimensions, not only is the cube volume greater, the hypercube is clustered too. I will give an example:
Imagine your Facebook network. Your immediate friends will have interests similar to yours. As you venture out in your network and traverse your friend’s friend network, or further on to your friend’s friend’s friend network, you will encounter someone with a totally different interest (akin to a new function; this may or may not help in generating a new phenotype). Now imagine this Facebook network of yours balled up. It took many steps to reach the node where the person with a totally different interest exists on a 2D network. In your balled network, you have to travel at most half of those steps to reach the same person. Now imagine this in higher and higher dimensions (perhaps like crushing the balled Facebook further). You will find you have to travel not even a fraction of 1 step to reach the person, and you will find a huge number of persons with dissimilar interests (akin to new functions, many of them helping to build a new phenotype, or at least helping this generation survive better to start the search all over again with the advantage of the new phenotype) in just that fraction of 1 step. That is the reason the improbabilities of ID don’t matter.
I don’t think you are familiar with ‘search’. You should read up on your own ID concepts like ‘No Free Lunch’ and ‘Conservation of Information’ (yes, it is about search, though oddly named), which discuss search. If you want to read more about landscape search, read Axe’s papers, which talk of sparse landscapes.
ppolish: “And the ‘10 steps reduces drastically to a fraction of 1 step!’ – How do you take a tiny fraction of 1 step? Just think about stepping? I think Wagner was referring to distance, not tiny fractions of 1 step.”
I was talking of a random walk (which is a stochastic process); it is not a literal walk, of course, so although not exactly 1 step, it will be close to 1 step. Of course this may not be true for every hypercube network; the steps required may vary, and yes, it can be correlated to distance in the network.
ppolish: “BTW, is the programming code behind Wagner’s hyperastronomical library simulation public and subject to peer review? Just curious.”
Not sure about that, but both Wagner and Zurich University have a lot of related software and material: publications, software, data. You can search the University of Zurich website too.
Me_Think
November 20, 2014 at 10:48 AM PDT
MeThink, make a denominator gigantic and you end up with a really small fraction. And the "10 steps reduces drastically to a fraction of 1 step!" – how do you take a tiny fraction of 1 step? Just think about stepping? I think Wagner was referring to distance, not tiny fractions of 1 step. BTW, is the programming code behind Wagner's hyperastronomical library simulation public and subject to peer review? Just curious.
ppolish
November 20, 2014 at 10:09 AM PDT
ppolish @124
If there are random walkers, they are cheating by following the paths laid down for the guided walkers by the mathematically designed Hypercube. The random walkers have stumbled upon a secret of ID.
Where did guided walkers come from? What path are you talking about? The network is still a search and it is random; only, since the dimensions are high, the search space is reduced drastically. I think that concept has been discussed above in this thread. Here, @ 11:
Imagine a solution circle (the circle within which the solution exists) of radius 10 cm inside a 100 cm square search space. The area which needs to be searched for the solution is π × 10^2 = 314.16. The total search area is 100 × 100 = 10,000. The % area to be searched is (314.16/10000) × 100 = 3.14%. In 3 dimensions, the search volume will be 4/3 × π × 10^3 = 4188.79, and the space to search is now a cube (because of 3 dimensions) = 100^3. Thus the % of space to be searched falls to just 4188.79/100^3 = 0.42% only. The hypervolume of a sphere with dimension d and radius r is π^(d/2) × r^d / Γ(d/2 + 1); the hypervolume of a cube with side s is s^d. At 10 dimensions, the volume to search reduces to about 2.6 × 10^-8 %. But in nature, the actual search area is incredibly small. As Wagner points out in chapter six: “In the number of dimensions where our circuit library exists—get ready for this—the sphere contains neither 0.1 percent, 0.01 percent, nor 0.001 percent. It contains less than one 10^-100th of the library.”
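The shrinking sphere-to-cube ratio is easy to verify numerically. A small Python sketch using the standard Γ-function formula for the volume of a d-ball, with the radius-10, side-100 example from the comment (function name is mine):

```python
import math

def ball_fraction(d, r=10.0, side=100.0):
    """Fraction of a d-cube of the given side occupied by a d-ball of radius r."""
    ball = math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)
    return ball / side ** d

for d in (2, 3, 5, 10):
    print(d, f"{100 * ball_fraction(d):.3g} %")
# the searchable fraction collapses: ~3.14 % in 2D, ~0.419 % in 3D,
# and under 1e-7 % by 10 dimensions
```

With these parameters the 10-dimensional fraction comes out near 2.6 × 10⁻⁸ %, illustrating the comment's point that the fraction falls off dramatically with dimension.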
@ 26
The concept is quite simple: a ball (representing the search volume) with constant radius occupies ever-decreasing fractions of a cube’s volume as the dimension increases. I will quote Wagner himself: “This volume decreases not just for my example of a 15 percent ratio of volumes, but for any ratio, even one as high as 75 percent, where the volume drops to 49 percent in three dimensions, to 28 percent in four, to 14.7 percent in five, and so on, to ever-smaller fractions.” What this means: in a network of N nodes and N−1 neighbors, if in 1 dimension 10 steps are required to discover a new genotype/procedure, in higher dimensions those 10 steps reduce drastically to a fraction of 1 step!
HTH. I don't think I can explain any better.
Me_Think
November 20, 2014 at 09:15 AM PDT
Thank you MeThink, that makes sense. It appears a random walker would not need to travel much further than a guided walker to reach the destination. If there are random walkers, they are cheating by following the paths laid down for the guided walkers by the mathematically designed Hypercube. The random walkers have stumbled upon a secret of ID. Unless the mathematically elegant Hypercube structure just poofed into existence... not. MeThink, I'll understand if you need to do a face palm :)
ppolish
November 20, 2014 at 08:54 AM PDT
ppolish @ 122: The network is based on real genotype and metabolism data. You have to use computers to do a random walk because there is no other way. E.g., take the 5000 metabolisms required for life: the number of vertices of the hypercube graph (which is the representation of the network at 5000 dimensions) will be 2^n = 2^5000 ≈ 1.4 × 10^1505, and the number of edges of the graph will be 2^(n−1) × n = 2^4999 × 5000 ≈ 3.5 × 10^1508, so you need a cluster of computers to do all the network and random-walk calculations, based on real data.
Me_Think
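Those counts follow from the structure of the n-dimensional hypercube graph: 2^n vertices, each of degree n, so n × 2^(n−1) edges. A quick sketch checking the small case and the 5000-dimension figures from the comment:

```python
# n-dimensional hypercube graph: 2^n vertices, n * 2^(n-1) edges
# (each of the 2^n vertices has degree n; halve n * 2^n to avoid
# double-counting each edge)
def hypercube_counts(n):
    return 2 ** n, n * 2 ** (n - 1)

print(hypercube_counts(3))                  # ordinary cube: (8, 12)
v, e = hypercube_counts(5000)
print(len(str(v)), len(str(e)))             # 1506 and 1509 digits,
# i.e. ~1.4 x 10^1505 vertices and ~3.5 x 10^1508 edges, as stated
```

Python's arbitrary-precision integers make the exact 5000-dimension counts trivial to compute, even though no computer could enumerate the graph itself.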
November 20, 2014 at 03:33 AM PDT
MeThink, Wagner discovered and/or invented the hyperdimensional library in his high-powered computer lab. Did he test his idea in an actual bio lab? Is it even testable, or is it maybe more like a "multiverse" or "many worlds" kind of idea?
ppolish
November 19, 2014 at 03:59 PM PDT
Me_Think, I'm just trying to fill in the gaps that keiths promised to fill. In case you missed it: keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.
Feel free to quote from the book and help fill in the gaps.
Mung
November 18, 2014 at 07:29 PM PDT
Since keiths can't be bothered, I'll continue:
But the biggest mystery about evolution eluded his [Darwin's] theory, and he [Darwin] couldn't even get close to solving it.
Nothing new here. Nothing non-Darwinian. And certainly nothing anti-Darwinian. Move along.
Mung
November 18, 2014 at 07:24 PM PDT
Mung @ 188: Do you really think 'intelligence' in Wagner's book refers to an Intelligent Designer? Can you explain how you came to that astonishing conclusion?
Me_Think
November 18, 2014 at 07:22 PM PDT
Andreas Wagner presents a compelling, authoritative, and up-to-date case for bottom-up intelligence in biological evolution. - George Dyson
Mung
November 18, 2014 at 07:16 PM PDT
Mung, of course it reveals the hyperdimensional structure and how it can help in reducing the improbabilities of the new-phenotype 'search'.
Me_Think
November 18, 2014 at 07:12 PM PDT
A radical departure from the mainstream perspective on Darwinian evolution. - Rolf Dobelli
But not non-Darwinian. And certainly not anti-Darwinian. keiths sez so.
Mung
November 18, 2014 at 07:12 PM PDT
...reveals the astonishing hidden structure of evolution, long overlooked by biologists... - Philip Ball
Mung
November 18, 2014 at 07:08 PM PDT
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest.
When? keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.
Well gee, since I got tired of waiting for you to do more than posture:
...contains brand new scientific insights... - Matt Ridley
Nothing new here. Move along.
Mung
November 18, 2014 at 07:05 PM PDT
ppolish: "maybe 'oasis' instead of 'island' is better metaphor. Step in the direction of one grain of sand amidst the Innumerable grains of hyperastronomical sand." That's not just a different metaphor, but a different claim, which was that there exists no pathway. Your new metaphor is interesting, but we're considering selectable pathways, not neutral evolution, so we know which way to step. If there are only a few selectable paths, then evolution will eventually hit upon one. On the other hand, if there are a great multitude of pathways, it's possible that evolution could stall on local peaks; however, recombination allows jumping between local peaks.
Zachriel
November 17, 2014 at 02:45 PM PDT
Zachriel, maybe "oasis" instead of "island" is a better metaphor. Step in the direction of one grain of sand amidst the innumerable grains of hyperastronomical sand. Sure, the oasis is just a step away. One really small step. But which step?
ppolish
November 17, 2014 at 01:48 PM PDT
kairosfocus: "Z, Reality check." You didn't respond, but just repeated your claim. If we can walk between the purported islands without getting our feet wet, then they aren't islands — by definition.
Zachriel
November 17, 2014 at 06:48 AM PDT
franklin, see my response here.
Mung
November 16, 2014 at 10:23 PM PDT
Mung:
When do you plan to begin quoting from the book?
when do you plan on returning to our conversation to defend your assertions about hemoglobin? never? in case you've forgotten you abandoned your claims in this thread: https://uncommondescent.com/intelligent-design/denying-the-truth-is-not-the-same-as-not-knowing-it
franklin
November 16, 2014 at 09:29 PM PDT
keiths:
Natural Selection can preserve innovations, but it cannot create them.
This is from the book? Which page? not keiths:
Natural Selection can eliminate innovations, but it cannot create them.
This is from the book? Which page? If natural selection can preserve innovations, why can't it eliminate innovations?
Mung
November 16, 2014 at 09:07 PM PDT
keiths:
I will be quoting liberally from Andreas Wagner’s new book Arrival of the Fittest. I highly recommend this book to anyone involved in the ID debate, whether pro or con. You will be hearing about it again and again, so you need to understand its contents.
When do you plan to begin quoting from the book?
Mung
November 16, 2014 at 08:59 PM PDT
keiths:
You’re bluffing, KF. Selection makes all the difference in the world, and you know it. (Try running Weasel — the non-latching variety — without selection sometime. Make sure you have a few quintillion lifetimes to spare. You’ll need them.)
laughable. really. hilarious. pathetic. keiths appeals to the weasel algorithm, one in which the desired outcome is programmed in from the beginning, along with a "fitness function" that ensures the desired outcome. who is doing the bluffing here? Sure, if we know what we want we can devise an algorithm to get there. But that requires a pre-specified target and a designed fitness function. If that's what keiths means by "selection" making "all the difference in the world", I don't think any ID proponent would disagree.
Mung
November 16, 2014 at 08:53 PM PDT
keiths:
My friends at AtBC would never forgive me if I didn’t egg you on.
troll
Mung
November 16, 2014 at 08:25 PM PDT
kairosfocus:
KS: I draw your attention:
FSCO/I will naturally come in islands of function in much larger config spaces — because to get correctly arranged parts to work together to achieve a function there needs to be specific configs that follow a correct wiring diagram [with the vastly wider set of clumped or scattered but non functional configs excluded], where such plans are routinely produced by design . . .
I can back that up on billions of cases in point. Can you show why any old parts can be oriented any old how, put any old where and can be connected any which ways, and will readily achieve interactive complex function? Where we deal with at least 500 bits of complexity? Do you see why I am having a serious problem with your rejection of the commonplace fact that functionality dependent on interacting multiple parts depends crucially on how they are wired up?
KF, please read Arrival of the Fittest so that the rest of us no longer have to listen to your inane arguments about fishing reels and "islands of function". Your Designer is shivering in the cold. Let him retreat to the next gap, out of kindness if nothing else.
keith s
November 16, 2014 at 08:20 PM PDT
kairosfocus,
As for non-latching varieties of Weasel, KS you full well know that (a) the results published by Dawkins showed latching behaviour, (b) you full well know that by adjusting parameters latching will show up in many runs of supposed non latching reconstructions, where (c) quite conveniently the original code is nowhere to be found.
My friends at AtBC would never forgive me if I didn't egg you on. As you full well know or should know, latching is a red herring, led away from the path to truth and then carried away to strawman caricatures soaked in toxic oil of ad hominems and set alight to cloud, poison, polarise and confuse the atmosphere. Non-latching versions of Weasel work just fine, but remove selection and you'll be waiting quintillions of lifetimes for convergence. Your arguments fail utterly because they do not take selection into account. Please do better.
keith s
November 16, 2014 at 08:09 PM PDT
Ok. Metaphysics comes under philosophy. I understand why Dembski would say things like 'Matter is a Myth' in that book.
Me_Think
November 16, 2014 at 07:49 PM PDT
MeThink, Wagner imagines multiple libraries, metabolic etc. But even a metaphorical library contains information. Information first, a hyperastronomical library second. Dembski's book is classified on Amazon under "Logic and Language", currently #12 :) He himself refers to the book as metaphysical. His book describes the underpinnings of science. Would that be considered a science book?
ppolish
November 16, 2014 at 07:40 PM PDT
ppolish @ 99: You missed one basic point in the book. The library is a metaphor for the genotype network. Matter is a myth? The elements that make up everything are a myth? Is Dembski's book making a scientific argument or a philosophical argument?
Me_Think
November 16, 2014 at 07:17 PM PDT
