
Chance, Law, Agency or Other?


Suppose you come across this tree:

[Image: a tree grown into the shape of a chair]

You know nothing else about the tree other than what you can infer from a visual inspection.

Multiple Choice:

A.  The tree probably obtained this shape through chance.

B.  The tree probably obtained this shape through mechanical necessity.

C.  The tree probably obtained this shape through a combination of chance and mechanical necessity.

D.  The tree probably obtained this shape as the result of the purposeful efforts of an intelligent agent.

E.  Other.

Select your answer and give supporting reasons.
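Several commenters below invoke Dembski's explanatory filter, which formalizes this multiple choice. Here is a minimal sketch of its usual decision structure (a paraphrase, not Dembski's formal version; the 10^-150 universal probability bound is his, while the 0.5 cutoff for "law" is an illustrative assumption):

```python
# Minimal sketch of the explanatory-filter decision structure as usually
# described (a paraphrase): attribute an event to law, chance, or design,
# in that order. Thresholds here are assumptions for illustration.
def explanatory_filter(probability, specified, bound=1e-150):
    if probability >= 0.5:      # high probability: regularity / mechanical necessity
        return "law (mechanical necessity)"
    if probability > bound:     # intermediate probability
        return "chance"
    if specified:               # tiny probability AND an independent specification
        return "design"
    return "chance"

print(explanatory_filter(0.9, False))    # law
print(explanatory_filter(1e-3, False))   # chance
print(explanatory_filter(1e-200, True))  # design
```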

Comments
The “rendering” is designed but the exact contents of the texture are not, at least not at run time.
An intended result can be reached via algorithm when intelligence is involved. For example: http://www.mapzoneeditor.com/?PAGE=GALLERY.RENDERS Active information is involved. (I just emailed Bill to double-check on how to calculate this with regard to fractals, procedural textures, etc.) The same applies to GAs. Active information requires intelligence, based upon all known observations.
Of course, you could predict what it would be no doubt, but that’s not quite the same thing. Does creating a texture when you have no idea what it looks like count as “designing” it?
The difference is the incorporation of a generalized Specification. In the act of designing you will have a target in search space. This target can be very vague/generalized (large) or very specific (small). Most GAs are examples of the former, while Dawkins's Weasel program is the extreme of the latter.
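For concreteness, here is a minimal reconstruction of a Weasel-style search (my own sketch from the standard description, not Dawkins's code); the single fully specified target string is what makes the target in search space maximally small:

```python
import random

# Minimal Weasel-style cumulative selection (a reconstruction, not Dawkins's
# original code). Fitness compares candidates to one exact target string,
# which is what makes the "target in search space" maximally specific.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
gen = 0
while parent != TARGET:
    gen += 1
    # Keep the best of 100 mutated offspring each generation.
    parent = max((mutate(parent) for _ in range(100)), key=fitness)
print(f"reached target in {gen} generations")
```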
If so, how would you tell the difference between a procedural effect and a designed effect designed to look like a procedural effect?
Essentially what you're doing is rephrasing the old tired objection that Dawkins made about "apparent design". I see no need to rehash that one: http://www.google.com/search?hl=en&q=%22apparent+design%22+site%3Awww.uncommondescent.com&btnG=Google+Search
How could you tell your "design" from somebody else's "design" if neither of you knows what it looks like to begin with?
Comparison of the active information, perhaps?
So, the "feature" and "rendering" are designed, and the procedural texture is following rules that are designed, but the texture itself? If it was partly based on a random seed generated (as many are, or used to be anyway) from how long your PC has been powered on for, would you claim it was designed by you by virtue of you turning on the PC at a given moment?
A designed object can contain pseudo-random attributes. Darwinists are generally confused about this. But I'm willing to forgive this confusion since there are ID proponents who use poor arguments. For algorithms (GAs, fractals, whatever) it's not merely the act of writing the code itself that invalidates the example. The design is in how the search is funneled. Dembski calls this active information.
Patrick
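For reference, the bookkeeping behind "active information," assuming the Dembski/Marks definitions (the search probabilities below are invented for illustration):

```python
import math

# Sketch of active-information accounting, assuming the Dembski/Marks
# definitions: endogenous information = -log2(p) for a blind search with
# success probability p; active information = log2(q/p) when an assisted
# search succeeds with probability q. All numbers here are illustrative.
def endogenous_information(p):
    return -math.log2(p)

def active_information(p, q):
    return math.log2(q / p)

p = (1 / 27) ** 28   # blind uniform guessing of a 28-char string over 27 symbols
q = 1e-4             # hypothetical assisted search: succeeds 1 time in 10,000
print(f"endogenous: {endogenous_information(p):.1f} bits")
print(f"active:     {active_information(p, q):.1f} bits")
```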
May 28, 2008 at 03:40 PM PDT
Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed?
The "rendering" is designed but the exact contents of the texture are not, at least not at run time. Hence the utility of procedural texturing. Of course, you could predict what it would be no doubt, but that's not quite the same thing. Does creating a texture when you have no idea what it looks like count as "designing" it? If so, how would you tell the difference between a procedural effect and a designed effect designed to look like a procedural effect? How could you tell your "design" from somebody elses "design" if neither of you knows what it looks like to begin with. So, the "feature" and "rendering" is designed, and the procedural texture is following rules that are designed but the texture itself? If it was partly based on a random seed generated (as many are, or used to be anyway) from how long your PC has been powered on for would you claim it was designed by you by virture of you turning on the pc at a given moment?
It would certainly affect the calculation of informational bits (the systems generating the complexity are taken into account, along with the reduced amount of information necessary to represent the entire object), but I hope that makes it obvious how silly your objection is (i.e. the presence of fractals does not equate to the EF always returning a false).
I'm afraid that I did not fully understand the first part. As to my objection, in fact it is the objection of many others too, including gpuccio (which is why my comment was aimed at him, but I'm glad you answered).
I wouldn’t be surprised if DNA uses recursive mathematics for generating its complexity.
What do you mean? I've just shown you a picture of DNA "using" recursive mathematics.
Plants do this for their structure at a macro level, although this is the first time I’ve seen such a pattern on a plant.
Can't be many vegetarians around them there parts! :)
Also, fractals can be used for data compression, so why couldn’t there be fractal compression of hereditary information?
Nobody was implying they could not.
Mavis Riley
May 28, 2008 at 02:04 PM PDT
Mavis,

Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed? It would certainly affect the calculation of informational bits (the systems generating the complexity are taken into account, along with the reduced amount of information necessary to represent the entire object), but I hope that makes it obvious how silly your objection is (i.e. the presence of fractals does not equate to the EF always returning a false).

I wouldn't be surprised if DNA uses recursive mathematics for generating its complexity. Plants do this for their structure at a macro level, although this is the first time I've seen such a pattern on a plant. Also, fractals can be used for data compression, so why couldn't there be fractal compression of hereditary information?

And, yes, yes...you're probably trying to lead the conversation to attempt to make the old argument that the OOL found its source in self-organization, fractals, whatever... Gil's previous thoughts on this subject:
Recursion (self-referential algorithms and self-calling functions) is an extremely powerful tool in computer science. The AI (artificial intelligence) techniques used in chess- and checkers-playing computer programs are based upon this concept. This is the basis of what we call a "tree search." The immune system apparently uses a search/trial-and-error technique in order to devise antibodies to pathogens. The immune system also maintains a database of previously-seen pathogenic agents and how to defeat them. This is what immunization is all about. As an ID research proposal I would suggest pursuing what we have learned from AI research to see if human-designed algorithms are reflected in biology, especially when it comes to the immune system:

1) Iterative Deepening: Make initial, shallow, inexpensive searches, and increase the depth and expense of the searches iteratively.

2) Investigative Ordering: Order results from 1) to waste as little time as possible during deeper searches.

3) Maintain short-term memory to rapidly access solutions to the most-recently-seen problems. (We use RAM-based hash tables in chess and checkers programs for this purpose.)

4) Maintain long-term memory for catastrophic themes that tend to recur on a regular basis. (We use non-volatile, disk-based endgame databases in chess and checkers programs for this purpose.)
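A minimal sketch of points 1) and 3) on that list (my own toy illustration, not Gil's code or any real engine): iterative deepening on a trivial game tree, with a dictionary standing in for the RAM-based hash table:

```python
# Toy game: from position n, move to n+1 or 2*n; the game ends at 10 or more.
def moves(n):
    return [] if n >= 10 else [n + 1, 2 * n]

def negamax(n, depth, cache):
    key = (n, depth)
    if key in cache:                 # 3) short-term memory (hash table)
        return cache[key]
    if depth == 0 or not moves(n):
        score = n                    # trivial evaluation of a leaf position
    else:
        # Negamax: my best move minimizes the opponent's best reply.
        score = max(-negamax(m, depth - 1, cache) for m in moves(n))
    cache[key] = score
    return score

def iterative_deepening(n, max_depth):
    cache, best = {}, None
    for depth in range(1, max_depth + 1):   # 1) shallow, cheap searches first
        best = negamax(n, depth, cache)
    return best

print(iterative_deepening(1, 6))
```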
Patrick
May 28, 2008 at 01:23 PM PDT
gpuccio,

Now that my comments are appearing, please take note of this: http://www.fourmilab.ch/images/Romanesco/ Fractal food - naturally occurring fractals. Romanesco, no less!

So, this leaves me with a question. If fractals will fail to be noted as "designed" by the EF, as several have pointed out already, what does the EF make of a biological organism (presumably designed) that expresses a fractal as its physical form? Designed? Not? Can it tell?
Mavis Riley
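One background point for the fractal question (an illustrative sketch only, no claim about Romanesco's actual genetics): a fractal pattern can be produced by a designed rule far smaller than the pattern itself, which is also why fractals are useful for compression:

```python
# A tiny designed rule with fractal output: the Sierpinski triangle emerges
# from one line of logic, so very little stored information specifies a
# complex-looking pattern.
N = 16
for y in range(N):
    print("".join("*" if (x & y) == 0 else " " for x in range(N)))
```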
May 28, 2008 at 12:37 PM PDT
I vaguely remember reading those articles, but I recall them being trivial. Behe spoke of such short indirect pathways being feasible years ago. Kinda like the "devastating examples" of irreducibly complex structures being formed...comprised of 2-3 components. But what do you expect...we've been asking Darwinists for evidence of macroevolutionary events for years, and all they can do is showcase trivial examples and hysterically assert that mechanisms with 100 times the complexity can occur in just the same manner. How many times must ID proponents repeat themselves, stating that such modifications of CSI should be fully possible without intelligence? We are EXPECTING to find such examples, for heaven's sake! Anyway, enough of me; here are Behe's thoughts on that paper:
“The evolutionary puzzle becomes more complex at a higher level of cellular organization.” No kidding.

The January 25th issue of Nature carries a “Progress” paper by Poelwijk et al that’s touted on the cover as “Plugging Darwin’s Gaps,” and cited by its authors as addressing concerns raised by proponents of intelligent design. The gist of the paper is that some amino acid residues of several proteins can be altered in the lab to produce proteins with properties slightly different from those they started with. A major example the authors cite is the work of Bridgham et al (2006) altering hormone receptors, which I blogged on last year. That very modest paper was puffed not only in Science, but in the New York Times, too. It seems some scientists have discovered that one way to hype otherwise-lackluster work is to claim that it discredits ID.

Quite unsurprisingly, the current paper shows that microevolution can happen. Small changes in a protein may not destroy its activity. If you start out with a protein that does something, such as bind DNA or a hormone, it’s not surprising that you can sometimes find a sequence of changes that can allow the protein to do something closely similar, such as bind a second sequence of DNA or a second, structurally-similar hormone.

My general reaction to breathless papers like this is that they vastly oversimplify the problems evolution faces. Consider a very rugged evolutionary landscape. Imagine peaks big and small all packed closely together. It would of course be very difficult for a cell or organism to traverse such a landscape. Now, however, suppose an investigator focuses his gaze on just one peak of the rugged landscape and myopically performs experiments whose products lie very close to the peak. In that case the investigator is artificially reducing what in reality is a very rugged landscape to one that looks rather smooth. The results tell us very little about the ability of random processes to traverse the larger, rugged landscape.

The authors remark, “The evolutionary puzzle becomes more complex at a higher level of cellular organization.” No kidding. Nonetheless, they, like most Darwinists, assume that larger changes involving more components are simple extrapolations of smaller changes. A good reason to be extremely skeptical of that is the work of Richard Lenski, which they cite. Lenski and his collaborators have grown E. coli in his lab for tens of thousands of generations, in a cumulative population size of trillions of cells, and they have seen no building of new systems, just isolated mutations in various genes. Apparently, nature has a much more difficult time putting together new systems than do human investigators in a lab.
Here's a relevant discussion about how minor stepwise pathways are viable but run into problems when several major concurrent changes must occur: http://www.overwhelmingevidence.com/oe/node/381 That pretty much covers my thoughts on this subject. Darwinists just need to be repeatedly banged over the head with basic engineering concepts (the problem as outlined in comment #128) until they get it.
Patrick
May 28, 2008 at 11:56 AM PDT
gpuccio, Perhaps the path is fractal and as such does not have a length as you understand it.
Mavis Riley
May 28, 2008 at 11:50 AM PDT
Bob O'H: Thank you for the one word. I am still waiting for the summary. Could you please at least tell us how short that path is, and which proteins are involved? Just curious...
gpuccio
May 28, 2008 at 11:27 AM PDT
Equivocation is also wonderful, especially if you are an evolutionist. Too bad evidence for "evolution" is not evidence for non-telic processes. Gene duplications? Then it is also required to duplicate all the regulatory elements that accompany gene activation. And even then, if the gene's product doesn't have a specified place to go, then it is just a waste of energy for the cell to manufacture something it doesn't need.
Joseph
May 28, 2008 at 11:06 AM PDT
In a word, yes. Isn't science wonderful? :-)
Bob O'H
May 28, 2008 at 10:50 AM PDT
Bob O'H: Unfortunately, I do not have access to that article. Could you please sum it up for us? At the risk of speaking of what I have not read, I would anyway like to remark that we are not only looking for a pathway which is "logically possible", but for one which is "empirically possible", and for the real functional advantages which make it selectable step by step. Does the article show examples of such pathways through single-event mutations for real proteins?
gpuccio
May 28, 2008 at 10:33 AM PDT
gpuccio - you too should read the articles I linked to. The first falsifies your claim that step-wise paths through sequence space are impossible. You could reply that it only looks at a short path, which would be correct. But I'd still like to see a better argument that longer paths can't be traversed than the argument from incredulity you have at the moment.
Bob O'H
May 28, 2008 at 09:58 AM PDT
M Caldwell: "If perhaps we allow, for argument’s sake, this unwarranted assumption, might we not end up with an unbroken chain of unwarranted assumptions from Mr O’H?" Well, let's go step by step. In intelligent discussion between intelligent agents, that's a perfectly natural pathway... :-)gpuccio
May 28, 2008 at 02:03 AM PDT
M Caldwell: I suppose that common descent is assumed, at least as a hypothesis, in the discussion between F2XL and Bob O'H. I was just arguing that Bob O'H's assumption that a step-by-step functional and selectable pathway of mutations exists is not warranted, neither logically nor empirically, even under the assumption of common descent.
gpuccio
May 28, 2008 at 12:43 AM PDT
Bob O'H (#134): Excuse the brief intrusion, but I want to comment on a couple of your affirmations:

"That would seem a reasonable evolutionary explanation"

Yes, it's a pity that it is simply impossible. There is no reason in the world, neither logical nor empirical, that functional sequences of proteins can be derived step by step, passing through increasingly functional intermediates. Indeed, that's a really silly idea, considering all that we know, both of information in general and of protein function in particular. Regarding information, it would be like affirming that any meaningful sentence of, say, 150 characters can be obtained from another, different one by successive changes of one character, always retaining meaning (indeed, increasingly meaningful meaning). That's obviously ridiculous for sentences, as it is for computer programs, and as it is especially for proteins, whose function depends critically on complex, and as yet largely unpredictable even to us, biochemical interactions and 3D folding.

Just for curiosity, could you jump a moment to the thread about the de novo protein gene, and explain with a model why, in that evolutionary scenario (suggested by perfectly serious darwinian researchers), about 350 nucleotides would have changed to give a completely new protein gene, while we have absolutely no trace of the supposed 350 step-by-step increasingly functional intermediates, and all the related species retain the other 128 nucleotides, whose function remains a mystery? Or take any other protein you like, or with which you are more comfortable, and show us any proposed (not necessarily demonstrated) pathway of step-by-step mutations which harvests a new, different function, passing through a number (let's say at least 30) of successive intermediates, each selectable for its increase in fitness. Oh, and you can fix the order of mutations as you like. And you can use all the side activities you like (provided you motivate them, either theoretically or empirically).

You say: "No. You're presenting your model, based on your assumptions. Give us evidence that your assumptions are reasonable. Let's not get side-tracked."

Frankly, I think you are not fair here. F2XL is presenting his model, and his assumptions about the mutations necessary are perfectly natural. You objected with a counter-model: that each successive mutation can be selected for a benefit. That counter-model is not natural at all. Indeed, it appears absolutely artificial, and almost certainly wrong, as I have tried to argue at the previous point. So, I really think it is you who have the burden to show that your counter-model is even barely credible, if you want to keep it as an objection. Apply it, even hypothetically, to the object discussed, the flagellum, and show us why it should be reasonable to believe that for each protein of it there is a functional step-by-step path from a previously existing protein, with the related functions (starting, intermediate, final), and especially the general balance of functions (we are talking of a complex of proteins, after all). Oh, if possible with an eye to regulatory problems (relative rates of transcription, post-transcriptional maturation, assemblage of the different parts, localization, etc...).
gpuccio
May 28, 2008 at 12:05 AM PDT
Sorry, I don’t see why it’s obvious. I can’t see why there cannot be a path through sequence space that would give an increase in fitness at every step. What evidence do you have for this? So you’re saying that at every point mutation there will be a benefit?
That would seem a reasonable evolutionary explanation
Give me what you think is a realistic pathway for an E. coli population to obtain a flagellum.
No. You're presenting your model, based on your assumptions. Give us evidence that your assumptions are reasonable. Let's not get side-tracked. Oh, and arguments from incredulity or ignorance won't cut it.
and gene products can have more than one function, so after duplication this side activity could become selectively more important (e.g. http://dx.doi.org/10.1073/pnas.0707158104. Sorry, the second anchor tag is screwing things up). To which selection says, “Hey, this gene by itself after obtaining many various errors JUST SO HAPPENED to obtain a new function,
Read the paper I linked to, please. Or, if you don't want to (and there's no compulsion), don't try and make criticisms of something you haven't read.
Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Agree with you so far. After all in this case the order in which the mutations happen doesn’t matter at all.
In reality, it might make a difference, of course, but assuming the order is irrelevant is a decent first approximation.
... (I have no idea why we are talking about the order of events; the order in which mutations occur doesn't matter in this scenario). ...
Are you even aware that your calculation assumed a set order? Look at how you set up the calculation - you took the product of the probabilities. This assumes a fixed order, as I showed above. You're right that from the way you set up the problem, the order shouldn't matter. So you have to make sure that in the maths it doesn't.
Bob O'H
May 27, 2008 at 10:21 PM PDT
Bob O'H, can you explain to me in detail what it is you think I was trying to calculate?
F2XL
May 27, 2008 at 09:39 PM PDT
Sorry, I don't see why it's obvious. I can't see why there cannot be a path through sequence space that would give an increase in fitness at every step. What evidence do you have for this?

So you're saying that at every point mutation there will be a benefit? Give me what you think is a realistic pathway for an E. coli population to obtain a flagellum.

There is evidence that fitness landscapes can be traversed (e.g. this review),

Apply the findings in this article to the flagellum, please.

and gene products can have more than one function, so after duplication this side activity could become selectively more important (e.g. http://dx.doi.org/10.1073/pnas.0707158104. Sorry, the second anchor tag is screwing things up).

To which selection says, "Hey, this gene by itself after obtaining many various errors JUST SO HAPPENED to obtain a new function, so rather than risking any loss of functional advantage we will just keep the gene as it is in terms of what it's functioning as, since we're blind and cannot see into the future what genes could eventually become a structure coded for by 49,000 base pairs."

Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant.

Agree with you so far. After all, in this case the order in which the mutations happen doesn't matter at all.

Suppose they are independent with probabilities p_A, p_B, p_C. There are six ways in which this could occur: ABC, ACB, BAC, BCA, CAB, CBA. For the first, the probability is P(A).P(B|A).P(C|A,B) = p_A.p_B.p_C. For the second, the probability is P(A).P(C|A).P(B|A,C) = p_A.p_C.p_B, etc. The total probability is thus 6 p_A.p_B.p_C.

I think a far simpler way to put it would be to use a hypothetical set of index cards, each numbered 1-10. Now suppose you were in a situation where the orders in which they can come in was what you were trying to calculate (I have no idea why we are talking about the order of events; the order in which mutations occur doesn't matter in this scenario). What you would do is take the number of options (in this case you have ten options, index cards numbered 1-10). If I had to determine the number of possible orders that they could all go in if I were randomly shuffling them, I would take the highest number (10) and multiply that by all preceding numbers. As a result, the calculation would look something like this: 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 3,628,800. If I have 3 options, then it would look a lot like your example: 3 x 2 x 1 = 6 total combinations.

The difference from the coin toss is that in a coin toss, the first toss has to come first (!), so the order can't be permuted.

What does the order of coin tosses (in fact, what does the order of anything) have to do with what I was doing? BTW, different coins can be tossed in different orders, just like mutations can happen in different orders (though with the mutations I was considering it does not matter what order they come in).

But you haven't shown why permutation isn't possible for the mutations.

Because it's not relevant.
F2XL
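The card-shuffle arithmetic can be verified directly (a sketch added for reference):

```python
import math
from itertools import permutations

# n distinct cards can be ordered in n! ways: 10 x 9 x ... x 1.
print(math.factorial(10))              # 3628800, as computed above
print(len(list(permutations("ABC"))))  # 6: ABC ACB BAC BCA CAB CBA
```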
May 27, 2008 at 09:18 PM PDT
Must? Can you show us the empirical evidence for this statement? I think it’s kind of obvious. I’ll give a few examples to illustrate the point further, along with a link that gives some idea of what even Matzke’s pathway would require (by his own admission).
Sorry, I don't see why it's obvious. I can't see why there cannot be a path through sequence space that would give an increase in fitness at every step. What evidence do you have for this? There is evidence that fitness landscapes can be traversed (e.g. this review), and gene products can have more than one function, so after duplication this side activity could become selectively more important (e.g. http://dx.doi.org/10.1073/pnas.0707158104. Sorry, the second anchor tag is screwing things up).
…and they had to occur in the order specified. Time for a quick math lesson. :D
And now the maths lesson. Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Suppose they are independent with probabilities p_A, p_B, p_C. There are six ways in which this could occur: ABC, ACB, BAC, BCA, CAB, CBA. For the first, the probability is P(A).P(B|A).P(C|A,B) = p_A.p_B.p_C. For the second, the probability is P(A).P(C|A).P(B|A,C) = p_A.p_C.p_B, etc. The total probability is thus 6 p_A.p_B.p_C. The difference from the coin toss is that in a coin toss, the first toss has to come first (!), so the order can't be permuted. But you haven't shown why permutation isn't possible for the mutations.
Bob O'H
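A quick check of this enumeration (a sketch with arbitrary illustrative probabilities): summing the same product over all 3! orderings gives the factor of six.

```python
from itertools import permutations

# Each of the 3! = 6 orderings of independent events A, B, C contributes the
# same product, so the total over orderings is 6 * p_A * p_B * p_C.
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}

by_enumeration = sum(p[x] * p[y] * p[z] for x, y, z in permutations("ABC"))
print(by_enumeration)                 # ~6e-09
print(6 * p["A"] * p["B"] * p["C"])   # same value
```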
May 26, 2008 at 11:53 PM PDT
Let's say the parts fulfill every requirement except for interfacing.

I forgot to add something in that paragraph, and that would be the issue that Matzke concedes must be crossed for his pathway to work. Just add this to the paragraph I quoted from for better understanding. :) Also, towards the end of comment #117 I made the following quote:

But now we have our probabilistic resources to take into account.

I won't move on to that step until a consensus is reached on what I've done so far.
F2XL
May 26, 2008 at 06:23 PM PDT
Must? Can you show us the empirical evidence for this statement?

I think it's kind of obvious. I'll give a few examples to illustrate the point further, along with a link that gives some idea of what even Matzke's pathway would require (by his own admission). Suppose a part switches location to where the flagellum is to be built, but its other respective homologs do not make the switch. Selection has nothing different to act on, so this change would probably be neutral (or harmful, but we'll set that aside) UNLESS other parts have made the switch as well.

......Ok, maybe this is a better way to put it. Recall the list of things that must happen with each part before selection can do anything (from comment #116). If all of the parts that are needed to produce a flagellum fulfill all five of the criteria except #2, then selection cannot preserve that "progress"; you just have a pile of protein parts that don't really have any conceivable way of benefiting the cell, not until AFTER they've ALL fulfilled criterion #2.

Suppose that all the parts fulfill every criterion except they aren't localized in the same area. Again, selection has nothing to act upon in order to preserve the progress so far, because you basically have the parts to a flagellum that would fill that job just fine, but they are scattered all throughout the cell; selection doesn't have the "foresight" to realize that the parts are all optimized to become such a structure, and it is thus rendered powerless.

Let's say the parts fulfill every requirement except for interfacing. In that case you would have the parts all there in the same location, and the right order of assemblage with functions ready to go, but the parts aren't optimized to fit together. It would be like trying to build a motor out of parts which are from all sorts of various vehicles, from an airplane to a Humvee to a nuclear submarine. The parts (or proteins) would not interface to give any functional advantage to the vehicle (or cell) that will be using the motor (or flagellum). Again, selection is thus far rendered powerless.

The number is probably much, much greater (probably several times higher), but I assumed that there were only 490 base pairs that needed to be changed before all 42 of the homologs would fulfill all 5 of the criteria I listed in comment #116 (therefore allowing selection to actually preserve something). 490 base pairs accounts for only 1% of the total amount of base pairs in a typical 35-gene flagellum, and constitutes only a little over a hundredth of a percent of the entire genome in E. coli. But it's certainly the biggest hundredth of a percent you will ever find in biology.

No, you would if there were only 490 mutations in total...

Which there are (at least in terms of what selection can't have any effect on unless they've all made their respective changes).

...and they had to occur in the order specified. Time for a quick math lesson. :D

While it's true that they don't have to occur in any particular order, that's irrelevant to what I calculated. Suppose an event (which we will denote with X) must also occur with event Y (though it's not necessary that they happen at the same time). Both events being independent of each other would have their probabilities multiplied. To use coin tosses as an analogy, suppose I wanted to calculate the odds that I will flip 5 heads in a row with five separate coins (or the same coin). With the odds of each coin (assuming they are fair) being respective to the individual coins themselves (as with mutations), you would take the odds of each and multiply them to figure out what the odds are that you will get all heads (2 to the 5th power, or one chance in 32). The same goes for the mutations. With the odds of each mutation being on the order of 10 to the negative 2,830,000th power (4 to the 4.7 millionth power: 4 base pair outcomes and 4.7 million places they can go), you would take that and multiply it by itself 490 times, just as you would with the odds of getting heads on each coin flip.

A glance at the now infamous XVIVO video (esp the version with the voiceover . . .) will at once make plain that Denton long since put his finger on the issue: we see codes, algorithms, implementation machinery, all in a self-assembling and significantly self-directing whole.

Just finished reading his book yesterday, and it's not hard to see why it inspired Behe so much. And "The Inner Life of the Cell" certainly puts Denton's words into context. :)
F2XL
May 26, 2008 at 03:51 PM PDT
PS: While waiting on TMLO... The already mentioned XVIVO video is a good enough example, as it aptly illustrates the machinery of the white blood cell, and the algorithms and codes are of course in the DNA & RNA etc., of which sequences of execution to make proteins are shown. How that is done is a commonplace: codes, algorithms, executing machines. [Cf the machine code/architecture view of a digital computer.]
kairosfocus
May 26, 2008 at 04:04 AM PDT
codes, algorithms, implementation machinery
Could you give me a few examples of each of those things please, as instantiated in the human body?
Mavis Riley
May 26, 2008 at 03:42 AM PDT
Mavis,

With a 4-state element, chained n times, the config space is 4^n. To calculate, do log10[4] and multiply by n. Subtract the whole-number part and extract the antilog of the remaining fractional part. So far the config spaces F2 has estimated look about right.

I do not claim any expertise beyond the merits on the facts and related logico-mathematical and factual reasoning. [And, in an earlier thread, I gave links and remarks on how Wm A D estimates CSI. I prefer to look at the vector of values, as the disaggregation into vector elements tells important things. Bits, after all, are a measure of information storage capacity, not significance.]

I will withhold final estimation of the worth of F2's work till he finishes; save that so far he seems to be on an interesting, more detailed track than I am wont to take up. F2's work is rather like taking a 20 lb sledge to a walnut. A glance at the now infamous XVIVO video (esp the version with the voiceover . . .) will at once make plain that Denton long since put his finger on the issue: we see codes, algorithms, implementation machinery, all in a self-assembling and significantly self-directing whole. Such is long since in the class of entities known to be originated in intelligence. And a part of that is the fact that the config space so isolates islands of functionality that search resources uninformed by intelligent, active information [which is also quantified] are credibly fruitless. Cf my microjets case in App 1 of the always linked, in thermodynamics context.

Trust that helps. GEM of TKI
kairosfocus
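The log10 recipe in the first paragraph can be run directly (a sketch; 4.7 million is the genome size used elsewhere in the thread):

```python
import math

# Config space of a 4-state element chained n times is 4^n. Work in log10,
# since 4^n overflows floating point for large n:
# exponent = whole part of n*log10(4), mantissa = antilog of the fraction.
def config_space(n):
    log10_total = n * math.log10(4)
    exponent = int(log10_total)
    mantissa = 10 ** (log10_total - exponent)
    return mantissa, exponent

m, e = config_space(4_700_000)
print(f"4^4,700,000 ~ {m:.2f} x 10^{e}")  # ~9.1 x 10^2829681
```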
May 26, 2008 at 03:40 AM PDT
Kairosfocus, I know you are one of the resident experts on the EF etc. Do you agree with F2XL's math so far?
Mavis Riley
May 26, 2008 at 02:48 AM PDT
F2XL: Keep going, fascinating to watch. GEM of TKI
kairosfocus
May 26, 2008 at 02:46 AM PDT
F2XL - Will you be publishing this work in one of the ID journals?
Mavis Riley
May 26, 2008 at 02:24 AM PDT
Dawkins (excuse my language) says somewhere that each change must arise sequentially in an unbroken chain of viable organisms. And this is somehow supposed to add credibility to random evolution! Laughable!
You would hardly expect changes that resulted in an unviable organism to be passed on, would you? Why laughable? I don't think it was intended to "add credibility"; it seems to me a statement of the obvious. And anyway, is that you Minnie? EEH, it's been some time, eh girl? I don't really know how long it's been, eh? You must have left the street 20 years ago now, don't time fly! Fancy a pint in the Rovers later, Minnie?
Mavis Riley
May 26, 2008 at 02:07 AM PDT
So with the odds of each base pair changing to the right combination being independent of each other (if not then please explain why) you would take the original odds and take those to the 490th power (the odds for each base pair would be both the same and independent).
No, you would if there were only 490 mutations in total, and they had to occur in the order specified.
Bob O'H
May 26, 2008 at 12:30 AM PDT
For the homologs to proceed as a flagellum, several things must change at once for selection to preserve them.
Must? Can you show us the empirical evidence for this statement?
Bob O'H
May 25, 2008 at 10:31 PM PDT
With that 5% gap to cross, you would be looking at 2,450 base pairs that need to be changed, out of 4.7 million in the entire genome. What the changes could be categorized as is elaborated on in my previous comment. For the homologs to proceed as a flagellum, several things must change at once for selection to preserve them. If you proceed to have a part change location, then selection can't really do anything to preserve that change throughout the vast majority of the population unless all other parts have made the same change, in the right order, are made to be mutually compatible, have their functions switched, etc. This applies to each and every homolog. Suppose a part changes location (thanks to getting the right sets of base pairs to change), but the information that tells what order of assemblage the part will go in (in that location) isn't present. In that case it's likely to just do harm (in our experiment, though, we'll just say the effects are neutral :)), so selection can't help you there.

While I think it's reasonable to say that all 2,450 of the base pairs (an extremely conservative estimate) that go from homologs to the actual flagellum would be neutral by this standard, I will be extremely, unrealistically hopeful and assume that only 1% of the total base pairs that code for a flagellum are neutral when they must be implemented. Selection will take care of the rest. That means that 490 base pairs must be changed over the course of an entire line of descent before you actually reach the level of change needed for a flagellum to appear in an E. coli.

So with our 4.7 million base pair genome, let's see what the odds are that we would be able to make any particular base pair change to the right nucleotides.

1. 4 options... (e.g. AT, TA, CG, GC)

2. 4.7 million places...

So what you would do is take four to the 4.7 millionth power (feel free to ask why if I haven't made it clear enough). The result? (One chance in) 10 to the 2,830,000th power. And that's just for a single point mutation (this could represent a start signal, for instance). What we're looking to cross is a 490 base pair change over the course of all living history. So with the odds of each base pair changing to the right combination being independent of each other (if not, then please explain why), you would take the original odds and take those to the 490th power (the odds for each base pair would be both the same and independent). Our semi-final result for having the neutral gaps crossed is one chance in 10 to the 1,386,700,000th power. But now we have our probabilistic resources to take into account.
F2XL
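The arithmetic above can be reproduced in log space (a sketch that checks the numbers as stated, not the underlying biological model):

```python
import math

# Odds of one specified point change, as set up above: 1 in 4^4,700,000.
log10_single = -(4_700_000 * math.log10(4))
print(round(log10_single))      # -2829682, rounded in the text to -2,830,000

# Multiplying the (rounded) odds by themselves 490 times adds exponents:
log10_all_490 = 490 * (-2_830_000)
print(log10_all_490)            # -1386700000, i.e. 10^-1,386,700,000
```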
May 25, 2008 at 04:19 PM PDT
