
Evolution and the NFL theorems


Ronald Meester, Department of Mathematics, VU University Amsterdam. CLICK HERE FOR THE PAPER.

“William Dembski (2002) claimed that the No Free Lunch theorems from optimization theory render Darwinian biological evolution impossible. I argue that the NFL theorems should be interpreted not in the sense that the models can be used to draw any conclusion about the real biological evolution (and certainly not about any design inference), but in the sense that it allows us to interpret computer simulations of evolutionary processes. I will argue that we learn very little, if anything at all, about biological evolution from simulations. This position is in stark contrast with certain claims in the literature.”

This paper is wonderful! Will it be published? It vindicates what Prof Dembski has been saying all along, whilst sounding as though it does not.
 
“This does not imply that I defend ID in any way; I would like to emphasise this from the outset.”
 
I love the main quote; it is a gem!

“I will argue now that simulations of evolutionary processes only demonstrate good programming skills – not much more. In particular, simulations add very little, if anything at all, to our understanding of “real” evolutionary processes.”

“If one wants to argue that there need not be any design in nature, then it is hardly convincing that one argues by showing how a well-designed algorithm behaves as real life is supposed to do.”

Comments
"Unfortunately I've not found this point. Could you please restate roughly what the argument is?"
Unfortunately, the deleted comment is not in Google cache either... The impression I got was that S. believed that, by their very nature, algorithms carried out by naturally occurring processes should perform better than software-based programs. I find this assertion odd since, to my mind, the constraints of nature are either too wide or too narrow, and not at all balanced like a well-designed GA. It should be a rather rare event that an environment provides a balance and the variation provides functionally positive mutations. So on balance I would expect nature to perform worse than even a poorly designed GA. I was hoping to ask for his justification for his assertion. Patrick
January 11, 2008 at 01:51 PM PDT
#239 Kairosfocus: "First, any thoughts on using pi-250 as a useful model complete with hill-climbing?" A possible criticism could point to the choice of the precision at which the first hit could yield the start of hill-climbing. Somebody could say: OK, we cannot add a formula for Pi to the algorithm, but here we are in the mathematical world and we aren't constrained by measurement precision (as is the case in a real-world example); so we have a fitness function that does tell us how "good" the hit is. Why not start from whichever point and use hill-climbing without any constraint on the precision? So the example is OK if we state explicitly that that precision is due to the use of "real world" fitness functions, for example the direct measurement on a circle. With this addition it's an interesting example that is somewhat similar to what I meant (for real-world examples). I would suggest that the example could be made less bound to our specific mathematical notation by using the binary representation of Pi directly instead of the BCD one. This would mean searching in a solution space with S={0,1}. It's also a good example because, as you have already stated, computation of Pi can be expressed in a very short way (i.e. with very high specificity), for example by providing the code for computing the Gregory-Leibniz series: Pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...). "GEM are my initials, the ... TKI is the short form of my consultancy personality and organisation -- I am involved in a loose regional network. The Kairos Initiative." I beg your pardon; I didn't understand. kairos
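As a rough illustration of the point about how briefly Pi can be specified, here is a minimal Python sketch (not part of the original comment) of the Gregory-Leibniz series and a few bits of the binary expansion of the result; the term count, the number of bits shown, and the helper names are arbitrary choices for illustration only.

# Minimal sketch: approximate Pi with the Gregory-Leibniz series
# Pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...), then print a few bits of the
# fractional part.  The series converges very slowly; this is only
# meant to show how short the specification is.
def gregory_leibniz(n_terms):
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

approx = gregory_leibniz(1_000_000)
print(approx)  # roughly 3.141592...

frac = approx - 3  # fractional part, to expand in base 2
bits = []
for _ in range(16):
    frac *= 2
    bit = int(frac)
    bits.append(str(bit))
    frac -= bit
print("".join(bits))  # leading bits of the binary expansion of Pi - 3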
January 10, 2008 at 12:30 AM PDT
Hi Kairos [and Patrick]: First, any thoughts on using pi-250 as a useful model complete with hill-climbing? [BTW, the fact that the coin can land on binary codes that BCD does not use both brings in points that are binarily very close to functional points but are not themselves functional, and brings in a non-uniformity in the bit patterns, i.e. not all of the set from 0000 to 1111 is used. That means that the 1's and 0's will not express the same amount of information!] Thanks GEM of TKI PS: Kairos, GEM are my initials, the meaning of which is easily enough accessed through the always linked (and links to you above that Semiotic unfortunately decided to abuse); indeed, in hand-drawn stylised form it is a form of my initials-style signature. TKI is the short form of my consultancy personality and organisation -- I am involved in a loose regional network. The Kairos Initiative. kairosfocus
January 9, 2008 at 07:08 PM PDT
#235 Patrick: "Now Semiotic made an 'interesting' claim that no one jumped on (unfortunately, it appears it was deleted since it was part of an offending comment). He briefly mentioned how the (presumably) software-based programs that generate information would exceed the UPB (duh) then he claimed that an algorithm furthered by natural processes should be expected to perform better (or something to that effect)." Unfortunately I've not found this point. Could you please restate roughly what the argument is? I know that in the past some critics did claim that code generation (obviously with gene duplication in mind) would be an easy way to increase CSI. Was this S.'s argument? In that case this would simply show a very typical misunderstanding of what the CSI concept really means. PS for Kairosfocus: Please excuse my ignorance, but what does GEM of TKI stand for? kairos
January 9, 2008 at 02:13 PM PDT
Patrick: I too am sorry to see the conversation end as it did. I wish it had not, even though I had to complain of tort, and before that had to point out through the hostile witness, Wiki, that there was more to the story than we were being given by the ones tightly focussed on whether NFLT strictly holds in relevant real-world contexts. To put a similar case, at a very crude level, pi is not strictly speaking equal to 22/7, but that is often "good enough for government work." Similarly, NFLT probably does not hold strictly in the sort of situation we were facing, but it is probably true that no "blind" algorithm will do significantly better than an arbitrary "pick a config at random" in finding the FUNCTIONALLY SPECIFIED DNA configs of life [much less the other components and organisation of a cell], starting from any plausible or generous pre-biotic soup. That is why I said in 184 above that PaV put his finger on it in his comment at 175:
If we mentally try to visualize what's going on, we can look down on a sea of two-dimensional space. At each location, that is, each point [I would say cell -- this is a discrete space!], of this two-dimensional space we find a permutation of a 3,000,000,000 long genome. As we look down onto this 2D space, these 100 trillion "high fitness" genomes, along with each of their trillion "high fitness" permutations, are randomly dispersed on this plane. What we're going to do is to "pull together" all of these trillion "high fitness" permutations to form a cluster. (After all, they're 'independent' of one another.) We end up with 100 trillion clusters, each consisting of one trillion permutations. We could have, admittedly, "clustered" all 10^25 (100 trillion x one trillion) together. But, if we were to do a blind search for just that one cluster, it would be much harder to find than having 100 trillion "clusters" (of a trillion permutations) throughout the space of all possible genomes.

Now in this configuration of genome space we have "clustering"; in fact, we have it to a staggering degree: viz., one trillion viable permutations per genome. So, [per the model just proposed] if the human genome were to experience a mutation anywhere along its length, the likelihood of it not being viable would be 1 in a trillion. So, again, we have the space of all possible genomes within which are to be found, randomly (again, giving the best possibility of being found by search), 100 trillion "clusters" of a trillion permutations. Once we've pulled all these permutations together and formed 100 trillion "clusters" of a trillion permutations each, then the space, G, of all possible genomes is smaller by roughly 10^25 genomes. But 10^25 represents 1/4,000,000,000 of G, leaving G essentially unaffected in size.

Now, what we have left is a uniform distribution of size 10^1,000,000,000 among which are to be found generously realistic "clusters" of genomes for every living being imaginable. The odds of hitting the target, that is, any one of the 100 trillion "clusters" of genome permutations, through blind search is 10^25/10^1,000,000,000 = 1 in 10^4,000,000. You can't argue that the "clustering" I propose has in any significant way changed the uniform distribution of G, the space of all possible genomes. Nature must navigate this way using, per Haggstrom, Darwin's algorithm A (reproduction-mutation-selection) to find its way through this uniform distribution. But since it is a uniform distribution, we know that it's no better than 'blind search', and we know that G is too vast for blind search to work. This is where the Explanatory Filter, that Dembski describes, would tell us that since randomness cannot explain the "discovery" of living genomes, then design is involved.
However, on generation of "information" in one sense, that is ever so easy: flip a coin 1,000 times and you have a sequence that is unique to one part in 2^1,000. That is, it is complex in the sense of very highly contingent -- you would be ever so unlikely to match that particular string of coins again on the gamut of the observable universe, over its lifespan. But, to specify the string of coins, we would have to basically list it out.

But, now, suppose I were to tell you that the string of coins specifies the first 250 digits of pi in binary coded decimal, ignoring the decimal point: 3141592653 . . . and on for 250 digits. That is, in 8421 BCD, with dashes to show the digits: 0011 - 0001 - 0100 - 0001 - 0101 - 1001 . . . Now, the string is not only unique, but also functionally specified, as just described, i.e. plug it into the area calculation for the surface of a sphere and it will give the right answer. That functionality can be simply and briefly described [and replicated through a series for pi, at will]. That is, we see here functionally specified, complex information. We can even specify a cluster of functional near-equivalents, e.g. will give pi to within .0001% or whatever is useful. BTW, such a specification will of course preserve a certain part of the pi-string very tightly indeed, and will allow the rest to vary as it wills. For the rest is much less important to the function. We could even extend this: we can allow hill-climbing to pi-250 if the first hit is close enough to count to a required precision. But, that would not help an arbitrary coin toss get near enough to count in the sea of all possible configs. And, if we rigged the coins so that the first toss will with high probability be within the target zone, that too will be because we have intelligently intervened to shift the distribution of the random variable sufficiently far away from "uniform" that we can now say we have fed in an increment of active information.

And that is what WD and Marks did in their recent work on NFLT and evolutionary computing -- quantified how much information that is functional has been fed into Dawkins' "Methinks" and Avida and Ev. It turns out that if you are able to do significantly better than random selection across the whole config space, for a sufficiently rich space to be relevant to say OOL or OOBPLBD, you have committed an act of intelligent design. That is exactly the sort of thing that TBO pointed out in TMLO -- the first technical level ID work -- twenty-five years ago when they came up with a metric for investigator interference with the chemistry in pre-biotic scenarios; and again the point is that if you are above the threshold of success, you are outside the credible framework of what unaided blind nature in plausible pre-biotic scenarios will do. Somebody is trying to tell us something, if we are only listening . . . GEM of TKI kairosfocus
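To make the 8421 BCD illustration above concrete, here is a small Python sketch (not part of the original comment) that packs the leading decimal digits of Pi into a BCD bit string and tests a candidate coin-flip string against that target; the digit count and the tolerance parameter are illustrative assumptions.

# Minimal sketch: encode leading digits of Pi in 8421 BCD (4 bits per
# digit) and check whether a candidate bit string hits the target.
PI_DIGITS = "3141592653"  # first ten digits, for brevity

def to_bcd(digits):
    return "".join(format(int(d), "04b") for d in digits)

target = to_bcd(PI_DIGITS)
print(" - ".join(target[i:i + 4] for i in range(0, len(target), 4)))
# 0011 - 0001 - 0100 - 0001 - 0101 - 1001 - ...

def is_functional(candidate, tolerance_bits=0):
    # "Functional" here means matching the BCD target within a small
    # number of bit differences (the near-equivalent cluster idea).
    if len(candidate) != len(target):
        return False
    mismatches = sum(a != b for a, b in zip(candidate, target))
    return mismatches <= tolerance_bits

print(is_functional(target))             # True
print(is_functional("0" * len(target)))  # False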
January 9, 2008 at 10:19 AM PDT
Hey. Get over it. It's a fact. Computers simulating evolution create information just like iPods create music. Gloppy Galapagos Finch
January 9, 2008 at 08:40 AM PDT
I watched this conversation unfold and it seemed to me that there was a disconnect since Semiotic seemed focused on the problems of software engineering and everyone else on biological reality. Now Semiotic made an "interesting" claim that no one jumped on (unfortunately, it appears it was deleted since it was part of an offending comment). He briefly mentioned how the (presumably) software-based programs that generate information would exceed the UPB (duh), then he claimed that an algorithm furthered by natural processes should be expected to perform better (or something to that effect). No justification was given, but I found that assertion to be more interesting than anything else being discussed. Patrick
January 9, 2008 at 07:55 AM PDT
Dave: Thanks for the attention. I appreciate your removal of the unnecessary reference to me by personal name. A real pity that Semiotic had to resort to personalities and attempted outing; there could have been a useful discussion. I wish he could have simply apologised and allowed the discussion to move on from there, with a due balance between the issues of mathematical niceties and the real-world considerations of modelling and validation -- thence, of what we may call: useful reliability. GEM of TKI kairosfocus
January 9, 2008 at 04:33 AM PDT
semiotic007 is no longer a member and the offending comments were removed. DaveScot
January 9, 2008 at 04:17 AM PDT
Semiotic 007: On page 9, Haggstrom writes: "The basic NFL theorem involves an average over all possible functions f." If it is an average, then we should write something like Sigma, i=1 to N, of f(sub i)/N; but this, then, implies that we should use f(sub i) rather than a simple f, it would seem. If you're going to call all your functions f, there should be a way of distinguishing one f from another, right? That said, however, since the cardinality of the set of such functions f can be so huge, I suppose you simply drop the (sub i) since you can't iterate, practically, over that large a number of elements. So, it seems, the lack of a (sub i) is an indicator of the futility of searching for such an f(sub i), and a harbinger of the NFL. Nonetheless, it takes a little getting used to. PaV
January 9, 2008 at 02:58 AM PDT
MODERATORS: Official complaint against the anonymous poster at UD known as Semiotic 007.

1] As you may have observed, yesterday in the 30th Dec 07 thread on NFL theorems, I complained to the above identified commenter at UD that he had improperly published my real, full name [I have used my initials previously, in defence of myself from spam and harassment].

2] Note, like other UD commenters have, he could easily have used the responsible approach and simply emailed me at contact emails maintained at my reference web site. He did not, and has not. [NB: I have found that using initials publicly and keeping my direct contact in a separate reference site is effective in keeping spam within reasonable levels, at least currently. Hopefully, the spambots will not get significantly more effective, at least for now.]

3] Now, too, since it is well-known that anonymity is often used by those open to consider ID or ID proponents in defence of themselves from being Sternberged or Gonzalezed respectively, this "outing" attempt must be considered a serious offence in intent if not effect.

4] Further to this, overnight I find a follow up post addressed to myself by handle, in which Semiotic 007 is demanding a signed statement from me.

5] Nowhere is there the faintest trace of regrets or apology for action which is plainly improper, lending further support to the conclusion that it is intentional and calculated to do actual harm, in wanton and willful disregard for duties of reasonable care. Namely, it is a TORT.

6] Worse, he now "demands" that I submit to him -- from the manner of behaviour, this sex seems likely -- a notarised signature that he declares intent to post on the Internet, i.e. an open invitation to identity theft. (And that in a context where I had already expressed concerns about Internet security.) [So is the nonsense of posting a bet of US$25,000 on a minor matter, which I have repeatedly stated I hold to be irrelevant and on which I have also stated, with circumstantial details, that I have a serious objection to anything that even smells of gambling.]

7] MODERATORS: I therefore ask that you bear this in mind in dealing with Semiotic 007, and request that you take appropriate action in defence of the privacy of your commenters. For those of us who take Matt 5 - 7 seriously, let us pray for this man that he will repent and seek the blessed transformation of life that flows from that. In the meanwhile, in the interests of Justice on the principles of Rom 13:1 - 10, the Moderators at UD, as those in a position of governance here, have a duty to protect us from harm stemming from improper, irresponsible or ill-willed behaviour.

8] On the substantive matter, onlookers can see for themselves that I hold the debates over the ideal-world mathematical nuances to be largely irrelevant to the real world of model reliability and validity. I do so as one experienced in real world electronics and related systems [similar to what Kairos raised], as well as in the even more messy world of management models and applications:
1 --> For, as noted in the long since public presentation accessible through my reference site here, ALL models [save prototypes and the like] are false, strictly. (Observe how S was easily able to identify my name but did not take time to look at what I have to say seriously on the matter of the validity of models.)

2 --> But, we may inspect the subtleties that lurk in the logic of implication . . .

3 --> Namely, P => Q asserts only the truth of the IMPLICATION, a certain logical connexion in which we have that NOT-[P and NOT-Q], so that IF P holds, then Q holds, and P cannot hold unless Q holds.

4 --> But equally, as my all-time favourite Math prof, the "famous" Harald Niederreiter of Austria, was so fond of teaching: Ex falso quodlibet. From what is false, we may freely infer to what is true [or in some cases false!]

5 --> Thus we come to what Kairos underscored about model validation, and what PaV used effectively in the above. Namely: the strictly false may be the reliably useful [as a model], once there is proper empirical testing and validation. For instance, electronic amplifiers are commonly modelled as clusters of passive components [R -- including of course radiation resistance, C, L, M] and ideal generators [voltage or current sources], with signal grounds that take advantage of the high frequency shorting out effect of capacitors, and such models [when suitably sophisticated] hold up to the very limits of circuit theory where one has to introduce wave theory stuff.

6 --> Indeed, by applying transmission line lumped parameter approximations, even then circuit theory insights can be extended into the zone of wave effects where components and traces are at least ~ 0.1 wavelength long, noting that typically in such media we are dealing with EM wave speeds of about 0.6 c. And of course in the near vicinity of many antennas, 90 - 95% of c is a relevant factor in adjusting antenna element length for best effect. Such experience-derived, judgemental rules of thumb of course allow us to extend the reliability of models further, and are part of the tricks of the trade of practitioners that you pay for when you directly or indirectly hire their expertise. And, similar points extend for process control or servo systems too, etc., even for computer architecture.

7 --> Indeed, post Quantum and post relativity, that is what Newtonian dynamics is. Extending still further, models, theories and explanations by inference to best explanation, in general, are all of the same basic character. That is, strictly, a trusted scientific theory is RELIABLE, not proved true.

8 --> Indeed, ever since Godel the same is known to extend to Mathematics, for there is no guarantee that sufficiently rich mathematical domains are free of contradictions. And if they are free of contradictions, there are true claims that are unreachable relative to their axioms. Even scientists and mathematicians must live by faith, in short.
So, ONLOOKERS: In the real world context, I look at WD's work as pointing to the relevant case of interest: biofunctionality of perturbation-sensitive systems with digital storage capacity orders of magnitude beyond the "stretched" UPB of 500 - 1,000 bits. Note how I use a bit of judgement here, to take in cases where there are clusters of biofunctionality as PaV pointed out: a lot of clusters will get taken in within 10^300 cells in a config space. I then apply thermodynamics-based thinking, as I document in the above and in the always linked, appendix 1. I see in that light that the picture PaV has painted is very apt, and that is why I used it above. Namely -- and disputes over minutiae on the NFLT are irrelevant to this -- RV + NS based searches that start from arbitrary initial cells in the config space, as likely to be obtained in reasonable or even very generous prebiotic chemistry and physics driven circumstances, are maximally unlikely to succeed in accessing even the biofunctional macromolecules sufficiently to get to any local hill-climbing that one may conceive of. Worse, we know that biomolecules are operative in precisely organised clusters, under algorithmic control. As the microjet assembly thought expt identifies, that is three further stages of serious expansion of the relevant config space.

But by then the matter is in effect academic -- we long since know that the only alternative to agency is a quasi-infinite cosmos as a whole with some sort of quantum bubble foam in which there are thousands or millions or far more orders of magnitude of sub universes with physics and prebiotic soups scattered at due random. And BTW, as Robin Collins asks ever so astutely, where did the universe-making machine come from to do all of this convenient stirring of parameters and soups with highly convenient ingredients? All, duly unobserved. So, on inference to best, empirically anchored, explanation across comparative difficulties on factual adequacy, coherence and explanatory elegance, I have long since seen that the evidence points to an Intelligent Agent with MORAL certainty.

So, when I see WD and Marks arguing that the issue is that in praxis we see that active interference is inserted into such hypothetical searches to reach functionality that rises significantly above zero, that is obvious. Indeed, that raises a direct echo of the empirical findings noted by TBO in TMLO 25 years or so ago, namely that absent undue experimenter intervention, experiments on OOL go nowhere significant; that just shows me a side-light from Math on why, and from thermodynamics too. When they then use the NFLT type model and extend it to provide a metric on the active information supplied, that seems very reasonable indeed. Then, when I see this quantitative analysis easily take apart Dawkins' Weasel, and the more sophisticated Avida and Ev, I see why there is an intent to now use a red herring leading out to an oil soaked strawman and ignite it to distract attention, cloud the atmosphere and poison it. Semiotic 007 now extends this to me, by seeking to "out" me and evidently to thus expose me to being Gonzalezed and/or at least subjected to spamming and possible identity theft were I so foolish as to provide him with a notarised signature. Finally: Semiotic, I was not born yesterday and you show yourself utterly unworthy of respect or trust.
(That does not remove the duty to pray for you under Matt 5 - 7, and so: "may God grant you the grace of penitence and reform.") GEM of TKI kairosfocus
January 9, 2008 at 01:26 AM PDT
PaV (230):
Finally, since we're talking about the Kronecker delta function, why, if there's a more apt way to do it, do you show, as does Haggstrom, the function f: V -> S when the real interest is in the entire set of such functions? Shouldn't the proper notation have the subscript i, e.g., underneath the f?
Could you point me somewhere in particular in the paper? I'm not following you. The NFL literature refers to "needle in a haystack" (NIAH) functions instead of Kronecker delta functions. The needle is a good value, and the hay is a range of bad values (often a single value). Sometimes there are multiple needles, but never many. The sense of "good" and "bad" depends upon whether the objective is minimization or maximization. The good and bad values are not necessarily 0 and 1. Perhaps it makes sense now when I say that analysis of optimization calls for something a bit more general than the Kronecker delta. Semiotic 007
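For readers unfamiliar with the term, a minimal Python sketch (my own, with invented values, not drawn from the comment above) of a needle-in-a-haystack objective over bit strings, as distinct from a literal Kronecker delta:

# Minimal sketch of a "needle in a haystack" (NIAH) objective: one
# designated point gets a good value, everything else a single bad
# value.  The needle and the values 1.0/0.0 are arbitrary choices.
NEEDLE = (1, 0, 1, 1, 0, 1, 0, 0)

def niah(x, good=1.0, bad=0.0):
    return good if tuple(x) == NEEDLE else bad

print(niah(NEEDLE))     # 1.0
print(niah((0,) * 8))   # 0.0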
January 8, 2008 at 06:18 PM PDT
PaV, First, let me apologize for allowing my annoyance to spill over to you -- annoyance with someone charging into Gish gallops and hurling bricks at the research area near and dear to me, knowing that my responses are "awaiting moderation."
Why haven’t you made the argument that Dembski was wrong, pointed it out in a clear way, and then allowed him to respond?
I don't think the NFL stuff is central to ID. In a forthcoming publication on ID, I focus on the arguments from irreducible complexity and specified complexity. I have little to say about Dembski's No Free Lunch, which I believe is written in jello. In all honesty, I decided that to treat it as representative of Dembski's thought, when I had read his later writings, would have been to set up a straw man. In other words, people here are defending ideas that I declined, out of fairness, to pin on Dembski. It's interesting that you bring up displacement, because "Searching Large Spaces" was the first work of Dembski's that I respected -- which is not to say that I agreed with his analysis. Wolpert and Macready (1997) indeed seem not to have thought carefully about how a practitioner would match an algorithm to a "problem." I think Dembski's observation that there was an implicit search for an effective search algorithm was acute. It is entirely appropriate to ask how one gains information about which algorithm to apply. My objections to Dembski's analysis are complex, and I will not go into them here. In any case, the "Practical Free Lunch" theorem in 232 effectively says that the space of search algorithms is much, much smaller in practice than in theory, and this means that Dembski's displacement analysis is in terms of a model that does not fit physical reality. Accounting for the information practitioners use to select effective search algorithms is no less interesting a problem, however. Semiotic 007
January 8, 2008 at 05:52 PM PDT
Semiotic 007, I think we're talking 'apples and oranges' here. I was talking explicitly about, or, I should say, I had in mind, Haggstrom's argument when I made the statement you quote. I'm in no position to agree, or disagree, with your assessment of Dembski's NFL argument. But my impression is that you might be overstating things when you say that random searches can easily reach optimization, for this is the gist of the dilemma: Dembski says that in using some kind of fitness function we're dealing with a "displacement problem". Information is needed, and it is being provided, essentially, by some other kind of 'search', one larger than the first. But, since intelligent agents are involved in the search for a fitness function, a certain amount of improbability can be overcome, and is. Is that what you're referring to? I don't know. But, I think that the statement you made concerning Dembski should be applied to you. Why haven't you made the argument that Dembski was wrong, pointed it out in a clear way, and then allowed him to respond? As to the Dirac function and Kronecker delta function, I didn't get 800 on the GRE, but it's rather obvious that if you're doing calculus you use the one, and when you're doing linear algebra, you use the other. My use of the Dirac delta function had application since I was dealing with 'fitness landscapes', and not with the 'fitness functions' AI is fond of. We, here at UD, often 'see' fitness functions; and my point was that they're really a fiction when it comes to protein configuration space. Finally, since we're talking about the Kronecker delta function, why, if there's a more apt way to do it, do you show, as does Haggstrom, the function f: V -> S when the real interest is in the entire set of such functions? Shouldn't the proper notation have the subscript i, e.g., underneath the f? It seems to me that's what Einstein notation would indicate. PaV
January 8, 2008 at 07:16 AM PDT
Kairos: Thanks. And thanks for the Mt 5 reminder to pray for those who act like that. May God help him/her. GEM of TKI PS: Semiotic, maybe it is a bit forward of me to suggest, but forgive me if I say perhaps you would benefit from reading this session of my intro to phil course. kairosfocus
January 8, 2008 at 06:01 AM PDT
#220 Kairosfocus: Excellent! Indeed, it is the right time . . . if we are paying attention. And for ID certainly the time will be more and more favourable. #224 Kairosfocus: Kairos, 223: "distinction between precise (theoretically true) and approximate (valid in the real world) is quite common in engineering. For a good example you can look at the way the density functions for electrons and holes n(x) and p(x) are treated within the depletion region of a pn junction." Precisely! I don't know who semiotic 007 is, but I suppose: a. He/She is a Maths Prof at some University working on abstract Maths for optimization problems (about which I have some idea). This could explain his/her reaction to arguments about approximation for problems in the real world. b. Independently of his/her competence in a specific field, what he/she said about you doesn't show a great personality. I am sorry for him/her. kairos
January 8, 2008 at 12:53 AM PDT
Atom: I see, indeed, on Mrs Atom! My Little Kairosfocus [so he has called himself on seeing my online activities!] stayed home yesterday from School, fighting the flu; but by the afternoon was having fun with a remote-control car and lenses, eager to figure them out. [You should have seen his astonishment on observing a Fresnel lens and its flat, corrugated appearance -- how does this one work, it is not like the other ones? Anybody got a good simple 9 yo level explanation on that one?] [He has been fascinated with how pinholes and lenses form real images and has been contrasting the brightness and wondering why. Of course, all of this is relevant: so much for the slanderous notion that believing in God is a "Science-stopper" -- science is rooted in our in-built intense desire to understand the mysteries of our world, and to use the results to do something interesting or advantageous. Guess who put that there, and put us in a world set up for exploration . . .? For further reference, kindly read the General Scholium to Newton's Principia, e.g. the excerpt here. Of course, Newton's first major investigation was on "Opticks," and inter alia led him to invent what we know as the Newtonian reflector telescope, as he despaired of solving the aberrations and dispersion of light problems that characterised refracting telescopes. BTW, ever noticed how N. is hardly ever brought up as an exemplar of science these days? And yet, he is indisputably the greatest of all scientists, ever.]

Now, on more direct points: 1] "agreed that if we begin [to search for islands of functionality in a config space for the genome, considered as a physical-chemical system to emerge from a "plausible" pre-biotic soup -- cf PaV at 175 and again at 189 etc, as well as my always linked] at a random point we have no hope of finding the first cluster of functionality." Not just that we start from an arbitrary initial point, but that we are using algorithms that are based on RV + NS to get to [1] OOL [genome ~ 300 - 500 k], thence, [2] BPLBD [genome ~ 100 mn], and onward, [3] organisms with reasonably reliable mental functions and using genomes of order 3*10^9. All, to happen in a cosmos of scope ~ 10^80 atoms and say 15 BY. In short, Evo Mat advocates -- in our day; back in C19, they thought life was about as sophisticated as a bowl of jello-like "protoplasm" so simple RV + NS mechanisms seemed plausible -- need an algorithm based on chance + necessity only, that is dynamically and probabilistically capable of doing that in that sort of scope. To date, after 150 years of trying and delivering various promissory notes and just-so stories in the name of "science" -- I watched the astonishingly hollow and weak performance of Dawkins with Lennox on the weekend . . . -- the Evo Mat school of thought has failed to deliver. And, once one sees the config space scale and search issues, one easily sees why, on grounds long since forming the base for the highly successful discipline in science commonly known as statistical thermodynamics! Contrast the empirically known, commonly observed, ability of intelligent agents to use knowledge of the possibilities of configurations and the underlying discoverable framework of lawlike natural regularities, to construct systems exhibiting organised complexity reflecting FSCI as a signature of their work. That is, we DO know a dynamically and probabilistically competent source for the genome.
Just, it does not sit well with the worldview preferences and agendas of the school that happens to dominate in science institutions in our time. So, that school keeps on issuing just-so stories and promissory notes, which on the implications of stat thermo-D [and the real-world applications of NFL] keep falling flat. So it is time to collect on the IOUs, and declare intellectual bankruptcy.

2] SEMIOTIC, re 213: first, a Complaint. Semiotic, there is a reason why, after having had to deal with plagues of spam and harassment, I reserve my name from general discussions on blogs. (I have given adequate information for those who legitimately need to contact me, as say several commentators at UD have.) I ask that you kindly respect this, and address issues on the merits instead of indulging in puerile personalities. And kindly note that "fools for arguments use wagers." Not to mention that I have a moral objection to gambling in any form, one that I have made a public record of -- through co-hosting a live, call in programme here in my land of residence when casino gambling was put on the agenda by powerful forces in the community as a "solution" to our post-volcano economic woes [by some of the same ones who ignored the warnings when something could have been done in advance to reduce our vulnerability . . .] -- and have paid a price for so doing. If you refuse to refrain from personalities, I will make my complaint to the authorities here at UD loud and clear. Understood? Now, on substance . . .

3] "Let X = {0, 1}^64. Also let F be the set of all functions from X^5 to X. . . . . If algorithms to search functions in F are written as binary Turing machine descriptions (strings over {0, 1}) for a fixed universal Turing machine U, and the S-th cell of the tape from which U reads descriptions is set to 2 immediately prior to the operation of U, then there is no probability distribution on F for which all algorithms have identically distributed sequences of observed values when presented to U. . . . . I made the preceding proposition somewhat informal. I claim that a similar statement (to be included in a written agreement) will be proven a theorem in a journal in mathematics, science, or engineering within the coming three years, and that its negation will not." Has it ever dawned on you that chance + necessity acting by themselves on a space-time, matter-energy only cosmos are on excellent basic thermodynamics grounds dynamically and/or probabilistically incompetent on the gamut of our observed cosmos to synthesise a code system and algorithms such as you have just characterised -- e.g. it is well beyond the UPB limit of 500 - 1,000 bits of information-storage capacity, once we begin to unpack what phrases like "Turing Machines" [i.e. general purpose computing device] mean, etc? So, here is my counter-proposal, Semiotic: Kindly simply provide a plain explanation relative to evo mat premises that shows that the contentions I have made in my always linked, App 1, and/or that PaV has pointed out in 175 as I have excerpted, are not cogent to the issues of [a] real-world OOL, [b] origin of BPLBD, and [c] origin of an embodied organism capable of reliable reasoning that is not plagued by the dilemmas implicit in say Crick's infamous statement:
The Astonishing Hypothesis is that "You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Free Will is, in many ways, a somewhat old-fashioned subject. Most people take it for granted, since they feel that usually they are free to act as they please. While lawyers and theologians may have to confront it, philosophers, by and large, have ceased to take much interest in the topic. And it is almost never referred to by psychologists and neuroscientists. A few physicists and other scientists who worry about quantum indeterminacy sometimes wonder whether the uncertainty principle lies at the bottom of Free Will. ... Free Will is located in or near the anterior cingulate sulcus. ... Other areas in the front of the brain may also be involved. What is needed is more experiments on animals, ... [The Astonishing Hypothesis: The Scientific Search for the Soul, Charles Scribner's Sons, New York, NY, 1993, pp. 3, 265, 268.]
As to wagers, no money need be on the table, just demonstrated ability to think clearly and address issues cogently on the merits across comparative difficulties; instead of on personalities rooted in red herrings leading out to oil-soaked strawmen burned to cloud and poison the atmosphere with noxious smoke. Or, has it ever dawned on you that there is a reason why I remain utterly unimpressed with the debates on NFL etc that you have put up? And, why I have therefore quite deliberately chosen to respond at the basic 101 level that a reference in succession to the paragraphs of the Wiki article on NFL will permit? [The showing off of mathematical virtuosity by using techniques and concepts that good old Prof Harald Niederreiter -- in his favourite orange and green Dashiki and duly brown sandals, toting that old brown leather briefcase with the alarm clock set on the desk promptly at 5 minutes past the hour -- taught us on UWI Mona Campus back in M100 in the 1970's, is not the only relevant consideration, in short!] Speaking of Wiki . . . 4] NFL article: "A search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution." As I noted in 218, point 5, this is key.
a --> For, therein lieth the issue of probabilistic resource exhaustion and the resort to a quasi-infinite quantum foam of sub-cosmi to try to evade its force through a materialistic form of the anthropic principle. b --> In short, the point has actually long since been conceded, and in the peer-reviewed lit too: c --> Namely, there is not a good reason to believe that on the gamut of the observed cosmos, RV + NS and extensions thereof under evo mat models of origins, can credibly locate bio-functional forms and to cluster them into living cells and organisms of widely divergent body plans [the functionality targets forming the relevant set of objective functions to be searched out by search algorithms resting on RV + NS], in the relevant configuration space defined by the organic chemistry and associated thermodynamics of plausible pre-biotic environments. d --> But, such an extension of the scope of search to incorporate an unobserved [and probably inherently unobservable] quasi-infinite scale is a resort to speculative, empirically un-anchored metaphysics, not science. e --> Thus, we are now not in the province of scientific methods, but most strictly in that of philosophy, and so the comparative difficulties approach across live option worldviews is the relevant one.
Further to this, we then have no excuse to use words like "science" to censor out due consideration of ALL live option alternatives and issues in the relevant phil, however broadly we may define these. (That is, for instance, we should broadly/generically identify and examine materialistic, theistic, and pantheistic views as the main options, in education and in popular or semi-popular discussion of such phil topics and issues. A modicum of basic phil would then allow us to discuss with basic understanding what our options of thought are, and what challenges they each face -- all worldviews bristle with difficulties. So, we can then make informed and balanced, not manipulated choices.) 5] Kairos, 223: "distinction between precise (theoretically true) and approximate (valid in the real world) is quite common in engineering. For a good example you can look at the way the density functions for electrons and holes n(x) and p(x) are treated within the depletion region of a pn junction." Precisely! GEM of TKI kairosfocus
January 7, 2008 at 11:49 PM PDT
kairos, You are constructing P from given p, a proper distribution. The sum over all f of p(f) is 1. Thus it makes no sense to normalize first. Semiotic 007
January 7, 2008 at 07:18 PM PDT
#216 Semiotic: "To say that there is NFL for P is to say that P(f) = P(f o j) for all functions f and for all permutations j of the domain of functions. Suppose that p(f o j) is below threshold and that p(f) is above. If you set P(f o j) = 0 and P(f) = p(f), then P(f) - P(f o j) > p(f) - p(f o j)." But this doesn't occur in the second (real world) operation. It's only here that 0 is assigned to all the p's. "But doesn't an improper distribution bother you? The sum of P(f) over all f is less than 1 unless you normalize in some fashion." This is not the case if the normalization so that sum p = 1 is performed beforehand. This kind of distinction between precise (theoretically true) and approximate (valid in the real world) is quite common in engineering. For a good example you can look at the way the density functions for electrons and holes n(x) and p(x) are treated within the depletion region of a pn junction. kairos
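A small numeric Python sketch (mine, with arbitrary masses and threshold, not from the comment above) of the truncate-and-renormalise step under discussion, with the renormalisation done after the sub-threshold masses are zeroed out:

# Minimal sketch: zero out sub-threshold probabilities, then rescale
# so the remaining masses sum to 1.  Numbers are arbitrary.
p = {"f1": 0.5, "f2": 0.3, "f3": 0.15, "f4": 0.05}
THRESHOLD = 0.1

P = {f: (mass if mass >= THRESHOLD else 0.0) for f, mass in p.items()}
total = sum(P.values())
P = {f: mass / total for f, mass in P.items()}  # renormalise

print(P)  # f4 drops to 0; the others are rescaled to sum to 1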
January 7, 2008 at 02:07 PM PDT
Atom (222):
Yes, agreed that if we begin at a random point we have no hope of finding the first cluster of functionality, thus PaV and your points are indeed very valid concerns.
If you say that fit genotypes in some sense cluster in the genomic spaces (no one has specified a topology) of various species, then you are saying that the fitness functions were almost certainly not drawn uniformly. Almost all functions from genotypes to fitness values are disorderly in the extreme. Semiotic 007
January 7, 2008 at 01:47 PM PDT
KF: Yes, agreed that if we begin at a random point we have no hope of finding the first cluster of functionality, thus PaV's and your points are indeed very valid concerns. As for Mrs. Atom, she is doing quite well, as gorgeous as ever. I hope your wife feels better soon! Atom
January 7, 2008 at 10:08 AM PDT
PaV (208):
This isn’t an argument that Dembski couldn’t make himself, it’s simply an argument he wouldn’t bother taking the time to make, unless for some reason he needed to.
Come on, now. Everyone makes mistakes. Dr. Dembski did not survey the NFL literature when working on No Free Lunch. If he had, he would not have emphasized that a search algorithm should be expected to perform poorly unless matched to the instance (function). That random search performs well under the uniform distribution was established in 1996, the year before Wolpert and Macready's first NFL article appeared. A fair number of researchers have known since 2000 that optimization is easy in the typical (algorithmically random) function. The 1996 argument is based on high-school math. The 2000 argument is based on advanced math that Dr. Dembski knows well. Dr. Dembski was capable of making the arguments himself. But the arguments are most definitely not ones "he wouldn't bother taking the time to make, unless for some reason he needed to." No one's asking you to eat crow. Marks and Dembski have put new stuff out there. It's interesting, and not at all gelatinous. Semiotic 007
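A toy experiment (my own construction, not drawn from either paper) gives a feel for the 1996-style observation: for an objective drawn at random over a small domain, blind sampling reaches a top-ranked value quickly on average. The domain size and the 1% cutoff below are arbitrary assumptions.

import random

# Toy experiment: how many blind random evaluations does it take to
# find a top-1% value of a randomly drawn objective?
DOMAIN_SIZE = 10_000

def trial():
    f = [random.random() for _ in range(DOMAIN_SIZE)]  # random objective
    cutoff = sorted(f)[int(0.99 * DOMAIN_SIZE)]        # top-1% threshold
    evaluations = 0
    while True:
        evaluations += 1
        if f[random.randrange(DOMAIN_SIZE)] >= cutoff:
            return evaluations

print(sum(trial() for _ in range(200)) / 200)  # around 100 on average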
January 7, 2008 at 01:06 AM PDT
Kairos: Excellent! Indeed, it is the right time . . . if we are paying attention. Atom: Good summary, as per usual. (BTW, how is the ever-lovely Mrs Atom? My LKF is fighting a dose of the usual "London" flu due to the annual wave of UK based visitors for the Christmas festival here.)

I add only that we should note that on the gamut of the observed cosmos, blind search based on random walks starting at arbitrary initial points in genomic config space will hopelessly fail on average and all but absolutely. [That is, we are looking at a soft, probabilistic resources exhaustion impossibility -- as is typical of stat thermodynamics, e.g. the stat thermo-d form of the 2nd Law of Thermodynamics, not a logical-physical hard impossibility.] More broadly, the only "searches" that are empirically known to succeed in finding functionally specified, hugely isolated domains within such large config spaces [cf PaV's excellent image] are intelligently directed ones based on domain knowledge. Thus, we have no good reason to imagine that searches constructed on the RV + NS architecture will work on the gamut of our observed cosmos. And, of course, in principle every instantiation of a chance-based molecular chaining in the plausible prebiotic soups is an instantiation of a search algorithm in the family. Are all such equiprobable? Ans: no; in fact one major point of the classic TMLO study in the earlier chapters is that it is not easy to get to a plausible prebiotic soup at all, and that in such a soup the preferential reactions lead AWAY from chaining of life-relevant macromolecules. The resulting chemical equilibria on the relevant macromolecules for getting to life as we know it are such that it is simply utterly unlikely for such molecules to form individually on the scale of a planet full of prebiotic soup of extremely generous concentrations in the relevant precursors. Much less, in clusters that just happen to be so spatially fitted together that life functionality can emerge by chance acting on the known laws of physics and chemistry. And of course the speculative quasi-infinite foam of subcosmi is ad hoc metaphysics to try to rhetorically blunt the force of the empirically anchored evidence, not serious science.

So, we see a conundrum on OOL for evo mat thought. On BPLBD, we see that the various RV mechanisms boil down to needing to generate even more FSCI -- e.g. 100 mn + bases to get to a plausible first arthropod as Meyer pointed out in that PBSW paper, and within the gamut of the earth, not the cosmos as a whole. At least on NDT -- and panspermia on major animal and plant groups would be an even more interesting admission that intelligent agency was involved in origin of species than anything we have seen to date. That is, the way in which in the real world the relevant claimed RV + NS search algors credibly operate is to be inferior to the average if anything, i.e. even more hopeless than raw random search! (Note my stress on getting to grips with how abstract theorems and concepts anchor down to the real world!) So, we are right back to the core points made long since by WD, and even TBO. GEM of TKI kairosfocus
January 7, 2008 at 12:21 AM PDT
Atom (211):
So if biological search does indeed outperform random blind search on average over the various fitness landscapes then we can ask what are the chances we found this match of algorithm to search space structure by chance? The answer is quantifiable and this is the direction the “Active Information” framework approaches the question from.
Yes, Marks and Dembski have recently worked with information measured on instances, not distributions. (Incidentally, I interpreted their work as an attempt to go straight, and took some flak from colleagues.) Their approach has seemed reasonable to me, though I've never felt sure what to make of it. Now I can see that if NFL does not hold, then their analytic framework has a problem. The problem is that if, say, a (1+1)-EA is generally superior to random search for a uniform distribution on the set of all functions f in Y^X with sufficiently low Kolmogorov complexity for physical realization, then Marks, Dembski, and others may fall into misinterpretation of positive active information for the EA on a particular instance. The expectation of the EA's active information over all low-complexity functions would be positive as a consequence of the finitude of the observable universe. It would not be due to design. Marks and Dembski should hope the ideas shaken loose by this discussion don't help me complete a proof I've been struggling with since the summer. I wouldn't say offhand that the anticipated theorem (supported by extensive numerical experiments) would demolish their framework, but some repair would be necessary. Semiotic 007
January 6, 2008 at 11:39 PM PDT
Perlopp: Re 200: Kindly look at your choice of language again! I think you will see that I have pointed out that the decisive issue is not the strawman-burning debates over NFLT terms and conditions -- on which BTW, it seems IMHCO that WD does much better than his critics allow (as is tiresomely usual on ID-related matters) -- but the realities of the statistical thermodynamics principles anchored challenges as yet unanswered by the evo mat advocates on OOL and BPLBD. (and on cosmological fine-tuning too, cf always linked). In steps:
1 --> Kindly note for instance my cite from Harry Robertson above. 2 --> For, there, he aptly points out the informational significance of probabilistic distributions, and applies that to infer the link between the informational issues and the thermal/energetic ones. 3 --> In that context, issues of vast -- far beyond merely astronomical -- configuration spaces and isolated islands of functionality within them become decisive. 4 --> Which is what PaV pointed out, and which is what I abstracted as decisive.
Unless and until evo mat advocates can cogently address this and show empirically that FSCI can and does credibly arise from chance and necessity on the gamut of our cosmos and/or that there is good empirical reason to infer to a quasi-infinite multiverse, they are guilty of resort to empty, ad hoc metaphysical speculation to rhetorically prop up a factually seriously challenged worldview. One they pretend is "scientific." (I hardly need to reiterate that we know that FSCI is routinely produced by intelligent agents.) In that already stated and discussed and linked context, I think I have reason to be less than amused to see dismissive nonsense like claims that I do not understand basic mathematical [or general] logic, whether on NFLT or otherwise. As touching NFLT, you will see that I have discussed the Wiki summary, as it helps make my own point clear: there is a plain link to the statistical thermodynamics principles issues lurking in the background so soon as one addresses inference to design related issues. I guess a further excerpt from Wiki on "your" point may help you see where I am coming from:
The original no free lunch (NFL) theorems assume that all objective functions are equally likely to be input to search algorithms.[2] It has since been established that there is NFL if and only if every objective function is as likely as each of its permutations.[4][5] (Loosely speaking, a permutation is obtained by shuffling the values associated with candidates. Technically, a permutation of a function is its composition with a permutation of its domain.) NFL is physically possible, but in reality objective functions arise despite the impossibility of their permutations, and thus there is not NFL in the world.[11] The obvious interpretation of "not NFL" is "free lunch," but this is misleading. NFL is a matter of degree, not an all-or-nothing proposition. If the condition for NFL holds approximately, then all algorithms yield approximately the same results over all objective functions.[5] Note also that "not NFL" implies only that algorithms are inequivalent overall by some measure of performance. For a performance measure of interest, algorithms may remain equivalent, or nearly so.[5] The reason that almost all objective functions are physically impossible is that they are incompressible, and do not "fit" in the world.[8] Incompressibility equates to an extreme of irregularity and unpredictability. All levels of goodness are equally represented among candidate solutions, and good solutions are scattered all about the space of candidates. A search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution.[8]
Thus, we may take it in steps again:
5 --> That last statement is particularly illuminating of the force of my point: "A search algorithm will rarely evaluate more than a small fraction of the candidates before locating a very good solution." 6 --> But, is that relevant to this case, where we are looking at UPB and config-space anchored probabilistic resource exhaustion on the gamut of the whole observed cosmos? [And BTW, Dawkins' "foam" of "billions" of sub-cosmi in his recent debate with Lennox, used to rhetorically get around the probabilistic resource exhaustion issue, evades the fact that we would have to be looking at a quasi-infinite array -- and one without a shred of empirical evidence, i.e. it is a metaphysical ad hoc claim, not a scientific one!] 7 --> In slightly more detail: when we go beyond the UPB [more than 500 - 1,000 bits worth of information storage capacity to hold the relevant information in the systems of interest], and are on stat thermo-d principles dealing with credibly isolated islands of functionality within the resulting vast config spaces, we have no good reason to infer that on the gamut of the observed universe any RV + NS-architecture chance + necessity based search will have even the slenderest ghost of a chance of coming near just one functional solution, much less the cluster of functional ones required to account for either OOL or BPLBD!
That is what I discussed in detail through the always linked microjets and nanobots thought experiment here. In short, we come right back to WD's main point; namely, that in the relevant situations [including "Methinks," Avida and Ev] search algorithms under the general rubric of evolutionary computing become better than "average" because active information is added in the design process of the algorithm. And such active information comes from intelligent agents, i.e. the NFL in applied context is pointing to the importance of agency in solving problems of finding isolated islands of functionality in vast configuration spaces. (Which comes full circle to the remarks in the OP and in my comment no. 1 on grudging acknowledgement in the guise of claimed refutations; which we can augment with the point that sometimes the concession of the key issue is not acknowledged but directly implied or entailed on bringing to bear relevant factors.) I think you will therefore appreciate my bottom line: Cho man, do betta dan dat! GEM of TKI kairosfocus
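As a back-of-envelope check on the 500-bit figure (my own arithmetic, using the commonly cited rough bounds of 10^80 atoms, 10^45 state transitions per second, and 10^25 seconds, not anything computed in the comment above), a 500-bit configuration space already exceeds that count of events:

# Rough arithmetic for the probabilistic-resources point; the bounds
# used are the commonly cited rough figures, not exact values.
config_space = 2 ** 500
events_bound = 10 ** 80 * 10 ** 45 * 10 ** 25   # ~10^150 events

print(f"{config_space:.3e}")        # ~3.27e+150
print(f"{events_bound:.3e}")        # 1.000e+150
print(config_space > events_bound)  # True: configs outnumber events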
January 6, 2008 at 11:33 PM PDT
PaV, From 205:
PaV: “A uniform distribution is an equal probability distribution over an interval.” perlopp: That is one special case.
You responded by linking to the article on that special case. From 215:
The Wikipedia article describes a uniform distribution.
There's a reason you get a disambiguation page when you search for "uniform distribution." There are separate articles for the discrete and continuous cases, hinting that what perlopp says might be true. I find it very hard to believe you thought of the title "Uniform distribution (continuous)." Did you not reach the article by way of the disambiguation page?
If you want to talk about a discrete uniform distribution, then why not call it a discrete uniform distribution.
Because the intended audience of an article in an engineering journal does not need to be told both that the set of functions is finite and that a probability distribution on that set is discrete. And the intended audience certainly does not need to be reminded that a probability distribution function maps a set to real numbers.
I understand the Kronecker delta function quite well, thank you.
But the trick is in knowing when to apply it, and not the Dirac delta. I have worked with graduate students who scored 800 on the math GRE, back before the standardization changed and a perfect score was rare, and who could neither formulate nor evaluate novel mathematical arguments to save their lives. Semiotic 007
January 6, 2008 at 09:59 PM PDT
kairos (207): To say that there is NFL for P is to say that P(f) = P(f o j) for all functions f and for all permutations j of the domain of functions. Suppose that p(f o j) is below threshold and that p(f) is above. If you set P(f o j) = 0 and P(f) = p(f), then P(f) - P(f o j) > p(f) - p(f o j). Loosely speaking, you've moved further from the equality necessary for NFL, not closer. And this does not involve normalization. But doesn't an improper distribution bother you? The sum of P(f) over all f is less than 1 unless you normalize in some fashion. Semiotic 007
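For concreteness, a tiny Python sketch (mine, with a toy three-point domain, not from the comment above) of what composing a function with a permutation of its domain looks like; the NFL condition requires P(f) = P(f o j) for every such pair:

from itertools import permutations

# Enumerate f o j for all permutations j of a tiny domain X.
X = (0, 1, 2)                        # toy domain
f = {0: "bad", 1: "bad", 2: "good"}  # toy objective

for j in permutations(X):
    f_o_j = {x: f[j[x]] for x in X}  # (f o j)(x) = f(j(x))
    print(j, f_o_j)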
January 6, 2008 at 09:01 PM PDT
Semiotic 007, The Wikipedia article describes a uniform distribution. If you want to talk about a discrete uniform distribution, then why not call it a discrete uniform distribution. Either way, discrete or continuous, the idea is rather obvious, isn't it, equi-probability? I understand the Kronecker delta function quite well, thank you. I've studied some tensor calculus. PaV
January 6, 2008 at 08:59 PM PDT
Semiotic 007 (190): "I resisted getting into a debate of an off-topic point. But you insisted, and I responded by linking to an on-topic paper that stood to be of interest to everyone reading the thread..." The pointing out of the inappropriateness of teleology in computer modelling/simulation of Darwinian evolution is on topic. Criticism of your belief that competition (which is inherently teleological) is essential to Darwinian evolution is therefore entirely germane to the topic. Your mention of getting on topic occurred after other, true diversions (comparison of quotation of Darwin to Biblical exegesis, denigration of Darwin's beliefs regarding his own theory, (unnecessary) explication of the (obvious) practical limitations of CFD, etc.)... Semiotic 007 (190): "...Only ellipsis in your quote of my comment (#84, you might have mentioned) makes things seem otherwise. Here I've emphasized some text you omitted..." At (185), I did provide the comment number [84] for easy reference:
This directly contradicts your stated intention at (84).
j
January 6, 2008 at 05:52 PM PDT
