Uncommon Descent Serving The Intelligent Design Community

We cannot live by scepticism alone


Scientists have been too dogmatic about scientific truth and sociologists have fostered too much scepticism — social scientists must now elect to put science back at the core of society, says Harry Collins.


Comments
24 --> Relevant observation [kindly note, Rob, before shooting off again about citing Marks and Dembski as authorities blindly]: mathematically, the median generation number for partitioned search is 98. The 40+ and 60+ generations [as Dawkins published in New Scientist and The Blind Watchmaker respectively; as I have linked from the outset in this thread] are about right for selected "good" runs, both of which produce a significant number of right letters by generations 10 and 20, and in both of which a selected, sampled letter that is correct NEVER reverts.

25 --> As to JT's unworthy hint at lying and/or stupidity in 462 -- yet again for this circle, onlookers -- he evidently has not looked at what I wrote from 346 - 7 on, in which I showed how we can get explicitly guaranteed latching, and close to guaranteed implicit latching, in algorithms T2 and T3. [That is a failure of the basic duties of care before making adverse comment.]

26 --> But never let us forget, onlookers: in EVERY case, across dozens of sample points covering close to two dozen letters that come right fairly early, we NEVER see a single reversion, and the program runs a number of generations that fits very well with the mathematical expectation for partitioned, letter-by-letter search coupled to choosing "good" runs.
_________________

BOTTOM LINE: Dawkins' Weasel diverts attention from, and begs, the basic question posed to proposed mechanisms of chemical and biological evolution: that bio-function rests on FSCI of great complexity, starting at some 600 k bits' worth. Body-plan level biodiversity requires some tens to hundreds of millions of bits of additional information, dozens of times over to cover the phyla. By contrast, Weasel, whether in 1986 or currently [cf. 383 etc.], is about hill-climbing based on closeness of non-functional configurations to a target. Weasel is therefore a foresighted -- designed -- search with a warmer/colder oracle. That is before "latching" -- which is OBSERVED -- is even an issue [and onlookers, note how GLF quote-mined me on this starting with his first references in 336 above . . . cf. 404 and 407]; i.e. we are looking at an attempt to further distract attention from the material issue by a selectively hyperskeptical red herring. The distractive scent trail has then led out to a similarly hyperskeptical strawman on how latching can occur "naturally," never mind that the fact is that the context is a designed, targeted search algorithm that rewards non-functionality. This has then also been soaked with the immoral equivalency ad hominems that I am being deceptive or stupid or stubborn not to give in on the point, lit up to burn brightly and cloud and poison the atmosphere: turnabout accusation. Thus, the atmosphere for serious discussion towards truth has been thoroughly poisoned by selective hyperskepticism. This is the real danger to our civilisation, in a case in point. I am sure that you, dear reader, will by now be familiar with the ongoing pattern that is ripping apart our civilisation, and that in the face of mortal perils being studiously ignored.

[And, oh yes, I forget: GLF refuses to acknowledge the implications of Dawkins' direct statements and the printoffs of his program circa 1986, with his statement that US$ 100 k would go to a charity if what Dawkins directly implies could be shown from his mouth, so to speak. He demands that in effect I get a citation from Dawkins confessing in so many words. Well, Dawkins has EXPLICITLY confessed to a targeted search that does not require functionality, my primary point. Such a targeted, non-functionality-rewarding search algorithm already diverts from the point of Hoyle's challenge on getting to the shores of functionality, and his printoffs do EXPLICITLY show non-functional configs being rewarded. Also, in EVERY instance, once a letter is right, it never reverts. Plainly, circa 1986, Mr Dawkins did not realise the implications of that as a signature of how his algorithm(s) of that time worked beyond REASONABLE doubt. Appeals to subsequent algorithms and runs are of course further diversionary.]

So, we have a case study in hand. Are we willing to apply the Simon Greenleaf remedy: straight thinking guided by reasonable faith that recognises that in matters of fact we must deal with moral rather than mathematically demonstrative certainty?

GEM of TKI

PS: The onward insinuation that I cannot tell the difference between a scientific inference to design and prayer for God's grace in light of personal knowledge of God in the face of our Saviour is unworthy. The grounds on which I am a Christian are independent of those on which I support intelligent design as a scientific inference. In short, having MET God in the face of the risen Christ [and having reckoned with the implications of 500+ eyewitnesses who launched an unstoppable force some 2,000 years ago], even if I were to believe that Darwinian and related mechanisms account for OOL and biodiversity, I would consider these to be God's mechanisms, not any threat to my core relationship with God. I support the design inference because it is what makes sense of the FSCI in life forms, embedded in the observed informational, molecular-nanomachine-based computer in the heart of the cell. (BTW, viruses and recombinant DNA are proof enough of flexibility of programming, i.e. the DNA-ribosome-enzymes system meets the basic criteria for a computer up to and including flexible programming -- think of that next time you catch flu.)

PPS: Upright, good points on FSC, but I doubt that you will get a serious and sober response on the merits, any more than I have. Sadly.

kairosfocus
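As a cross-check on the median figure in point 24, here is a minimal simulation sketch. The 28-position target and 27-character alphabet come from the thread; the assumption that every unlatched letter is redrawn uniformly at random each generation, and the run count of 10,000, are illustrative choices, not a reconstruction of Dawkins' code:

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

int main() {
    const int positions = 28, alphabet = 27, runs = 10000;
    std::srand(unsigned(std::time(0)));
    std::vector<int> gens(runs);
    for (int r = 0; r < runs; ++r) {
        int wrong = positions, g = 0;
        while (wrong > 0) {
            ++g;
            int stillWrong = 0;
            for (int i = 0; i < wrong; ++i)           // redraw every unlatched letter
                if (std::rand() % alphabet != 0)      // a 1-in-27 chance of being correct
                    ++stillWrong;
            wrong = stillWrong;                       // correct letters stay latched
        }
        gens[r] = g;
    }
    std::sort(gens.begin(), gens.end());
    std::cout << "Median generations to latch all 28 letters: "
              << gens[runs / 2] << std::endl;         // typically ~98
    return 0;
}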
March 14, 2009, 02:46 AM PDT
9 --> And, therefore, GLF's "irrelevancies" are all too relevant indeed!

10 --> BTW, a basic question: in a pre-biotic soup or other similar environment, just what [apart from a designer] would have naturally rewarded closeness to life-functionality?

11 --> Just so, assuming that we have simple unicellular life forms, what -- apart from a designer -- would have rewarded non-functional innovations towards body plans, on closeness to target?

12 --> In Dawkins' toy example, do we not see a designer rewarding closeness to a target independent of actual functionality?

13 --> Is not Weasel, then, a demonstration of the power of intelligent design? [And are not Genetic Algorithms therefore similarly items from the intelligent designer's toolbox?]

14 --> And are we not in a position to conclude these things long before we come to the interesting but peripheral issue of whether or not Weasel circa 1986 used partitioned search with explicit letter latching? [Cf. my actual quote-mined remarks, in 404 and 407 above.]

15 --> As to JT's "new business" complaints about requiring extra code, the Dawkins-acknowledged fact of comparison with the target already has in it all that is needed to prepare a protective mask-off on the winner in each generation. (On denial and believability, I will only say that in light of Mr Dawkins' longstanding public track record, culminating in his latest sophomoric remarks in The God Delusion, his credibility on soundness, fair-mindedness or responsiveness to the truth and to correction where found in error is not very high. For sadly excellent reason.)

16 --> Back on point, in effect, to code: simply partition the search letter-wise, and make a mask that, if in state 1, permits further variation and, if in state 0, locks; 0 = on target for this letter, 1 = not on target. And, of course, the sum across the 28 positions is the simplest distance metric: up to 28 ones, down to 0.

17 --> A totally off "nonsense phrase" will have distance metric 28. One that has one correct letter will be 27, and so on down to 0. In each generation, you compare the 50 or 100 or 500 or so members of the population created by varying the previous "winner", and select as the new champion the one with the lowest metric.

18 --> Furthermore, observe again the CITED and LINKED cases published by Mr Dawkins in 1986: these evidence, beyond reasonable doubt, that Dawkins' program has the effect of latching successful letters even without wider functionality.

19 --> The above is the obvious way to do that, and requires no great additional coding effort.

20 --> Had there been a case in the published 1986 runs where we saw early reversion, that would have been different. But, on the actual evidence from 1986, the best -- simplest -- explanation plainly is latching, not pseudo-latching or quasi-latching. [The run of a different, and plainly far more sophisticated, algorithm in 383 that increases the number of letters, rewards non-functionality with promotion [AGAIN!], and does not latch is -- effort to code notwithstanding -- what is truly irrelevant.]

21 --> As to random variation, there is a [pseudo-]random number function in many implementations of BASIC, and to select a letter at random from 1 to 28, compare with the [saved] mask value for the champion and then, if 1, allow variation requires no great amount of "additional" coding. My T3 version or the equivalent then allows a looping that selects 0, 1, 2, 3, . . . 28 letters to vary at random.

(JT, if he is ignorant, is commenting out of his depth. If he knows better, he is setting up and knocking over a strawman here, to try to discredit me. Selective hyperskepticism at work again.)

22 --> Furthermore, by Dawkins' testimony we have 40+ and 60+ generations in his published 1986 cases, in a context where he stated [ch 3 BW, evidently] that the initial BASIC implementation took 1/2 hour to run:
The exact time taken by the computer to reach the target doesn't matter. If you want to know, it completed the whole exercise for me, the first time, while I was out to lunch. It took about half an hour. (Computer enthusiasts may think this unduly slow. The reason is that the program was written in BASIC, a sort of computer baby-talk. When I rewrote it in Pascal, it took 11 seconds.)
23 --> Even with a 1986 PC doing BASIC -- even a BEEB with a 6502 running at a 1 - 5 MHz or so clock rate, much less a PC or a Mac -- an hour of processing time to run 100 or so generations indicates that a LOT of processing was going on in each generation. [ . . . ]

kairosfocus
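A minimal sketch of the mask-and-metric bookkeeping described in points 16 - 17 above, only to illustrate how little code explicit letter-wise locking needs. The population of 100 and the one-letter-per-copy variation rule are assumptions, not a reconstruction of Dawkins' program:

#include <cstdlib>
#include <cstring>
#include <ctime>
#include <iostream>

int main() {
    const char* target = "methinks it is like a weasel";
    const char* alpha  = "abcdefghijklmnopqrstuvwxyz ";
    const int   len = 28, aLen = 27, pop = 100;
    std::srand(unsigned(std::time(0)));

    char champ[len + 1] = {0};
    int  mask[len];                        // 1 = free to vary, 0 = locked on target
    for (int i = 0; i < len; ++i) {
        champ[i] = alpha[std::rand() % aLen];
        mask[i]  = (champ[i] == target[i]) ? 0 : 1;
    }

    int gen = 0;
    while (true) {
        int dist = 0;                       // distance metric: count of 1s in the mask
        for (int i = 0; i < len; ++i) dist += mask[i];
        if (dist == 0) break;               // all letters locked: target reached
        ++gen;

        char best[len + 1]; int bestDist = dist;
        std::memcpy(best, champ, len + 1);
        for (int p = 0; p < pop; ++p) {     // build and score the new population
            char trial[len + 1];
            std::memcpy(trial, champ, len + 1);
            int pos = std::rand() % len;
            if (mask[pos] == 1)             // locked letters are never touched
                trial[pos] = alpha[std::rand() % aLen];
            int d = 0;
            for (int i = 0; i < len; ++i) if (trial[i] != target[i]) ++d;
            if (d < bestDist) { bestDist = d; std::memcpy(best, trial, len + 1); }
        }
        std::memcpy(champ, best, len + 1);
        for (int i = 0; i < len; ++i) mask[i] = (champ[i] == target[i]) ? 0 : 1;
        if (gen % 10 == 0) std::cout << gen << ": " << champ << std::endl;
    }
    std::cout << "Reached target in " << gen << " generations." << std::endl;
    return 0;
}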
March 14, 2009, 02:45 AM PDT
Onlookers: The selective hyperskepticism and the refusal to attend to material information already long since in evidence continue, as do the sad concomitants of such intellectual bondage. I will comment on JT, re:
[451] It would be a lot of extra programming to require that a correct letter could not change back. You would have to keep track of correct letters, and also guarantee that the remaining incorrect letters at arbitrary locations each had an equal chance of being selected. Why would Dawkins have gone to all that extra trouble to achieve something that nearly happens on its own?

[456] Maybe someone's already said this, but the latching issue does not seem peripheral. Those who say letters are being latched are implying that design is taking place in that attribute of the process, when in fact it results purely from population dynamics. And this speaks directly to the whole evolution-creation debate.

[462] To actually latch correct letters into place would involve additional code to randomly select from the remaining letters at arbitrary locations. And given this extra effort for latching, and the fact it wouldn't accomplish much anyway (over what the standard algorithm does), it's certainly believable if Dawkins said he didn't do it.
1 --> According to Dawkins, as long since cited in 346, point 1, from Ch 3 BW:
We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.
2 --> Got that, JT? Repeat: The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase. Then, in the context of addressing the Hoylean challenge [ch 3 BW], Dawkins shows that he KNOWS the implications of a large config space search:
What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed.[Cite is courtesy Wiki article on the Weasel program]
3 --> This directly and immediately implies that:
(i) Dawkins knows at the outset that, even for his toy example, if functionality is imposed as a search criterion from the outset, the search is infeasible -- yes, infeasible; thus that

(ii) the 1986 algorithm THEREFORE rewards closeness to target INDEPENDENT OF FUNCTIONALITY, i.e. in the absence of a requirement of functionality; and that, to do so,

(iii) it already, by direct statement, inspects each "nonsense phrase" by comparison with the target statement.
4 --> So, whether or not the program explicitly latches letters, it is already outside the parameters of the alleged BLIND Watchmaker. For natural selection requires rewarding advantageous FUNCTION, not non-function. Weasel, by Dawkins' statement, and in a context where he is ducking the implications of requiring that the search compare functional configs arrived at by chance, does not require you to be on even the shoreline of an island of functionality.

5 --> Furthermore, Weasel is by Dawkins' direct statement already a designed, targeted, foresighted search, regardless of explicit latching or not.

6 --> THUS, THE MATERIAL ISSUE IS SETTLED AT THE OUTSET, FROM DAWKINS' WORDS: Weasel is irrelevant to Hoyle's challenge, and to that of the later ID thinkers; this, in a context where it is known and acknowledged by Dawkins that if a realistic or even a toy functionality requirement is imposed, the feasibility of cumulative search -- hill-climbing -- vanishes.

7 --> This is of course very directly relevant to the significance of the FSCI concept. That is, functionally specific, complex information is known to be resistant to random-walk-based searches that require functionality to be present before hill-climbing warmer/colder algorithms can be applied. (In this case, the Weasel sentence is beyond the reasonable reach of a PC circa 1986, even though we are dealing with "only" 10^40 or so configs. FSCI as a rule of thumb becomes relevant when we are dealing with 10^150 to 10^301 configs, with the upper end of that range being a practical threshold for exhausting the full search resources of the cosmos, as those resources could not search as much as 1 in 10^150 of the config space. Observed life forms of minimal complexity for independent living start with 300 - 500 k DNA elements, i.e. at ~ 10^180,000+ configs.)

8 --> So, BEFORE we deal with any specific questions, we already know that we are dealing with red herrings and strawmen, right there from BW ch 3 on in 1986. [ . . . ]

kairosfocus
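For concreteness, a quick back-of-envelope check of the "10^40 or so configs" figure mentioned in point 7; the assumed trial rate of one billion random phrases per second is purely illustrative, not a figure from the thread:

#include <cmath>
#include <iostream>

int main() {
    double space = std::pow(27.0, 28.0);      // size of the 28-letter, 27-character space: ~1.2e40
    double rate  = 1.0e9;                     // assumed: 1e9 random phrases tried per second
    double secondsPerYear = 3.156e7;
    std::cout << "27^28 ~ " << space << " configurations" << std::endl;
    std::cout << "Expected wait for single-step selection at that rate: ~"
              << space / rate / secondsPerYear << " years" << std::endl;  // on the order of 1e23 years
    return 0;
}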
March 14, 2009, 02:43 AM PDT
George at 458: I'll take that as a punt.

Upright BiPed
March 14, 2009, 01:31 AM PDT
JT wrote:
"And to actually latch correct letters into place would involve additional code to randomly select from the remaining letters at arbitrary locations."
I'm uncertain that latching would increase code and complexity. (Just to note, I thought your implementation was quite efficient considering the need to avoid explicit latching.) It seems to me that since the problem being solved is a guided semi-random search toward a fixed target, implementing a latching algorithm is potentially the most efficient way to solve it. Your implementation of Weasel makes significant effort to avoid the need to explicitly latch. This is taking the long way around, as the resulting data is merely consigned to the bit bucket. With an inverse goal of avoiding the need to generate extraneous populations (since only a single offspring is chosen in each generation anyway), implementing fixed behavior requires less code, less memory, less CPU, and results in a dramatically quicker overall search:

#include <iostream>
#include <cstdlib>   // rand, srand
#include <ctime>     // time
using namespace std;

int main(void)
{
  const char* target = "methinks it is like a weasel";
  const int tLen = 28;
  const char* alpha = "abcdefghijklmnopqrstuvwxyz ";
  const int aLen = 27;
  const float mrate = 0.05f;
  char mutator[tLen+1] = {0};

  srand( unsigned( time(0) ) );

  // start from a random 28-character string
  for( int i = 0; i < tLen; i++ )
    mutator[i] = alpha[rand()%aLen];

  int iGen = 0, iters = 0;
  while( ++iGen )
  {
    int matches = 0;
    for( int i = 0; i < tLen; i++, iters++ )
    {
      if( mutator[i] != target[i] )
      {
        // only letters not yet correct are eligible to mutate (explicit latching)
        if( float(rand()) / (RAND_MAX + 1.0f) < mrate )
          mutator[i] = alpha[rand()%aLen];
      }
      else
        matches++;
    }
    // occasional mutation at any position, correct or not, so latching is imperfect
    if( float(rand()) / (RAND_MAX + 1.0f) < mrate*mrate )
      mutator[rand()%tLen] = alpha[rand()%aLen];
    if( iGen % 10 == 0 ) cout << mutator << endl;
    if( matches == tLen )
      break;
  }
  cout << mutator << endl;
  cout << "--------------" << endl;
  cout << "Target reached" << endl;
  cout << "Generations: " << iGen << endl;
  cout << "Total population: " << iGen << endl;
  cout << "Iterations: " << iters << endl;
  cout << "--------------" << endl;
  return 0;
}

The output is identical to Weasel in virtually every respect. A string of gibberish makes a steady walk toward the target while also demonstrating imperfect latching. Internally, like Weasel, it makes a direct comparison to the target on every generation. Here the latching is explicit instead of implicit, while still allowing for negative mutations. Thanks for posting your code sample above. I enjoyed going through it.

Apollos
March 14, 2009, 01:18 AM PDT
Scratch that - nature would be massively parallel. It wouldn’t have to look at each member of a population in sequence.
Exactly! How many parallel opportunities are there in a litre of fermenting bacterial broth?

Arthur Smith
March 14, 2009, 01:04 AM PDT
GLF:
Could you perhaps explain that to KF?
JT: I bet he understands it.
Sorry, I lied - I don't know if he understands or not. KF, to oblige GLF: Suppose some mutation occurs in one string that makes the letter at position N correct in that string, and it's also the highest-scoring string for that iteration. If the population is 500, then 500 copies are made of that winning string. If, during some subsequent iteration, a mutation occurs in one of those 500 strings that makes the letter at position N wrong in that string, there are still 499 copies of the string where the letter at position N is correct. So chances are highly likely that the winning string for that iteration will have the correct letter at position N. That's why you have very long runs without correct letters changing. And to actually latch correct letters into place would involve additional code to randomly select from the remaining letters at arbitrary locations. And given this extra effort for latching, and the fact it wouldn't accomplish much anyway (over what the standard algorithm does), it's certainly believable if Dawkins said he didn't do it.

JT
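A minimal sketch of the population-dynamics point above: a plain copy-mutate-select Weasel that also counts how often the generation winner loses a letter that was already correct. Population 500 and a 5% per-letter mutation rate follow the figures in this exchange; everything else is an illustrative assumption:

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>

// Number of positions where the candidate matches the target.
int score(const std::string& s, const std::string& t) {
    int n = 0;
    for (size_t i = 0; i < t.size(); ++i) if (s[i] == t[i]) ++n;
    return n;
}

int main() {
    const std::string target = "methinks it is like a weasel";
    const std::string alpha  = "abcdefghijklmnopqrstuvwxyz ";
    const int pop = 500;
    const double mrate = 0.05;
    std::srand(unsigned(std::time(0)));

    std::string champ(target.size(), ' ');
    for (size_t i = 0; i < champ.size(); ++i) champ[i] = alpha[std::rand() % alpha.size()];

    int gen = 0, reversions = 0;
    while (champ != target) {
        ++gen;
        std::string best = champ;
        for (int p = 0; p < pop; ++p) {                // copy the champion, mutate every copy freely
            std::string child = champ;
            for (size_t i = 0; i < child.size(); ++i)
                if (double(std::rand()) / RAND_MAX < mrate)
                    child[i] = alpha[std::rand() % alpha.size()];
            if (score(child, target) > score(best, target)) best = child;
        }
        for (size_t i = 0; i < champ.size(); ++i)      // did the new winner lose a correct letter?
            if (champ[i] == target[i] && best[i] != target[i]) ++reversions;
        champ = best;
    }
    std::cout << "Generations: " << gen
              << "  Reversions in the winner: " << reversions << std::endl;
    return 0;
}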
March 13, 2009, 10:23 PM PDT
[460]: Scratch that - nature would be massively parallel. It wouldn't have to look at each member of a population in sequence.

JT
March 13, 2009, 07:06 PM PDT
Actually though, I wasn't sure what would happen with a million string compares on each iteration. I guess if nature could efficiently distinguish that many states, it would seem to imply it was pretty smart.

JT
March 13, 2009, 06:54 PM PDT
GLF:
JT: Virtual latching occurs because even if a correct letter is mutated to something incorrect in one individual, you still have 499 individuals with the letter correct at that location (if your population is 500 for example). By "virtual", do you mean that what might appear to be latching behaviour would be seen most of the time but in fact there was no actual latching going on (due to the fact that a stepback was always possible)?
Yep.
Could you perhaps explain that to KF?
I bet he understands it. Seems like you're fighting the Black Knight on this one, though. BTW, I said in 455 I would modify my implementation along the lines you suggested to Atom. I have written the code, but haven't debugged it completely yet. However, I won't devote any more time to it right now unless someone's interested. It should be obvious that if your search string is "Me thinks it is a __________", for example, and you can then plug in any arbitrary noun, that will drastically decrease the search time.

JT
March 13, 2009, 06:36 PM PDT
JT
Virtual latching occurs because even if a correct letter is mutated to something incorrect in one individual, you still have 499 individuals with the letter correct at that location (if your population is 500 for example).
By "virtual", do you mean that what might appear to be latching behaviour would be seen most of the time but in fact there was no actual latching going on (due to the fact that a stepback was always possible)? I believe you do. Appearances can be deceptive, eh KF? Could you perhaps explain that to KF?

Upright Biped
George, if you’ve stopped playing schoolyard with KF, I have a little challenge for you. Can you falsify these?
No, I can't. http://www.citebase.org/abstract?id=oai%3Apubmedcentral.gov%3A12 Citations: 0 It appears the vast majority of references to the paper you post are either on creationist sites, this site or blogs with people crowing about how ID is now science. From the citations index I have access to it appears this paper has a very low citation rate. Therefore I can only conclude it has not exactly set the scientific world on fire. The premise seems absurd in any case
Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.
Allow me to try all possible combinations of physical units in turn so I can check that they cannot program algorithmic or cybernetic function. Yeah, right....

George L Farquhar
March 13, 2009, 02:57 PM PDT
Kairosfocus

All that was needed has long since been posted, cf. e.g. 346 - 7, with detailed description of the latching sub-issue in 364 - 5.

No, you see, the thing is that I'm asking you where you are getting your information from.

1: Dawkins created Weasel. It is his example.
2: Dawkins specified how Weasel works. It's clearly laid out in The Blind Watchmaker.
3: The issue of "is each letter fixed when found" has been raised with Dawkins. He has explicitly said that latching was not part of his example. To paraphrase, Dawkins says he "never even considered 'latching' correct letters, as that would have been at variance with the biological principles he was attempting to communicate." http://tinyurl.com/c9nl6b

All you have to do is provide a quote from Dawkins that says Weasel works how you say it does.
There you will see both Dawkins’ statement on targetting [the issue in the main, and which demonstrates that he ducked the real Hoylean challenge]
Again, you attempt to confuse the issue with side issues. The issue is not targeting. The issue is not a "Hoylean challenge". The issue is that, as noted in my comment 246, you said
Weasel sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
The issue is clear. Dawkins says it works one way. You say it does not. Yet who better than Dawkins would know how his own example works?

and the published tabulation which plainly manifests letter-latching, the secondary issue that you tried to make the focus in order to wriggle out of your self-laid trap.

For probably the tenth time: the "published tabulation" is only a tiny representation of the population. Kairosfocus, the earth appears to be orbited by the sun. If you did not know better, that's how it seems to be. The "published tabulation" does indeed appear to show latching behaviour. I would be surprised if it did not. I refer you back to R0b's comment at 390, where he notes that "in the 50/5% case whose history I showed in [383], it happens an average of ~7000 times per run." How can you ignore such a devastating response? Perhaps that is in fact why you are pretending it does not exist? A correctly implemented Weasel proves you wrong. So, a question for you. You will accept that the printed tables you refer to represent only a fraction of the population? If so: how do you know that none of the members of that population (that were not printed) had stepped back from a correct to an incorrect letter in that generation? I am not trying to wriggle out of anything.

In short, sadly, your selectively hyperskeptical reductio has long since reached absurdum.
Whatever.
Associated habitual resort to ad hominems, straw men and quote mining under various guises simply manifests and underscores the problem.
Your resorting to throwing anything you can to confuse the issue has been noted.
Please, deal with the issue. (Mere money is not my primary or even secondary interest.)
Truth is my primary interest. My secondary interest is in making you honestly represent your opponents arguments.
As to FSCI, in the form of Functional Sequence Complexity [all digital data arrays can with some frameworking be represented as strings] -- as long since noted but ignored, onlookers -- it has long since passed peer review and has in fact been published under Trevors, Abel, et al., including Durston et al.'s 2007 paper that published 35 measurements.
Irrelevant.
Indeed, going back to the OOL researchers of the 1970's to 80's, the concept is there, only needing a descriptive term. Dembski used the term CSI; others have focussed on the functionality side of specification.
Irrelevant.
As to the apparent suggestion that I am scared of being peer reviewed, I simply have no interest in the game. What needed to be peer reviewed has already been so reviewed, starting 30 - 40 years and more ago.
Irrelevant.
The problem is that the implications are being suppressed by the Lewontinian a priori materialists.
Irrelevant.
read and weep, GLF, then break off “the chains of mental slavery” and — by God’s grace — turn towards the true light of day:
Don't you mean "by the designer's grace"?
Sorry GLF, but the evo mat monopoly on education and public discussion is busted. And UD has had a lot to do with that busting. Kudos to UD, all the warts and flaws notwithstanding.
So, you are both being suppressed and free to discuss at the same time?
Hence, your side’s desperate damage control efforts.
My side? If by that you mean "the side interested in honestly representing their opponents' arguments" then yes. And your last few posts are more about desperate damage control than anything.
But in recent days, these have publicly reduced themselves to absurdity for all to see.
What is absurd is that you claim to know how Dawkins intended Weasel to work better than Dawkins himself. What is absurd is the fact that all you have to do to prove your point is provide a quote from Dawkins confirming your position.
I am just providing the correctives, building on sterling work by Simon Greenleaf and many others too numerous to mention just now.
By misrepresenting your opponents' work and refusing to consider you may be in error? How is it that you can continue to claim Weasel latches when Dawkins has said it does not?

George L Farquhar
March 13, 2009, 02:35 PM PDT
Maybe someone's already said this, but the latching issue does not seem peripheral. Those who say letters are being latched are implying that design is taking place in that attribute of the process, when in fact it results purely from population dynamics. And this speaks directly to the whole evolution-creation debate.

JT
March 13, 2009, 11:39 AM PDT
GLF wrote [443] [to ATOM]:
Also, what are your thoughts on algorithms that "chase moving targets"? For example, we know "weasel" evolves towards a fixed target. In your new version, the letters will not be fixed in place once found. Therefore this leaves the door open to mutating the "target" phrase on the fly. This would not be possible if each letter was fixed when found, as the letter itself could become wrong in subsequent rounds if the target phrase evolves away from that already-found letter. It seems to me that would be a somewhat more realistic example, as it is obvious that the environment represents a moving target as it is by no means static, and as such there is no one "correct target phrase".
I was intrigued by this idea, and though presumably it could already have been done, it seems a fairly straightforward augmentation, so I will add it to the algorithm I wrote [described in 439]. So instead of the target being "methinks it is a weasel", now the target would be "any valid English sentence." This would correlate to "any viable biological organism." Of course, nature would not have to be sentient as such for it to passively select an organism as viable. Either an organism works or it doesn't - (thus "selected" or not) merely by the constraints of reality itself. So "any valid English sentence" - that will have to be modelled by just some large set of English sentences. So you pass in some arbitrary text file and the program first finds the longest sentence (referencing some set of delimiters). That length is the length of our target as well. (Or we could have some maximum sentence length.) Then the file is scanned to form a list of all unique non-delimiter characters. (This is our "alphabet".) Then you run the process as always, except now check for the closest match to any sentence and preserve that. (Note: for a shorter sentence, it could be required that extra characters be blanks.) Of course, this doesn't address the issue of intermediate viability, but that can wait for a subsequent version.

JT
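A minimal sketch of just the fitness step described above: score a candidate string against the nearest sentence in a supplied set rather than a single fixed target. The three hard-coded sentences stand in for the parsed text file, and padding short sentences with blanks is one reading of the note about extra characters; both are assumptions:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Distance from a candidate to one padded target: count of mismatched positions.
int mismatches(const std::string& cand, const std::string& target) {
    int d = 0;
    for (size_t i = 0; i < cand.size(); ++i)
        if (cand[i] != target[i]) ++d;
    return d;
}

// Fitness step: lower is better; score against the nearest member of the set.
int distanceToNearest(const std::string& cand, const std::vector<std::string>& targets) {
    int best = int(cand.size()) + 1;
    for (const std::string& t : targets) {
        std::string padded = t;
        padded.resize(cand.size(), ' ');          // pad short sentences with blanks
        best = std::min(best, mismatches(cand, padded));
    }
    return best;
}

int main() {
    std::vector<std::string> targets = {          // stand-in for the parsed text file
        "methinks it is like a weasel",
        "to be or not to be",
        "all the world is a stage"
    };
    size_t maxLen = 0;
    for (const std::string& t : targets) maxLen = std::max(maxLen, t.size());

    std::string candidate(maxLen, ' ');           // a blank candidate, for illustration
    std::cout << "Distance to nearest target: "
              << distanceToNearest(candidate, targets) << std::endl;
    return 0;
}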
March 13, 2009, 11:26 AM PDT
I think we all realize that the Weasel controversy is ridiculous, but this thread has turned into an interesting exercise to see whether kairosfocus will admit an obvious mistake. To recap, kairosfocus describes Dawkins' algorithm thusly:
b –> Namely, he starts with the right number of letters, and then randomly changes the letters in the initial case [save for any that happen to be the right letter in the right place]. c –> After the random shifts, he tests for hits again, rewarding a “warmer” — but non-functional — configuration [ by preserving its successful letters. [Emphasis added]
He talks about explicit latching being the best explanation for Dawkins' output:
the most likely way circa 1986 is by straight partitioned search, by which one locks successful letters explicitly [Emphasis added]
and
The best explanation for that is latching, full latching, not partial latching. [Emphasis added]
And let's not forget that he defends his position by appealing to the authority of peer-reviewed Marks and Dembski, who claim explicit latching. On all of the above he is wrong, along with Marks and Dembski. There's no evidence whatsoever that Dawkins mischaracterized his algorithm, and there's no reason for him to do so. A few of us even took the trouble of coding Dawkins' algorithm to show that it gives the output he reports. But to no avail -- kairosfocus will not admit his error. I'm reminded of the claim that Tom Schneider's evolutionary algorithm in ev performs worse than pure chance. That claim remains on record, from Dembski in an interview with the original poster in this thread, and from Marks in an interview with Casey Luskin. I guess the temptation to try covering up, instead of owning up, is a powerful one. Heaven knows we all do it.

R0b
March 13, 2009, 11:15 AM PDT
George, if you've stopped playing schoolyard with KF, I have a little challenge for you. Can you falsify these?
Testable hypotheses about FSC

What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses:

Null hypothesis #1: Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.

Null hypothesis #2: Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function.

Null hypothesis #3: Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function.

Null hypothesis #4: Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time.

We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. The fundamental contention inherent in our three subsets of sequence complexity proposed in this paper is this: without volitional agency assigning meaning to each configurable-switch-position symbol, algorithmic function and language will not occur. The same would be true in assigning meaning to each combinatorial syntax segment (programming module or word). Source and destination on either end of the channel must agree to these assigned meanings in a shared operational context. Chance and necessity cannot establish such a cybernetic coding/decoding scheme [71].
Upright BiPed
March 13, 2009, 07:42 AM PDT
So that is why you see long runs without correct letters changing back. It would be a lot of extra programming to require that a correct letter could not change back.

Correction: So that is why you see long runs without correct letters mutating. It would be a lot of extra programming to require that a correct letter could not mutate.

JT
March 13, 2009, 07:38 AM PDT
Virtual latching occurs because even if a correct letter is mutated to something incorrect in one individual, you still have 499 individuals with the letter correct at that location (if your population is 500 for example). So that is why you see long runs without correct letters changing back. It would be a lot of extra programming to require that a correct letter could not change back. You would have to keep track of correct letters, and also guarantee that the remaining incorrect letters at arbitrary locations each had an equal chance of being selected. Why would Dawkins have gone to all that extra trouble to achieve something that nearly happens on its own?

JT
March 13, 2009, 07:34 AM PDT
GLF: All that was needed has long since been posted, cf. e.g. 346 - 7, with detailed description of the latching sub-issue in 364 - 5. There you will see both Dawkins' statement on targeting [the issue in the main, and which demonstrates that he ducked the real Hoylean challenge] and the published tabulation which plainly manifests letter-latching, the secondary issue that you tried to make the focus in order to wriggle out of your self-laid trap. In short, sadly, your selectively hyperskeptical reductio has long since reached absurdum. Associated habitual resort to ad hominems, straw men and quote mining under various guises simply manifests and underscores the problem. Please, deal with the issue. (Mere money is not my primary or even secondary interest.)

As to FSCI, in the form of Functional Sequence Complexity [all digital data arrays can with some frameworking be represented as strings] -- as long since noted but ignored, onlookers -- it has long since passed peer review and has in fact been published under Trevors, Abel, et al., including Durston et al.'s 2007 paper that published 35 measurements. Indeed, going back to the OOL researchers of the 1970's to 80's, the concept is there, only needing a descriptive term. Dembski used the term CSI; others have focussed on the functionality side of specification. As to the apparent suggestion that I am scared of being peer reviewed, I simply have no interest in the game. What needed to be peer reviewed has already been so reviewed, starting 30 - 40 years and more ago. The problem is that the implications are being suppressed by the Lewontinian a priori materialists.

Read and weep, GLF, then break off "the chains of mental slavery" and -- by God's grace -- turn towards the true light of day:
Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [NY review of books, 1997. Now "officialised" by NAS, NSTA, NCSE, judge Jones etc etc . . . ]
THAT is what I am addressing. Sorry GLF, but the evo mat monopoly on education and public discussion is busted. And UD has had a lot to do with that busting. Kudos to UD, all the warts and flaws notwithstanding. Hence, your side's desperate damage control efforts. But in recent days, these have publicly reduced themselves to absurdity for all to see. I am just providing the correctives, building on sterling work by Simon Greenleaf and many others too numerous to mention just now. Enough for now, again. GEM of TKI

kairosfocus
March 13, 2009, 07:19 AM PDT
Dave,
If most mutations are neutral, why does it take billions of trillions of tries for the malaria parasite to find the three sequence changes that confer resistance to chloroquine?
Perhaps that particular area in the malaria genome has a lower mutation rate than other areas. I don't know if that's true, but until anyone studies it further (maybe they have), my speculation is just as valid as yours. In any case, even if you're right, all it shows is that in one particular gene in one particular organism, most mutations are deleterious, probably because it is a highly constrained area. Are you going to ignore the reams of literature documenting that synonymous substitutions vastly outpace nonsynonymous ones? Or that most of the eukaryotic genome is non-coding and thus mutations in it are neutral? Yes, I know about ENCODE, but just because something is transcribed doesn't mean it's functional. If you were to change your statement to "most mutations that have an effect on fitness are deleterious", of course I would agree with you.

Khan
March 13, 2009, 07:17 AM PDT
Kairosfocus,
It is now very plain for all to see, that GLF was not serious in putting up a US$ 100k “offer,” and that when he found his bluff called,
All you had to do to "call my bluff" was produce a quote from Richard Dawkins that said that the letters are latched once found. You could not do that. You have not done that. So, why don't you try meeting the terms of my challenge before saying I was bluffing?
23 –> BOTTOMLINE: Weasel utterly fails to deliver on substance. (And rhetorical impact without substance, is misleading at best.)
No, the bottom line is that you said the letters were latched, and yet have not produced the evidence to back it up. Even Richard Dawkins says they are not, and after all he should know, it's his example!
20 –> Going further, major body-plan level diversification, to make cell types, tissues, organs and to organise same, will credibly require 10’s to 100’s of millions of base pairs worth of information innovation BEFORE a working body plan will result. this is also well beyond the reasonable reach of chance + necessity.
And this relates how exactly to my original challenge? It does not. It's a smokescreen. You claimed that the letters are fixed in Dawkins' Weasel. You cannot back that up. You lose.
10 –> Also, in noting on the 1986 printoffs of runs of the original Weasel program and its underlying algorithm, Truman, Gitt, Marks- Dembski and I have all observed on one of its patently obvious features: letters, once they are right, on the evidence get latched in the output.
Please explain then how it appears that latching is taking place when considering the output, even when it's quite clear that it is not taking place due to the code itself? You cannot. You have avoided this question. You have pretended that all the objections raised to your position have not been raised. If the only way you can win is to ignore relevant objections then yes, you have indeed won. No wonder you do not want to submit your work on FSCI to peer review.

George L Farquhar
March 13, 2009, 06:46 AM PDT
13 --> The most likely way circa 1986 is by straight partitioned search, by which one locks successful letters explicitly; after all, success is "rewarded" by natural selection, is it not . . . ?
[NB: Cf. my T2 model of the algorithm in 346 - 347 above, which -- as I corrected myself yesterday -- relates not to Ch 3 in BW but to the printoff in the New Scientist article. (Of course, since Dawkins was trying to show how efficient his strategy of random search plus rewarding warmer outputs was, we can assume he selected from cases better than the average of 98 for partitioned search. That "mystery" clears up, at least for the reasonable-minded onlooker to whom this note is principally addressed.)]
14 --> However, as my T3 model shows, there is a subtler way to the same effective end. For:
a] with sufficiently low per-letter, per-member-of-a-generation probabilities of mutation [or, substantially equivalently, a forcing of a range of numbers of mutations],

b] with a sufficiently large population, and

c] with rewarding of mere closeness to target [which is the primary problem as already noted -- PLEASE OBSERVE THIS BEFORE RESORTING TO FURTHER QUOTE-MINING, GLF . . . ],

d] a model that does not EXPLICITLY partition and latch letters will, to high or very high probability, do just that. That is, on the ground, letter latching can be implicitly (so, more subtly) achieved.
15 --> Q: Did Mr Dawkins use T2 or T3 type approaches?

16 --> ANS: I will not feed further strawman games. I simply underscore that this is of no major consequence, as either internal approach would exhibit letter-latching to certainty or to high probability, once there was rewarding of non-functional but closer-to-target configurations.

17 --> And once there is that rewarding of non-function, the bigger and central question is being begged.

18 --> That is, Mr Dawkins' Weasel exercise fails to deliver on his claimed BLIND watchmaker. It is rhetorically effective but fundamentally specious. And this defect propagates to onward algorithms that may work in different ways, including in much the way that Genetic Algorithms do.

19 --> For the stated function in the case is a sentence, which would require on average 10^39 or so runs to get all at once. But that is only 27^28 ~ 1.2*10^40 configs in the relevant space.

20 --> A minimally functional genome of 300,000 or so characters specifies a config space of 4^300k, or ~ 9.9*10^180,617. This is so vastly beyond the 10^301 threshold, where the whole observed universe as a search engine could only sample less than 1 in 10^150 of the configs, that it underscores that OOL by chance + necessity only is utterly unlikely -- Sir Fred's material point.

20 --> Going further, major body-plan level diversification, to make cell types, tissues, organs and to organise same, will credibly require 10's to 100's of millions of base pairs' worth of information innovation BEFORE a working body plan will result. This is also well beyond the reasonable reach of chance + necessity.

21 --> But in both cases we KNOW that intelligent designers routinely produce FUNCTIONAL digital information on the relevant scales. [Just look at the latest versions of the Windows OS . . . ]

22 --> So, on inference to best, empirically anchored explanation, cell-based life and its diversity of major body plans and features strongly support an inference to design as their best explanation.

23 --> BOTTOM LINE: Weasel utterly fails to deliver on substance. (And rhetorical impact without substance is misleading at best.)
_________________

And back on our thread's issue: it is now very plain indeed that the exercises in selective hyperskepticism above show us all just how pernicious the hyper-skeptical mindset is. Correcting such destructive hyperskepticism, dear friends -- first checking its runaway dash to the cliffs, then turning it around and correcting it before it utterly destroys our civilisation -- is the REAL challenge.

GEM of TKI

kairosfocus
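A quick arithmetic check of the exponent in point 20, done in log space because the number itself is far too large for any floating-point type; purely an illustrative sketch:

#include <cmath>
#include <iostream>

int main() {
    double bases = 300000.0;
    double log10Space = bases * std::log10(4.0);          // ~180,617.99
    double mantissa = std::pow(10.0, log10Space - std::floor(log10Space));
    std::cout << "4^300000 ~ " << mantissa << "e+"
              << (long long)std::floor(log10Space) << std::endl;   // ~9.9e+180617
    return 0;
}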
March 13, 2009, 06:07 AM PDT
Onlookers (and Atom, Joseph, GLF, JT etc.): Much of the above -- sadly -- is beyond ridiculous. That is, it has passed from being a mere laughing matter into the zone of sadness, which calls for remedy. That remedy I have already offered, on the problem of runaway selective hyperskepticism. Now, some follow-up notes:

1 --> It is now very plain for all to see that GLF was not serious in putting up a US$ 100 k "offer," and that when he found his bluff called, he resorted to selective hyperskepticism to try to justify himself; unfortunately also resorting to various ad hominems along the way. (Those who use wagers or offers for arguments often intend to boast elsewhere that no one could take them up, so the other side can be dismissed. This thread shows just how plainly such an argument is a destructive fallacy.)

2 --> The net result above has been to show, very strongly, both the reduction to absurdity and the widely damaging, polarising, civilisation-ripping impact of selective hyperskepticism.

3 --> The latter being my main issue, and the thread's stated issue, we have seen enough to take heed of the implications and to take prudent action to protect our civilisation. (Yesterday, I suggested some remedial steps.)

4 --> Now too, GLF seems to be locked in on the US$ 100 k issue, and evidently wishes to impose hurdle after hurdle on what should be obvious; apparently hoping to avoid a stiff payout to AiG and EIL and/or to preserve some shreds of his obviously hoped-for knock-down boast. (As just pointed out, mere money is the LEAST of my concerns. I leave the matter of payout to his conscience. Certainly EIL and AiG could use the help! [BTW, I am affiliated with neither in any wise.])

5 --> Now, by now the interested will know that Mr Dawkins, in 1986, was trying to overturn the issue raised by the late, great Sir Fred Hoyle -- pardon a moment of shameless hero worship there [he richly deserves that and more!] -- among others, that . . .
a] the central OOL etc. challenge is not so much incremental improvements in biofunction, but

b] to get TO initial functionality,

c] per a massive information-generation challenge.
6 --> In the 1986 book, The Blind Watchmaker, Mr Dawkins presented Weasel as an update to the monkeys-at-a-typewriter story from the Victorian era, trying to show how a big info-generating job can be divided up into smaller steps and cumulatively achieved (with much higher resulting probability and plausibility), presumably based on chance variation plus some form of natural selection, climbing up the alleged easy slope of Mt Improbable.

7 --> Mr Dawkins therefore offered the Weasel package, noting en passant [but this is a qualification the significance of which will usually be missed by the typical reader . . . ] that "nonsense phrases" were being rewarded for closeness to the target phrase.

8 --> As I noted from the Dec thread that GLF quote-mined in his failed thread hijack attempt [just scroll up . . . ], as I noted repeatedly above [right from the outset], and as Atom has underscored, such a targeted search that rewards non-functional configs begs the question of getting TO the shores of an island of functionality.

9 --> That is, Sir Fred's challenge has been rhetorically ducked in a subtle way through a question-begging strawman, not cogently answered.

10 --> Also, in noting on the 1986 printoffs of runs of the original Weasel program and its underlying algorithm, Truman, Gitt, Marks-Dembski and I have all observed one of its patently obvious features: letters, once they are right, on the evidence get latched in the output.

11 --> That is a morally certain observational fact sustained over dozens of cases and sample points. (The selectively hyper-credulous -- NB: the flip side of selective hyperskepticism about what you reject is that you MUST then also accept other things uncritically . . . -- absurdities indulged to try to pretend that maybe the samples as published are misleading in this regard simply show just how sound the observation is.)

12 --> The only remaining question, then [though, actually, I have long since addressed it at 346 - 7], is how we get to that "letter-latching" observation. [ . . . ]

kairosfocus
March 13, 2009, 06:04 AM PDT
Thanks Atom, sounds interesting. I'm researching my side too. I might have to pull on my coding boots too, it's been too long :)
Will functional islands ever be found in that case? Or what if we simply choose a random reward matrix, how does that affect the search?
I doubt we're the first people to think about this, so I'm going to look into what work has already been done. I'll post if I find anything relevant.

George L Farquhar
March 12, 2009, 03:59 PM PDT
GLF, Yes, the source code was available in the version that I sent to the EIL (there was a link at the bottom, much like my Ev Ware GUI), but I think in trying to format the page they lost the link. (Dr. Marks isn't a CSS/XHTML guru, so sometimes his HTML editor will chomp on bits of my screens, no pun intended.) Anyway, as for your question about co-evolving targets, it would make the problem more difficult for the search. It is codable; maybe I'll code a GUI for that if I can round up some funding in the future. As for me, I think the more interesting problem is seeing what happens when the reward matrix (fitness function) is independent of the target. In other words, what happens when the fitness function doesn't reward based on proximity to targets? (In all our examples, from Weasel to Ev, we assume that the closer you are to a functional state, the higher the reward.) But this matrix is just like any other and can be randomized as well... what if the reward matrix is organized based on something other than proximity, like simple ascending order of cells? Will functional islands ever be found in that case? Or what if we simply choose a random reward matrix; how does that affect the search? I am pretty sure I know the answers to these questions, but making it explicit in a GUI will be illuminating to some people. Stay tuned.

Atom
March 12, 2009, 03:27 PM PDT
Atom, Thanks for the update. I look forward to playing with the new versions when available. Out of interest, will you be making the source code available? Also, what are your thoughts on algorithms that "chase moving targets"? For example, we know "weasel" evolves towards a fixed target. In your new version, the letters will not be fixed in place once found. Therefore this leaves the door open to mutating the "target" phrase on the fly. This would not be possible if each letter was fixed when found, as the letter itself could become wrong in subsequent rounds if the target phrase evolves away from that already-found letter. It seems to me that would be a somewhat more realistic example, as it is obvious that the environment represents a moving target as it is by no means static, and as such there is no one "correct target phrase". I wonder even if the environment and the mutating phrase could enter equilibrium and forever "chase" each other, never quite winning or losing. As well as the environment shaping organisms, organisms also feed back and shape the environment (the oxygen crisis in prehistory, for example). Interesting stuff, to be sure.

George L Farquhar
March 12, 2009, 03:04 PM PDT
GLF @ 441, I work during the week, so I don't have time to touch "fun" code until the weekends, usually. As for this algorithm, it will be in addition to the other three. I'll need to make a few changes and some additional cases, etc., so it isn't just commenting out a line or two. But I'll let you guys know when it's up. Should be within the next two weeks or so.

Atom
March 12, 2009, 02:48 PM PDT
Atom @ 370
I’ll code an additional algorithm when I get a chance and you’ll see that nothing changes (except the amount of time it takes to reach the target will be slightly longer on average.)
Have you had a chance to do that yet? I would have guessed that you simply need to comment out a line or two rather than recode the algorithm :)

George L Farquhar
March 12, 2009, 02:23 PM PDT
Source can be viewed here.

JT
March 12, 2009, 01:50 PM PDT
DaveScot [431]:
To add a bit of realism to the algorithm when a mutation occurs that doesn’t move the string closer to the goal one of the correct letters should be randomized as a penalty. Obviously the target would then never be reached even in trillions of generations as the penalties would quite reliably overwhelm the successes.
I just tried it. It did not make an astronomical difference:

Population size: 500
Mutation rate: 5%
without: 54, 173, 189, 449, 78, 55, 74, 102, 140, 216
with: 547, 5600, 538, 2124, 555, 3100, 197, 9834, 1888, 2786

JT
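A minimal sketch of one way the penalty described in [431] could be wired into a copy-mutate-select Weasel loop. The population of 500 and 5% mutation rate match the figures quoted above; treating "no improvement this generation" as the trigger for the penalty is an assumption about what counts as a mutation that does not move the string closer:

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <vector>

// Number of positions where the candidate matches the target.
int matches(const std::string& s, const std::string& t) {
    int n = 0;
    for (size_t i = 0; i < t.size(); ++i) if (s[i] == t[i]) ++n;
    return n;
}

int main() {
    const std::string target = "methinks it is like a weasel";
    const std::string alpha  = "abcdefghijklmnopqrstuvwxyz ";
    const int pop = 500;
    const double mrate = 0.05;
    std::srand(unsigned(std::time(0)));

    std::string champ(target.size(), ' ');
    for (size_t i = 0; i < champ.size(); ++i) champ[i] = alpha[std::rand() % alpha.size()];

    int gen = 0;
    while (champ != target) {
        ++gen;
        std::string best = champ;
        for (int p = 0; p < pop; ++p) {                    // copy and mutate the champion
            std::string child = champ;
            for (size_t i = 0; i < child.size(); ++i)
                if (double(std::rand()) / RAND_MAX < mrate)
                    child[i] = alpha[std::rand() % alpha.size()];
            if (matches(child, target) > matches(best, target)) best = child;
        }
        if (matches(best, target) > matches(champ, target)) {
            champ = best;                                  // accept the improvement
        } else {                                           // penalty: randomize a correct letter
            std::vector<size_t> correct;
            for (size_t i = 0; i < champ.size(); ++i)
                if (champ[i] == target[i]) correct.push_back(i);
            if (!correct.empty())
                champ[correct[std::rand() % correct.size()]] = alpha[std::rand() % alpha.size()];
        }
    }
    std::cout << "Reached target in " << gen << " generations." << std::endl;
    return 0;
}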
March 12, 2009, 01:27 PM PDT