Uncommon Descent Serving The Intelligent Design Community

Oh, you mean, there really is a bias in academe against common sense and rational thought?


Jonathan Haidt recently decided, for some reason, to point out the obvious to a group of American academics: that they are overwhelmingly modern materialist statists (liberals).

He polled his audience at the San Antonio Convention Center, starting by asking how many considered themselves politically liberal. A sea of hands appeared, and Dr. Haidt estimated that liberals made up 80 percent of the 1,000 psychologists in the ballroom. When he asked for centrists and libertarians, he spotted fewer than three dozen hands. And then, when he asked for conservatives, he counted a grand total of three.

“This is a statistically impossible lack of diversity,” Dr. Haidt concluded, noting polls showing that 40 percent of Americans are conservative and 20 percent are liberal. In his speech and in an interview, Dr. Haidt argued that social psychologists are a “tribal-moral community” united by “sacred values” that hinder research and damage their credibility — and blind them to the hostile climate they’ve created for non-liberals.

Why anyone would bother pointing that out, I don’t know. It’s not a bias against conservatives, anyway; it’s a bias against rationality, which they don’t believe in. Our brains, remember, are shaped for fitness, not for truth. Indeed, these are the very people who channel Barney Rubble and Fred Flintstone for insights into human psychology, and anyone who doubts the validity of such “research” should just shut up and pay their taxes, right?

Well, his talk attracted the attention of John Tierney at the New York Times (February 7, 2011), who drew exactly the right conclusion (for modern statists and Darwinists):

“If a group circles around sacred values, they will evolve into a tribal-moral community,” he said. “They’ll embrace science whenever it supports their sacred values, but they’ll ditch it or distort it as soon as it threatens a sacred value.” It’s easy for social scientists to observe this process in other communities, like the fundamentalist Christians who embrace “intelligent design” while rejecting Darwinism.

[ … ]

For a tribal-moral community, the social psychologists in Dr. Haidt’s audience seemed refreshingly receptive to his argument. Some said he overstated how liberal the field is, but many agreed it should welcome more ideological diversity. A few even endorsed his call for a new affirmative-action goal: a membership that’s 10 percent conservative by 2020. The society’s executive committee didn’t endorse Dr. Haidt’s numerical goal, but it did vote to put a statement on the group’s home page welcoming psychologists with “diverse perspectives.” It also made a change on the “Diversity Initiatives” page — a two-letter correction of what it called a grammatical glitch, although others might see it as more of a Freudian slip.

I have friends here in Canada who make bets on when the Times will finally, mercifully shut down.

Meanwhile, Megan McArdle weighs in at Atlantic Monthly, driving home the shame:

It is just my impression, but I think what conservatives want most of all is simply recognition that they are being shut out. It is a double indignity to be discriminated against, and then be told unctuously that your group’s underrepresentation is proof that almost none of you are as good as “us”. Haidt notes that his correspondence with conservative students (anonymously) “reminded him of closeted gay students in the 1980s”:

He quoted — anonymously — from their e-mails describing how they hid their feelings when colleagues made political small talk and jokes predicated on the assumption that everyone was a liberal. “I consider myself very middle-of-the-road politically: a social liberal but fiscal conservative. Nonetheless, I avoid the topic of politics around work,” one student wrote. “Given what I’ve read of the literature, I am certain any research I conducted in political psychology would provide contrary findings and, therefore, go unpublished. Although I think I could make a substantial contribution to the knowledge base, and would be excited to do so, I will not.”
Beyond that, mostly they would like academics to be conscious of the bias, and try to counter it where possible. As the quote above suggests, this isn’t just for the benefit of conservatives, either.

All together now, class, spell W-I-M-P.

Someone else writes:

I have a good friend–I won't out him here, though–who is a tenured faculty member in a premier humanities department at a leading East Coast university, and he's . . . a conservative! How did he slip by the PC police? Simple: he kept his head down in graduate school and as a junior faculty member, practicing self-censorship and publishing boring journal articles that said little or nothing. When he finally came up for tenure review, he told his closest friend on the faculty, sotto voce, "Actually, I'm a Republican." His faculty friend, similarly sotto voce, said, "Really? I'm a Republican, too!"

That’s the scandalous state of things in American universities today. Here and there–Hillsdale College, George Mason Law School, Ashland University come to mind–the administration is able to hire first rate conservative scholars at below market rates because they are actively discriminated against at probably 90 percent of American colleges and universities. Other universities will tolerate a token conservative, but having a second conservative in a department is beyond the pale.

All together now, class, spell the plural, W-I-M-P-S.

Oh, heck, let me be honest, not snarky: Nothing stops the Yanks from freeing themselves from this garbage unless my British mentor is right, and I hope he isn't: Americans are happy to be serfs, but they don't like being portrayed in the media as hillbillies.

So whenever the zeroes they all gladly pay taxes for threaten to do just that, they promptly cave.

If I die tonight, I want this on the record: If I couldn’t be a Canuck and managed to bear the unbearable sorrow, I’d be a true Yankee hillbilly and proud of it. Do you think we Canucks have so far stood off the Sharia lawfare crowd, with all their money and threats, by worrying much what smarmy (and sometimes vicious) tax burdens think?

Comments
KS: You said:
A typewriter set up so that typing any key will hit a Play is simply moving up the required information pre-loading one level.
And I agree. When I say:
Bear in mind I am not suggesting an informational free lunch, nor even an information gain, merely that the language structure is fundamental to search success.
My point is that one cannot disprove the theory of evolution by positing an unachievable hypothetical search challenge. The most you can hope to accomplish is to push the "required information pre-loading up one level," as you observe. The net effect of this is to refocus the question back on the original DNA system. Since everything points to an information-rich starting point, "survival of the fittest" and "mutations" seem unnecessary as information creators in the post-OOL world. As to where the typewriter came from: now that's the question, isn't it?
JLS
February 20, 2011 at 03:23 PM PDT
JLS: A typewriter set up so that typing any key will hit a Play is simply moving up the required information pre-loading one level. Where do you think such a wonderful, functionally specific and complex typewriter would come from, and on what credible basis? [That is the problem with the idea that the laws of nature had C-chemistry, cell-based life using DNA and proteins written in; you have just converted the laws of our cosmos into a sophisticated computer program running on nature as the physical instantiating machine. And to move up the next level, where we have a quasi-infinite array of programs generating sub-cosmoses, simply points to the next level: how do you get a cosmos bakery that searches the local hot zone that finely (our own cosmos' parameters are very finely balanced on dozens of aspects), instead of making the cosmic equivalent of burned hockey pucks and half-baked masses of ill-mixed dough?] GEM of TKI
kairosfocus
February 20, 2011 at 12:44 PM PDT
KS: I don't take issue with the general thrust of your comments, and I am definitely not asserting an "informational free lunch". I do continue to believe, however, that the chosen language is fundamental to the probability of search success.

An example is the monkey theorem with a special typewriter/language. Assume that Shakespeare wrote 26 plays. Assume also that the typewriter had 26 keys, so that each key corresponded to one of the plays. With a single stroke the entire contents of a play could be communicated. This construct results in 100% search success for functionality and a 1 in 26 chance of writing Hamlet on the first try. Bear in mind I am not suggesting an informational free lunch, nor even an information gain, merely that the language structure is fundamental to search success.

A language in this sense is a system of signs for encoding and decoding information. To my way of thinking, the "sign" serves as a pointer to an information library. Given that the "source" and "sink" share the same library, all that is required for communication is the exchange of a pointer (sign).

I view the theory of evolution through this lens. Transcription errors (mutations) in the communication link can alter the pointer(s) and reveal previously unseen features in the information library. Note: this process of mutations can easily mimic an evolution from simple-appearing creatures to more complex ones without any information gain. It all depends on the underlying structure of the information and the nature of the mutation/error-correcting process.

I agree that "lucky noise" isn't the creator of the information library; that is not a reasonable assumption. I do, however, allow for this luck to have a role in the unveiling of the genome. It all hinges on the starting point (original DNA) and the underlying information structures (database design). These two dictate the language of biology.
JLS
February 20, 2011 at 09:48 AM PDT
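JLS's special-typewriter construct is easy to check numerically. The sketch below is my own illustration in Python (the 27-character alphabet and 100-character target for the standard monkey setup are assumptions for the sake of comparison, not figures from the thread):

```python
import math

# Standard infinite-monkey setup: 27 symbols (26 letters + space),
# typed letter by letter, aiming at a modest 100-character functional target.
letter_space = 27 ** 100

# JLS's special typewriter: 26 keys, each emitting one complete play.
# Every keystroke is functional; exactly one of the 26 yields Hamlet.
play_space = 26

print(f"letter-by-letter space: ~10^{math.log10(letter_space):.0f}")   # ~10^143
print(f"special typewriter: P(functional) = 26/26, P(Hamlet) = 1/{play_space}")
```

The point the thread keeps circling: the 1-in-26 odds are bought by pre-loading each key with an entire play, so the search has been paid for in advance.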
JLS: You could hook up a random text generator to the full corpus of the free ebooks published by Gutenberg, and the result would still be the same. The problem is that once we pass a reasonable threshold of complexity, linguistically functional text will be so isolated in the space of possible configs that a random-walk search will simply not be able to find anything that functions.

Similarly, such a process will predictably fail to write functional code for execution by a processor. That is because the functional code will be specific and deeply isolated in the space of all possible configs. For just 1,000-bit strings (125 bytes), we are talking already of 10^301 possible configs, where the search resources of our whole cosmos run out at 10^150 possible states, even when we very generously use the Planck time, which is about 10^20 times faster than strong nuclear force interactions. That is the real bite in the infinite monkeys result, and it is why that result is at the foundation of thermodynamics. For the same reason, it is why there is no informational free lunch to be had.

It is only because the notion has been subtly planted that somehow lucky noise can give us an informational free lunch, and the assumption has been made that no intelligence is possible to explain OOL etc., that gives the false impression that somehow code can write itself and find machinery to execute itself out of the resources of some warm little electrified pond with phosphoric salts in it. Not too long from now, people are going to shake their heads and wonder how people of our time could believe such patent absurdities, even as we shake our heads today at those who still propose perpetual motion machines. Then they will tut-tut on how we allowed science to be taken captive by materialistic ideologues. GEM of TKI
kairosfocus
February 19, 2011 at 11:28 PM PDT
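The figures in kairosfocus's comment can be verified directly: a 1,000-bit string has 2^1000 ≈ 1.07 × 10^301 configurations, against the cited bound of roughly 10^150 states the observable cosmos could sample. A quick check (my own, in Python):

```python
import math

bits = 1000                          # a 1,000-bit (125-byte) string
log_configs = bits * math.log10(2)   # log10 of 2^1000
log_cosmos = 150                     # log10 of the cosmic-state bound cited

print(f"2^1000 ≈ 10^{log_configs:.2f}")                    # ≈ 10^301.03
print(f"excess over the bound: 10^{log_configs - log_cosmos:.0f}")
```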
KS: The monkey theorem illustrates the challenge of locating a functional configuration in a sea of non-functional alternatives.
The probability of a monkey exactly typing a complete work such as Shakespeare's Hamlet is so tiny that the chance of it occurring during a period of time of the order of the age of the universe is minuscule, but not zero.
I accept this but suggest that this result is simply an artifact of the language (English) and the specificity of the target (Hamlet). The important relationship is the ratio of total search space to the functionally specific space. With different language assumptions the ratio can vary from one to infinity.

To illustrate: defining functional information (FI) as decisional or prescribing, one bit of FI selects between one of two states (or symbols or letters). Two bits select between four letters, and so on. Assume as a thought experiment that original DNA had a complete genetic template for each of the possible species (assume 1,048,576). Further assume that a 20-bit binary code had a one-to-one map between each state and one of these species. This sets up a situation where we have a 2^20 total search space and a corresponding 20 bits of functionally specific space, for a ratio of 1:1. With this situation we can communicate 20 bits of FSCI and be assured of a functional result.

Just to carry the thought experiment one step further, assume we arrange it so that binary code "zero" selects for the simplest creature and all ones (1,048,575) selects for man. Assuming we seed the original code at zero (simplest creature), how long will it take for incremental mutations to evolve the creature to man?

This concept can be extended and examined by assuming different architectures of information. The above is completely flat, with poor total storage efficiency but high information content per bit of functional specificity. With additional hierarchy one can improve total storage but lower functional specificity. This all leads me to think that the monkey theorem doesn't shed much light, and that a rigorous definition of the "language" is fundamental. Have I missed something?
JLS
February 19, 2011 at 09:53 PM PDT
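JLS's closing question ("how long will it take for incremental mutations to evolve the creature to man?") can be made concrete in one admittedly toy reading, entirely my own: flip one randomly chosen bit of the 20-bit code per step, and keep a flip only if it moves the code toward all-ones (i.e. with selection). Under that assumption the walk completes in coupon-collector time; without the selection filter, the expected wait to hit a specific vertex of the 20-cube is on the order of 2^20 steps.

```python
import random

N_BITS = 20
TARGET = (1 << N_BITS) - 1  # all ones: 1,048,575

def guided_walk(seed=1):
    """Flip one random bit per step; keep the flip only if it sets a
    previously unset bit (selection toward the all-ones target)."""
    rng = random.Random(seed)
    state, steps = 0, 0
    while state != TARGET:
        flip = 1 << rng.randrange(N_BITS)
        if bin(state | flip).count("1") > bin(state).count("1"):
            state |= flip   # accept: a new bit was set
        steps += 1
    return steps

print(guided_walk())  # coupon-collector-like number of trials
```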
JLS: Actually, such has been done, and the answer is that a config space of order about 10^50 or so is searchable, and strings of letters [ASCII] of up to about 20 or so that are functional text have been found. Look up the Infinite Monkeys theorem. 20 ASCII characters is about 140 bits; not relevant to any serious computational exercise. The FSCI limit is not a strictly biological limit; it is an information limit. (The pretence that unless it has been shown that FSCI is unreachable by specifically biological means, it is presumably reachable by those means, is a rhetorical device, not a serious scientific proposition. The only known means of getting to FSCI is intelligence, which is precisely the problem for those who do not want to see FSCI as a signature of intelligence; but the infinite-monkeys-type search space analysis is strong support for the empirical observation.) GEM of TKI
kairosfocus
February 18, 2011 at 02:27 PM PDT
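The arithmetic behind those two numbers checks out: 20 characters at 7 bits each (ASCII) is 140 bits, and 2^140 ≈ 10^42, comfortably inside the ~10^50 space the comment calls searchable. A quick check (my own, in Python):

```python
import math

chars = 20
bits = chars * 7                    # 7-bit ASCII per character
log_space = bits * math.log10(2)    # log10 of 2^140

print(f"{chars} ASCII chars = {bits} bits ≈ 10^{log_space:.1f} configs")
print(f"inside the ~10^50 searchable bound: {log_space < 50}")
```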
Kf: Thanks for the feedback. If you would indulge a basic question: you frame the issue as a resource-bound search that must produce some amount of functionally specified information in order to approximate what we observe in nature. If the boundary exceeds the limits of the cosmos, then we can assume a false hypothesis; thus design becomes a consideration. Wouldn't it be better to frame the question in terms of what minimum starting point of FSCI is needed for various search algorithms to accomplish what we observe? Obviously, if one assumes a rich FSCI library as a starting point, finding a robust island of functionality is no problem. An additional advantage is that if a minimum can be discovered, it may be testable against OOL work. Thanks again; this blog is a great resource.
JLS
February 18, 2011 at 10:00 AM PDT
JLS: Actually, a few proteins [avg 300 AA] put us well beyond the threshold where the search capacity of the cosmos is swamped. A minimally complex functional life form turns out to be surprisingly complex. The minimal realistic DNA complement looks like 100 - 1,000 k bases [and the lower end are basically parasitic, i.e., they are too small to be first life], and that is 2 - 3 orders of magnitude beyond the 1,000-bit point where the observed cosmos is not big enough. That is why OOL is so important. Once it is reasonably clear that not only is there no idea, but no idea of where to get the idea, much less the evidence, then we see design is a serious contender. And if design of life is on the table, there is no reason to revert to anything else to account for the 10 - 100+ million new bases needed to make for new body plans. GEM of TKI
kairosfocus
February 17, 2011 at 04:00 PM PDT
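The "2 - 3 orders of magnitude" claim is straightforward to check: at 2 bits of raw capacity per DNA base (4 possible bases), 100 k to 1,000 k bases is 200,000 to 2,000,000 bits, i.e. 200x to 2,000x the 1,000-bit threshold. A quick check (my own, in Python):

```python
# 4 possible bases per position = 2 bits of raw capacity per base
THRESHOLD_BITS = 1_000

for kbases in (100, 1_000):          # the minimal-genome range cited above
    bits = kbases * 1_000 * 2
    ratio = bits / THRESHOLD_BITS
    print(f"{kbases}k bases = {bits:,} bits = {ratio:,.0f}x the threshold")
```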
kairosfocus, thanks for the link above. Powerfully written. It seems obvious that the question resolves to whether "Functionally Specific Complex Information" was a prerequisite for, and present in, first life. Why haven't we seen more work in this direction? For example, in just the past few days I have come across an article identifying human DNA in a bacterium, and the fact that a sand flea has more genes (130,000) than a human. We seem to have assumed that original DNA was primitive compared to the present. Am I correct on this point?
JLS
February 17, 2011 at 02:50 PM PDT
KF at 179 ... a very interesting article.
Upright BiPed
February 17, 2011 at 01:41 PM PDT
F/N: Mrs O'Leary has made a great catch that aptly sums up much of the issue on OOL, here. (I note it makes an interesting philosophical case that turns Dawkins' infinite regress of complexity argument on its head. That is of course not a scientific argument, but pursuit of proof is no respecter of disciplinary boundaries.) This clip on OOL is interesting: ___________ >> In Dawkins' own words: What Science has now achieved is an emancipation from that impulse to attribute these things to a creator... It was a supreme achievement of the human intellect to realize there is a better explanation ... that these things can come about by purely natural causes ... we understand essentially how life came into being.20 (from the Dawkins-Lennox debate) "We understand essentially how life came into being"?! – Who understands? Who is "we"? Is it Dr. Stuart Kauffman? "Anyone who tells you that he or she knows how life started ... is a fool or a knave." 21 Is it Dr. Robert Shapiro? "The weakest point is our lack of understanding of the origin of life. No evidence remains that we know of to explain the steps that started life here, billions of years ago." 22 Is it Dr. George Whitesides? "Most chemists believe as I do that life emerged spontaneously from mixtures of chemicals in the prebiotic earth. How? I have no idea... On the basis of all chemistry I know, it seems astonishingly improbable." Is it Dr. G. Cairns-Smith? "Is it any wonder that [many scientists] find the origin of life to be utterly perplexing?" 23 Is it Dr. Paul Davies? "Many investigators feel uneasy about stating in public that the origin of life is a mystery, even though behind closed doors they freely admit they are baffled ... the problem of how and where life began is one of the great out-standing mysteries of science." Is it Dr. Richard Dawkins? 
Here is how Dawkins responded to questions about the Origin of Life during an interview with Ben Stein in the film Expelled: No Intelligence Allowed: Stein: How did it start? Dawkins: Nobody knows how it started, we know the kind of event that it must have been, we know the sort of event that must have happened for the origin of life. Stein: What was that? Dawkins: It was the origin of the first self replicating molecule. Stein: How did that happen? Dawkins: I told you I don't know. Stein: So you have no idea how it started? Dawkins: No, No, NOR DOES ANYONE ELSE. 24 “Nobody understands the origin of life, if they say they do, they are probably trying to fool you.” (Dr. Ken Nealson, microbiologist and co-chairman of the Committee on the Origin and Evolution of Life for the National Academy of Sciences) Nobody, including Professor Dawkins, has any idea "how life came into being!" It is only this self-deceiving view of reality that allows Dawkins to declare that science has emancipated him from the impulse to attribute the astounding wonders of the living world to a creator. There is no human intellect on the face of the earth that has achieved a "better explanation." We have shown conclusively that no chemist, physicist, biologist, nor any other type of scientist has any real clue how life could have come about through "natural processes." Scientists do not understand how life "essentially" (or non-essentially for that matter), came into being. Only a "fool," a "knave" could make such an outrageous claim. Perhaps it is time for these scientists to express not awe, not admiration ... but humility. >> _____________ Worth a thought or two.
kairosfocus
February 17, 2011 at 10:57 AM PDT
Mathgrrl, so ... you do not wish to establish the particulars of what is observed in symbol systems, and you will not allow yourself to be questioned on the subject. For instance, if I ask whether or not symbols and the objects they are mapped to are discrete, then that is not a question that you intend to answer. I wonder what it is about these observed facts you wish to avoid. Could it be that even the non-controversial observations regarding digitally-encoded information work to undermine your rail against ID? If that is the case, then you are certainly not alone. Materialists often absolutely refuse to even discuss the topic sans their assumptions. By that I mean, a simple walk through the collectively observed facts, withholding all conclusions from either side, is quite often more than can be tolerated. In fact, I do believe that this intolerance was the basis for your original comment. I asked a completely intelligible question without adding a single controversial assumption whatsoever, and your response was "Could you please rephrase to avoid loading them with what appear to be your assumptions". You see, you weren't objecting to my assumptions (because there weren't any); you were objecting to the untainted observations themselves. Of course, despite the convenient diversions to follow, the facts of the matter remain. What are we to do with the observed fact that meaning has been instantiated into matter (long before we humans came along and "invented" symbol systems)?
Upright BiPed
February 17, 2011 at 08:07 AM PDT
MathGrrl speaking of 'modeling' reality, let's look at reality itself and see if Sanford's (Mendel's Accountant or Schneider's (ev) is more faithful to what reality is actually telling us; Random Mutations Destroy Information - Perry Marshall - video http://www.metacafe.com/watch/4023143/ Inside the Human Genome: A Case for Non-Intelligent Design - Pg. 57 By John C. Avise Excerpt: "Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens." http://books.google.com/books?id=M1PRvkPBKfQC&pg=PA57&lpg=PA57&dq=human+75,000+different+disease-causing+mutations&source=bl&ots=gkjosjq030&sig=gAU5AfzMehArJYinSxb2EMaDL94&hl=en&ei=kbDqS_SQLYS8lQfLpJ2cBA&sa=X&oi=book_result&ct=result&resnum=6&ved=0CCMQ6AEwBQ#v=onepage&q=human%2075%2C000%20different%20disease-causing%20mutations&f=false I went to the mutation database website and found: HGMD®: Now celebrating our 100,000 mutation milestone! http://www.biobase-international.com/pages/index.php?id=hgmddatabase I really question their use of the word "celebrating". This following study confirmed the detrimental mutation rate for humans, of 100 to 300 per generation, estimated by John Sanford in his book 'Genetic Entropy' in 2005: Human mutation rate revealed: August 2009 Every time human DNA is passed from one generation to the next it accumulates 100–200 new mutations, according to a DNA-sequencing analysis of the Y chromosome. (Of note: this number is derived after "compensatory mutations") http://www.nature.com/news/2009/090827/full/news.2009.864.html Waiting Longer for Two Mutations - Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. 
I then wrote that 'for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years' (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless "using their model" gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. http://www.discovery.org/a/9461 The Frailty of the Darwinian Hypothesis "The net effect of genetic drift in such (vertebrate) populations is “to encourage the fixation of mildly deleterious mutations and discourage the promotion of beneficial mutations,” http://www.evolutionnews.org/2009/07/the_frailty_of_the_darwinian_h.html#more High genomic deleterious mutation rates in hominids Excerpt: Furthermore, the level of selective constraint in hominid protein-coding sequences is atypically (unusually) low. A large number of slightly deleterious mutations may therefore have become fixed in hominid lineages. http://www.nature.com/nature/journal/v397/n6717/abs/397344a0.html High Frequency of Cryptic Deleterious Mutations in Caenorhabditis elegans ( Esther K. Davies, Andrew D. Peters, Peter D. Keightley) "In fitness assays, only about 4 percent of the deleterious mutations fixed in each line were detectable. The remaining 96 percent, though cryptic, are significant for mutation load...the presence of a large class of mildly deleterious mutations can never be ruled out." 
http://www.sciencemag.org/cgi/content/abstract/285/5434/1748 "The likelihood of developing two binding sites in a protein complex would be the square of the probability of developing one: a double CCC (chloroquine complexity cluster), 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the entire world in the past 4 billion years, so the odds are against a single event of this variety (just 2 binding sites being generated by accident) in the history of life. It is biologically unreasonable." Michael J. Behe PhD. (from page 146 of his book "Edge of Evolution") The GS (genetic selection) Principle - David L. Abel - 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. 
No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.bioscience.org/2009/v14/af/3426/fulltext.htm etc., etc. Perhaps, MathGrrl, you might want to address a few of these questions?
bornagain77
February 17, 2011 at 07:59 AM PDT
MathGrrl, I look forward to reading your peer-reviewed refutation of the Dembski-Marks paper; until then I really don't care to address your superfluous 'molehill' objections, especially since you refuse to honestly address kairosfocus's 'mountain' objections (not to mention the few objections I brought forth).
bornagain77
February 17, 2011 at 07:43 AM PDT
kairosfocus, I am not up to date with the current abiogenesis literature, but your comments have piqued my interest. If I find any simulations, I'll let you know.
MathGrrl
February 17, 2011 at 07:33 AM PDT
bornagain77, Once again, the paper does not support your claim that ev is goal directed. I have provided links to the description of ev and the source code itself. Please reference those sources to show exactly how ev can be construed to be goal directed. If you wish to continue to reference the Bio Complexity paper as well, please first address the flaw I found in a cursory reading.
MathGrrl
February 17, 2011 at 07:29 AM PDT
MG: Apology acknowledged. My objection to claimed simulations of evolution of that ilk was, and is, that ev etc. are already in intelligently set-up target zones when they begin. They may model some varieties or aspects of micro-evo [but note my concern on gradual degradation and embrittlement], but they beg the big questions on macro-evo, starting with the root of the tree of life and going onward to accounting for the source of major body plans. GEM of TKI
kairosfocus
February 17, 2011 at 07:29 AM PDT
I must say I found the conclusion a bit bizarre; take a look at what they are saying:
The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search.
and:
Schneider [16] claims that ev demonstrates that naturally occurring genetic systems gain information by evolutionary processes ...
and then:
Our results show that, contrary to these claims, ev does not demonstrate “that biological information…can rapidly appear in genetic control systems subjected to replication, mutation, and selection” [16]. We show this by demonstrating that there are at least five sources of active information in ev.
Now take a look at this:
2. The Hamming Oracle [13]. When some offspring are correctly announced as more fit than others [27], external knowledge is being applied to the search and active information is introduced. ... we are being told with respect to the solution whether we are getting “colder” or “warmer”. ... 4. Optimization by Mutation. This process discards mutations with low fitness and propagates those with high fitness.
What they appear to be saying is: because genetic algorithms use a fitness function that is not binary (i.e. not just 'yes, you are fit' or 'no, you are not') but instead gives higher-fitness individuals a greater probability of reproducing, ev is not representative of biological evolution because external knowledge is applied. If you have NO criteria for assessing whether an individual is fitter or not, or, in the case of a targeted search, whether the individual is closer to or at the target, then you can't even perform a search. If you took D&M's criticisms on board and created a search algorithm with no criteria for success, and no mutation or any other method of moving about in the search space, you wouldn't have any kind of search algorithm at all! Recall this bit:
The success of ev is largely due to active information introduced by the Hamming oracle ... It is not due to the evolutionary algorithm used to perform the search.
One thing that DEFINES a GA is the use of graduated fitness evaluations, BECAUSE this is what appears to occur in biology. A Hamming oracle is part of an evolutionary algorithm; you can't claim that an algorithm doesn't work because it relies on part of the algorithm working!
As far as ev can be viewed as a model for biological processes in nature, it provides little evidence for the ability of a Darwinian search to generate new information. Rather, it demonstrates that preexisting sources of information can be re-used and exploited, with varying degrees of efficiency, by a suitably designed search process, biased computation structure, and tuned parameter set.
ev models selection, mutation and replication; D&M criticise it because they claim that selection injects information and so it does not demonstrate biological information generation in action. Could their mistake perhaps be that they view the information in the search space as an invalid source of novelty, or perhaps as something that already exists, when the claim from biology is that it is finding something different, outside the search space? What do evolutionary algorithms do? They explore search spaces. Novelty is just functional areas of a search space that haven't, or have only just, been discovered. It's a baffling conclusion to be sure!DrBot
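The point DrBot makes above — that a graduated fitness evaluation is part of what defines a GA — can be illustrated with a minimal sketch. Everything here is hypothetical for illustration (an 8-bit target and a simple Hamming-distance fitness), not ev's actual genome, perceptron, or parameters:

```python
import random

random.seed(0)
TARGET = [0, 1, 1, 0, 1, 0, 0, 1]  # hypothetical bit-string target

def fitness(genome):
    """Graduated fitness: number of positions matching the target,
    i.e. length minus Hamming distance -- a Hamming-oracle evaluation."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit independently with the given probability
    return [1 - b if random.random() < rate else b for b in genome]

def evolve(pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return pop[0]
        # graduated selection: mutated copies of the fitter half
        # replace the less fit half
        pop = pop[:pop_size // 2]
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

With a binary yes/no oracle instead of this graduated one, the sort in `evolve` would have nothing to rank by and the population could not move toward the target at all — which is DrBot's point about removing the "oracle" removing the search.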
February 17, 2011 at 07:28 AM PDT
MathGrrl, you say ev is not goal directed, and yet the peer-reviewed paper I cited says that ev is a goal-directed 'search algorithm' that 'mines active information'. From the abstract:

Search algorithms mine active information from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle and a perceptron structure that predisposes the search towards its target. The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently.

And MathGrrl, exactly why do you get so excited about this ev search algorithm, which is shown to be less efficient than a standard 'hill climbing' algorithm that is used to efficiently solve a limited class of problems in engineering?

In computer science we recognize the algorithmic principle described by Darwin - the linear accumulation of small changes through random variation - as hill climbing, more specifically random mutation hill climbing. However, we also recognize that hill climbing is the simplest possible form of optimization and is known to work well only on a limited class of problems. - Watson R.A., 2006, Compositional Evolution, MIT Press, pg. 272

MathGrrl, let's say we get really honest with what unfettered random mutations can really do in reality and open up the operating system itself to random mutations:

Random Mutations? Accounting for Variations - Dr. David Berlinski - video: http://www.youtube.com/watch?v=aW2GkDkimkE

A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA - David J D'Onofrio, Gary An - Jan. 2010. Excerpt: It is also important to note that attempting to reprogram a cell's operations by manipulating its components (mutations) is akin to attempting to reprogram a computer by manipulating the bits on the hard drive without fully understanding the context of the operating system. (T)he idea of redirecting cellular behavior by manipulating molecular switches may be fundamentally flawed; that concept is predicated on a simplistic view of cellular computing and control. Rather, (it) may be more fruitful to attempt to manipulate cells by changing their external inputs: in general, the majority of daily functions of a computer are achieved not through reprogramming, but rather the varied inputs the computer receives through its user interface and connections to other machines. http://www.tbiomed.com/content/7/1/3

MathGrrl, if you ever decide to be honest about what neo-Darwinian evolution can really do in reality, instead of 'propaganda programs' such as weasel and ev, here is the proper computer program that is faithful to the task:

Using Computer Simulation to Understand Mutation Accumulation Dynamics and Genetic Load. Excerpt: We apply a biologically realistic forward-time population genetics program to study human mutation accumulation under a wide-range of circumstances. Using realistic estimates for the relevant biological parameters, we investigate the rate of mutation accumulation, the distribution of the fitness effects of the accumulating mutations, and the overall effect on mean genotypic fitness. Our numerical simulations consistently show that deleterious mutations accumulate linearly across a large portion of the relevant parameter space. http://bioinformatics.cau.edu.cn/lecture/chinaproof.pdf

MENDEL'S ACCOUNTANT: J. SANFORD†, J. BAUMGARDNER‡, W. BREWER§, P. GIBSON¶, AND W. REMINE - http://mendelsaccount.sourceforge.net - http://www.scpe.org/vols/vol08/no2/SCPE_8_2_02.pdf bornagain77
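The Watson passage quoted above describes Darwinian accumulation as random mutation hill climbing, and the vivisection paper claims a plain stochastic hill climber mines the Hamming oracle more efficiently than ev's evolutionary algorithm. That climber can be sketched as follows; the 16-bit target, seed, and query budget are hypothetical illustrations, not values from ev:

```python
import random

random.seed(1)
TARGET = [0] * 12 + [1] * 4  # hypothetical binding-site-like target

def hamming_fitness(genome):
    # the Hamming oracle: number of positions matching the fixed target
    return sum(g == t for g, t in zip(genome, TARGET))

def hill_climb(max_queries=10000):
    """Random-mutation hill climbing: flip one randomly chosen bit
    and keep the change only if fitness does not decrease."""
    genome = [random.randint(0, 1) for _ in TARGET]
    queries = 0
    while hamming_fitness(genome) < len(TARGET) and queries < max_queries:
        candidate = genome[:]
        candidate[random.randrange(len(candidate))] ^= 1
        queries += 1
        if hamming_fitness(candidate) >= hamming_fitness(genome):
            genome = candidate
    return genome, queries

genome, queries = hill_climb()
print(queries, "oracle queries to reach the target")
```

Because a single bit flip changes the Hamming fitness by exactly one, every accepted move is an improvement, so the climber converges in far fewer oracle queries than a population-based search typically needs — which is the paper's efficiency comparison in miniature.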
February 17, 2011 at 07:18 AM PDT
kairosfocus,
For, at no point did I do what you claimed I did; you put words in my mouth that do not belong there.
I just reviewed the thread and found the post where I mixed up my conversations with you and bornagain77. Indeed, you did not claim that ev or Tierra were goal directed. I apologize for my mistake. So, since you don't make that claim, will you be joining the argument on my side? ;-)MathGrrl
February 17, 2011 at 06:32 AM PDT
Joseph,
Here is the paper: A Vivisection of the ev Computer Organism: Identifying Sources of Active Information
Thank you for the link. I read the paper last night and it does not support the claim that the ev program is "goal directed." The closest it comes is a discussion of a Hamming Oracle, but there is a fatal flaw in that section. On the third page of the paper, the authors state:
In the search for the binding sites, the target sequences are fixed at the beginning of the search. The weights, bias, and remaining genome sequence are all allowed to vary.
This is not correct. Dr. Schneider's description of ev and the code itself make it clear that the target sequences coevolve with the rest of the genome, including the recognizer components. The only feature fixed for each run is the (randomly selected) location of each target, and even that can be eliminated without changing the results. Therefore, as I've maintained in this thread, neither ev nor Tierra is a "goal directed" simulation. Lost in this discussion is the most interesting aspect of ev: Dr. Schneider wrote ev to check the results of his PhD thesis on the generation of information in real biological organisms. Using only simple evolutionary mechanisms, ev demonstrated exactly the same ability to generate Shannon information as Dr. Schneider observed in the lab. That's very strong supporting evidence that the mechanisms have been correctly identified.MathGrrl
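The Shannon-information measure at issue here is, in Schneider's work, Rsequence: for DNA, the maximum 2 bits per position minus the observed entropy at that position, summed over the aligned sites. A minimal sketch with made-up aligned sites (illustrative only, not ev output):

```python
from collections import Counter
from math import log2

# hypothetical aligned binding sites (not from an actual ev run)
sites = ["ACGTA", "ACGTT", "ACGAA", "ACGTC"]

def r_sequence(sites):
    """Schneider-style R_sequence: sum over positions of
    2 bits (the DNA maximum) minus the observed entropy H_i."""
    total = 0.0
    for col in zip(*sites):
        counts = Counter(col)
        n = len(col)
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        total += 2.0 - h
    return total

print(round(r_sequence(sites), 3), "bits")  # → 7.689 bits
```

Fully conserved positions contribute the full 2 bits; variable positions contribute less, so the measure rises as selection sharpens the sites — the quantity ev tracks as it runs.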
February 17, 2011 at 06:24 AM PDT
MG: Pardon, but much more was at stake than you acknowledge. For, at no point did I do what you claimed I did; you put words in my mouth that do not belong there. Indeed, your remark just above materially misrepresents the objective situation and why I asked you to address the matter. GEM of TKIkairosfocus
February 17, 2011 at 06:22 AM PDT
Upright BiPed,
We can certainly explore whatever equivocation “is likely to slip in”, but first we should establish the observation regarding symbols, or else, we might not recognize equivocation from obfuscation. Discussing the observation itself seems to be what you wish to avoid, so lets go to it. Symbols and the things they are mapped to are discreet, are they not?
As I said before, I'm not going to play the Socratic game with you. If and when you decide to clearly state your position and claims, I will be more than happy to engage in a mutually respectful discussion.MathGrrl
February 17, 2011 at 06:04 AM PDT
kairosfocus,
1: Kindly, do please address the civility-strawman matter at 139, which also seems to have been developing with UB.
Pointing out an incorrect statement is not inherently uncivil. The claim was made that ev and Tierra are "goal directed" simulations. The simple fact is that they are not. Anyone is free to check the documentation for those two programs to confirm this.MathGrrl
February 17, 2011 at 06:02 AM PDT
Ah yes, thanks Pedant. :) Pedantry has its place, and my twice mistaken spelling of dicrete is one of them.Upright BiPed
February 17, 2011 at 05:59 AM PDT
Yes, thanks, Upright BiPed. I understand completely, although I still think that the correct spelling is discrete. (They don't call me Pedant for nothing.)Pedant
February 17, 2011 at 05:03 AM PDT
Pedant, "Dot-Dash" is a dicreet symbol mapped to the letter "A" in the English alphabet.Upright BiPed
February 16, 2011 at 02:31 PM PDT
Upright BiPed, when you asked,
Symbols and the things they are mapped to are discreet, are they not?
did you mean to say "symbols are discrete? And if that's what you meant to say, would you clarify what you were asking? Can you give examples of symbols that are discrete vs symbols that are not?Pedant
February 16, 2011 at 02:05 PM PDT
F/N 2: Marks, Dembski et al paper on ev, conclusion:
___________
>> CONCLUSIONS
The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search. Indeed, other algorithms are shown to mine active information more efficiently from the knowledge sources provided by ev [13]. Schneider [16] claims that ev demonstrates that naturally occurring genetic systems gain information by evolutionary processes and that "information gain can occur by punctuated equilibrium". Our results show that, contrary to these claims, ev does not demonstrate "that biological information...can rapidly appear in genetic control systems subjected to replication, mutation, and selection" [16]. We show this by demonstrating that there are at least five sources of active information in ev.
1. The perceptron structure. The perceptron structure is predisposed to generating strings of ones sprinkled by zeros or strings of zeros sprinkled by ones. Since the binding site target is mostly zeros with a few ones, there is a greater predisposition to generate the target than if it were, for example, a set of ones and zeros produced by the flipping of a fair coin.
2. The Hamming Oracle [13]. When some offspring are correctly announced as more fit than others [27], external knowledge is being applied to the search and active information is introduced. As with the child's game, we are being told with respect to the solution whether we are getting "colder" or "warmer".
3. Repeated Queries. Two queries contain more information than one. Repeated queries can contribute active information [1,2,5].
4. Optimization by Mutation. This process discards mutations with low fitness and propagates those with high fitness. When the mutation rate is small, this process resembles a simple Markov birth process [27] that converges to the target [1,2,5].
5. Degree of Mutation. As seen in Figure 3, the degree of mutation for ev must be tuned to a band of workable values.
Our analysis highlights the importance of disclosing sources of knowledge in computer searches when measuring the ability of search mechanisms to generate novel information. As far as ev can be viewed as a model for biological processes in nature, it provides little evidence for the ability of a Darwinian search to generate new information. Rather, it demonstrates that preexisting sources of information can be re-used and exploited, with varying degrees of efficiency, by a suitably designed search process, biased computation structure, and tuned parameter set. This confirms that the conservation of information principle, as manifest in the No Free Lunch Theorems, is "very useful, especially in light of some of the sometimes-outrageous claims that had been made of specific optimization algorithms" [4]. >>
______________
MG, what is your rebuttal? And, remember, once you have disposed of M, D et al, my own objections still remain, in light of the claims made by the authors of the programs. GEM of TKIkairosfocus
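The "active information" the quoted conclusion turns on has a short definition in Marks and Dembski's framework: if blind search succeeds with probability p, the endogenous information is I_Ω = -log2(p), and a search that succeeds with probability q carries active information I_+ = log2(q/p). A toy illustration of the arithmetic (the 16-bit problem size and the q = 1 success rate are hypothetical, not figures from the ev analysis):

```python
from math import log2

L = 16                 # hypothetical problem size: a 16-bit target
p = 2.0 ** -L          # probability a single blind query hits the target

endogenous = -log2(p)  # I_Omega: intrinsic difficulty of the search, in bits

# Suppose an oracle-assisted search succeeds with probability q.
q = 1.0                # e.g. a Hamming-oracle climber that always succeeds
active = log2(q / p)   # I_+ = log2(q/p): bits attributed to search structure

print(endogenous, active)  # → 16.0 16.0
```

On this accounting, a search that always succeeds on a 16-bit problem has been handed all 16 bits by its structure and oracle — which is exactly the bookkeeping the paper applies to ev's perceptron and Hamming oracle.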
February 16, 2011 at 12:08 PM PDT
Oops :oops: bornagain77 (155) already posted the paper. :cool:Joseph
February 16, 2011 at 11:51 AM PDT