
ID Foundations, 17a: Footnotes on Conservation of Information, search across a space of possibilities, Active Information, Universal Plausibility/ Probability Bounds, guided search, drifting/ growing target zones/ islands of function, Kolmogorov complexity, etc.

(previous, here)

There has been a recent flurry of web commentary on design theory concepts linked to functionally specific, complex organisation and/or associated information (FSCO/I), introduced across the 1970s and into the 1980s by Orgel and Wicken et al. (As is documented here.)

This flurry seems to be connected to the announcement of an upcoming book by Meyer — it looks like attempts are being made to dismiss it before it comes out, through what has recently been tagged “noviews”: criticising, usually harshly, what one has not read, by way of a substitute for a genuine book review.

It will help to focus for a moment on the just linked ENV article, in which ID thinker William Dembski responds to such critics, in part:

[L]et me respond, making clear why criticisms by Felsenstein, Shallit, et al. don’t hold water.

There are two ways to see this. One would be for me to review my work on complex specified information (CSI), show why the concept is in fact coherent despite the criticisms by Felsenstein and others, indicate how this concept has since been strengthened by being formulated as a precise information measure, argue yet again why it is a reliable indicator of intelligence, show why natural selection faces certain probabilistic hurdles that impose serious limits on its creative potential for actual biological systems (e.g., protein folds, as in the research of Douglas Axe [Link added]), justify the probability bounds and the Fisherian model of statistical rationality that I use for design inferences, show how CSI as a criterion for detecting design is conceptually equivalent to information in the dual senses of Shannon and Kolmogorov, and finally characterize conservation of information within a standard information-theoretic framework. Much of this I have done in a paper titled “Specification: The Pattern That Signifies Intelligence” (2005) [link added] and in the final chapters of The Design of Life (2008).

But let’s leave aside this direct response to Felsenstein (to which neither he nor Shallit ever replied). The fact is that conservation of information has since been reconceptualized and significantly expanded in its scope and power through my subsequent joint work with Baylor engineer Robert Marks. Conservation of information, in the form that Felsenstein is still dealing with, is taken from my 2002 book No Free Lunch . . . .

[W]hat is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).

In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).

A lot of this pivots on conservation of information and the idea of search in a space of possibilities, so let us excerpt the second ENV article as well:

Conservation of information is a term with a short history. Biologist Peter Medawar used it in the 1980s to refer to mathematical and computational systems that are limited to producing logical consequences from a given set of axioms or starting points, and thus can create no novel information (everything in the consequences is already implicit in the starting points). His use of the term is the first that I know, though the idea he captured with it is much older. Note that he called it the “Law of Conservation of Information” (see his The Limits of Science, 1984).

Computer scientist Tom English, in a 1996 paper, also used the term conservation of information, though synonymously with the then recently proved results by Wolpert and Macready about No Free Lunch (NFL). In English’s version of NFL, “the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions.” As with Medawar’s form of conservation of information, information for English is not created from scratch but rather redistributed from existing sources.

Conservation of information, as the idea is being developed and gaining currency in the intelligent design community, is principally the work of Bob Marks and myself, along with several of Bob’s students at Baylor (see the publications page at www.evoinfo.org). Conservation of information, as we use the term, applies to search. Now search may seem like a fairly restricted topic. Unlike conservation of energy, which applies at all scales and dimensions of the universe, conservation of information, in focusing on search, may seem to have only limited physical significance. But in fact, conservation of information is deeply embedded in the fabric of nature, and the term does not misrepresent its own importance . . . .

Humans search for keys, and humans search for uncharted lands. But, as it turns out, nature is also quite capable of search. Go to Google and search on the term “evolutionary search,” and you’ll get quite a few hits. Evolution, according to some theoretical biologists, such as Stuart Kauffman, may properly be conceived as a search (see his book Investigations). Kauffman is not an ID guy, so there’s no human or human-like intelligence behind evolutionary search as far as he’s concerned. Nonetheless, for Kauffman, nature, in powering the evolutionary process, is engaged in a search through biological configuration space, searching for and finding ever-increasing orders of biological complexity and diversity . . . .

Evolutionary search is not confined to biology but also takes place inside computers. The field of evolutionary computing (which includes genetic algorithms) falls broadly under that area of mathematics known as operations research, whose principal focus is mathematical optimization. Mathematical optimization is about finding solutions to problems where the solutions admit varying and measurable degrees of goodness (optimality). Evolutionary computing fits this mold, seeking items in a search space that achieve a certain level of fitness. These are the optimal solutions. (By the way, the irony of doing a Google “search” on the target phrase “evolutionary search,” described in the previous paragraph, did not escape me. Google’s entire business is predicated on performing optimal searches, where optimality is gauged in terms of the link structure of the web. We live in an age of search!)

If the possibilities connected with search now seem greater to you than they have in the past, extending beyond humans to computers and biology in general, they may still seem limited in that physics appears to know nothing of search. But is this true? The physical world is life-permitting — its structure and laws allow (though they are far from necessitating) the existence of not just cellular life but also intelligent multicellular life. For the physical world to be life-permitting in this way, its laws and fundamental constants need to be configured in very precise ways. Moreover, it seems far from mandatory that those laws and constants had to take the precise form that they do. The universe itself, therefore, can be viewed as the solution to the problem of making life possible. But problem solving itself is a form of search, namely, finding the solution (among a range of candidates) to the problem . . . .

The fine-tuning of nature’s laws and constants that permits life to exist at all is not like this. It is a remarkable pattern and may properly be regarded as the solution to a search problem as well as a fundamental feature of nature, or what philosophers would call a natural kind, and not merely a human construct. Whether an intelligence is responsible for the success of this search is a separate question. The standard materialist line in response to such cosmological fine-tuning is to invoke multiple universes and view the success of this search as a selection effect: most searches ended without a life-permitting universe, but we happened to get lucky and live in a universe hospitable to life.

In any case, it’s possible to characterize search in a way that leaves the role of teleology and intelligence open without either presupposing them or deciding against them in advance. Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases . . . .

[T]he important issue, from a scientific vantage, is not how the search ended but the probability distribution under which the search was conducted.

So, we see that the issue of search in a space of possibilities can be pivotal for a fairly broad range of subjects, bridging from the world of Easter egg hunts to that of computing, to the world of life forms, and onward to the evident fine tuning of the observed cosmos and its potential invitation of a cosmological design inference.

That’s a pretty wide swath of issues.

However, the pivot of current debates is the design theory controversy linked to the world of life. Accordingly, Dembski focuses there, and it is worth pausing for a further clip so that we can see his logic (and not the irresponsible caricatures of it that are too often used to swarm down what he has had to say):

[I]nformation is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy). This has the effect of transforming probabilities into bits and of allowing them to be added (like money) rather than multiplied (like probabilities). Thus, a probability of one-eighths, which corresponds to tossing three heads in a row with a fair coin, corresponds to three bits, which is the negative logarithm to the base two of one-eighths.

Such a logarithmic transformation of probabilities is useful in communication theory, where what gets moved across communication channels is bits rather than probabilities and the drain on bandwidth is determined additively in terms of number of bits. Yet, for the purposes of this “Made Simple” paper, we can characterize information, as it relates to search, solely in terms of probabilities, also cashing out conservation of information purely probabilistically.
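The log transform described here is easy to check numerically; a minimal sketch of the coin-toss example:

```python
import math

def bits(p: float) -> float:
    """Information in bits associated with an event of probability p."""
    return -math.log2(p)

# Three heads in a row with a fair coin: p = 1/8 -> 3 bits.
p_three_heads = 0.5 ** 3
print(bits(p_three_heads))  # 3.0

# Multiplying probabilities corresponds to adding bits:
# P(A and B) = P(A) * P(B)  <->  bits(A and B) = bits(A) + bits(B)
print(bits(0.5 * 0.25) == bits(0.5) + bits(0.25))  # True
```

This additivity is exactly the money-like behaviour the excerpt describes.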

Probabilities, treated as information used to facilitate search, can be thought of in financial terms as a cost — an information cost. Think of it this way. Suppose there’s some event you want to have happen. If it’s certain to happen (i.e., has probability 1), then you own that event — it costs you nothing to make it happen. But suppose instead its probability of occurring is less than 1, let’s say some probability p. This probability then measures a cost to you of making the event happen. The more improbable the event (i.e., the smaller p), the greater the cost. Sometimes you can’t increase the probability of making the event occur all the way to 1, which would make it certain. Instead, you may have to settle for increasing the probability to q where q is less than 1 but greater than p. That increase, however, must also be paid for . . . . [However,] just as increasing your chances of winning a lottery by buying more tickets offers no real gain (it is not a long-term strategy for increasing the money in your pocket), so conservation of information says that increasing the probability of successful search requires additional informational resources that, once the cost of locating them is factored in, do nothing to make the original search easier . . . .

Conservation of information says that . . .  when we try to increase the probability of success of a search . . .   instead of becoming easier, [the search] remains as difficult as before or may even . . . become more difficult once additional underlying information costs, associated with improving the search and [which are] often hidden . . .  are factored in . . . .

The reason it’s called “conservation” of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbates the original search problem.

So, where does all of this leave us?

A useful way to take stock is an imaginary exchange based on many real exchanges of comments in and around UD, here clipping a recent addition to the IOSE Intro-Summary (which is also structured to capture an unfortunate attitude that is too common in exchanges on this subject):

__________

>>Q1: How then do search algorithms — such as genetic ones — so often succeed?

A1: Generally, by intelligently directed injection of active information. That is, information that enables searching guided by an understanding of the search space or the general or specific location of a target. (Also, cf. here. A so-called fitness function which more or less smoothly and reliably points uphill to superior performance, mapped unto a configuration space, implies just such guiding information and allows warmer/colder signals to guide hill-climbing. This or the equivalent, appears in many guises in the field of so-called evolutionary computing. As a rule of thumb, if you see a “blind” search that seemingly delivers an informational free lunch, look for an inadvertent or overlooked injection of active information. [[Cf. here, here.& here.]) In a simple example, the children’s party game, “treasure hunt,” would be next to impossible without a guidance, warmer/colder . . . hot . . . red hot. (Something that gives some sort of warmer/colder message on receiving a query, is an oracle.) The effect of such sets of successive warmer/colder oracular messages or similar devices, is to dramatically reduce the scope of search in a space of possibilities. Intelligently guided, constrained search, in short, can be quite effective. But this is designed, insight guided search, not blind search. From such, we can actually quantify the amount of active information injected, by comparing the reduction in degree of difficulty relative to a truly blind random search as a yardstick. And, we will see the remaining importance of the universal or solar system level probability or plausibility bound [[cf. Dembski and Abel, also discussion at ENV] which in this course will for practical purposes be 500 – 1,000 bits of information — as we saw above, i.e. these give us thresholds where the search is hard enough that design is a more reasonable approach or explanation. Of course, we need not do so explicitly, we may just look at the amount of active information involved.
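The quantification just mentioned can be sketched numerically. In the Dembski-Marks framework, a blind search with success probability p carries endogenous information -log2(p), an assisted search with success probability q carries exogenous information -log2(q), and the active information injected is the difference, log2(q/p). The space size and oracle-assisted success rate below are purely illustrative numbers, not figures from the text:

```python
import math

def active_information(p_blind: float, q_assisted: float) -> float:
    """Active information in bits: endogenous minus exogenous information,
    i.e. -log2(p) - (-log2(q)) = log2(q / p)."""
    return math.log2(q_assisted / p_blind)

# Illustrative numbers: a single target configuration in a space of 2**20,
# versus a warmer/colder oracle that raises the success rate to 1 in 32.
p = 1 / 2**20   # blind uniform random sampling
q = 1 / 2**5    # assisted search
print(active_information(p, q))  # 15.0 bits injected by the guidance
```

On this accounting, the better the assisted search performs relative to blind sampling, the more guiding information must have been built into it.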

Q2: But, once we have a fitness function, all that is needed is to start anywhere and then proceed up the slope of the hill to a peak, no need to consider all of those outlying possibilities all over the place. So, you are making a mountain out of a mole-hill: why all the fuss and feathers over “active information,” “oracles” and “guided, constrained search”?

A2: Fitness functions, of course, are a means of guided search, providing an oracle that points — generally — uphill. In addition, they are exactly an example of constrained search: there is function present everywhere in the zone of interest, and it follows a generally well-behaved, uphill-pointing pattern. In short, from the start you are constraining the search to an island of function, T, in which neighbouring or nearby locations Ei, Ej, Ek, etc. — which can be chosen by tossing out a ring of “nearby” random tries — are apt to go uphill, or to get you to another local slope pointing uphill. Also, if you are on the shoreline of function, tosses that have no function will eliminate themselves by being obviously downhill; which means it is going to be hard to island-hop from one fairly isolated zone of function to the next. In short, a theory that may explain micro-evolutionary change within an island or cluster of nearby islands is not simply to be extrapolated to one that needs to account for major differences that have to bridge large gaps in configuration and function. Nor is this materially different if the islands of function and their slopes and peaks grow or shrink a bit across time, or even move bodily, like glorified sand-pile barrier islands are wont to do, so long as such drifting of islands of function is gradual. Catastrophic disappearance of such islands, of course, would reflect something like a mass extinction event due to an asteroid impact or the like. Mass extinctions simply do not create new functional body plans; they sweep away the life forms exhibiting existing body plans, wiping the table almost wholly clean, if we are to believe the reports. Where also, the observable islands-of-function effect starts at the level of the many isolated protein families, estimated to be as rare as 1 in 10^64 to 1 in 10^77 or so of the space of amino acid sequences.
As ID researcher Douglas Axe noted in a 2004 technical paper: “one in 10^64 signature-consistent sequences forms a working domain . . . the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences.” So, what has to be reckoned with is that, in general, for a situation sufficiently complex to be relevant to FSCO/I [500 – 1,000 or more structured yes/no questions to specify configurations, En . . . ], the configuration space of possibilities, W, is as a rule dominated by seas of non-functional gibberish configurations, so that the envisioned easy climb up Mt Improbable is dominated by the prior problem of finding a shoreline of Island Improbable.
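The bit thresholds and probability figures in play here are interconvertible: a probability of 1 in 10^n corresponds to n × log2(10) bits. A quick arithmetic sketch (the printed values are approximate conversions, nothing more):

```python
import math

def odds_exponent_to_bits(n: int) -> float:
    """Bits of information corresponding to a probability of 1 in 10**n."""
    return n * math.log2(10)

print(odds_exponent_to_bits(150))  # ~498.3 bits: the 1 in 10^150 bound
print(odds_exponent_to_bits(64))   # ~212.6 bits: 1 in 10^64
print(odds_exponent_to_bits(77))   # ~255.8 bits: 1 in 10^77
```

This is why the 1 in 10^150 universal probability bound and the roughly 500-bit threshold are treated as two expressions of the same scale.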

Q3: Nonsense! The Tree of Life diagram we all saw in our Biology classes proves that there is a smooth path from the last universal common ancestor [LUCA] to the different body plans and forms, from microbes to Mozart. Where did you get such nonsense from?

A3: Indeed, the tree of life was the only diagram in Darwin’s Origin of Species. However, it should be noted that it was a speculative diagram, not one based on a well-documented, observed pattern of gradual, incremental improvements. He hoped that in future decades investigations of fossils across the world would flesh it out, and that is indeed the impression given in too many Biology textbooks and popular headlines about found “missing links.” But, in fact, the typical tree of life imagery:

Fig. G.11c (anticipated): A typical, popular-level tree of life model/illustration. (Source.)

. . . is too often presented in a misleading way. First, notice the skipping over of the basic problem that without a root, neither trunks nor branches and twigs are possible. And getting to a first, self-replicating unicellular life form — the first universal common ancestor, FUCA — that uses proteins, DNA, etc., through the undirected physics and chemistry of Darwin’s warm little electrified pond full of a prebiotic soup or the like, continues to be a major and unsolved problem for evolutionary materialist theorising. Similarly, once we reckon with claims about “convergent evolution” of eyes, flight, whale/bat echolocation “sonar” systems, etc., we begin to see that “everything branches, save when it doesn’t.” Indeed, we have to reckon with the case where, on examining the genome of a kangaroo (the tammar wallaby), it was discovered that “In fact there are great chunks of the [human] genome sitting right there in the kangaroo genome.” The kangaroos are marsupials, not placental mammals, and the fork between the two is held to be 150 million years old. So Carl Wieland of Creation Ministries International was fully within his rights to say: “unlike chimps, kangaroos are not supposed to be our ‘close relatives’ . . . . Evolutionists have long proclaimed that apes and people share a high percentage of DNA. Hence their surprise at these findings that ‘Skippy’ has a genetic makeup similar to ours.” Next, as soon as one looks at molecular similarities — technically, homologies (and yes, this is an argument from similarity, i.e. analogy, in the end) — instead of those of gross anatomy, we run into many mutually conflicting “trees.” Being allegedly 95 – 98+% chimp in genetics is one thing; being what, ~80% kangaroo or ~50% banana or the like, is quite another. That is, we need to look seriously at the obvious alternative from the world of software design: code reuse and adaptation from a software library for the genome.
Worse, in fact the consistent record from the field (which is now “almost unmanageably rich,” with over 250,000 fossil species, millions of specimens in museums and billions more in the known fossil beds) is that we do NOT observe any dominant pattern of origin of body plans by smooth, incremental variation across successive fossils. Instead, as Stephen Jay Gould famously observed, there are systematic gaps, right from the major categories on down. Indeed, if one looks carefully at the tree illustration above, one will see where the example life forms are: on twigs at the ends of branches, not on the trunk or where the main branches start. No prizes for guessing why. That is why we should carefully note the following remark by W. Ford Doolittle and Eric Bapteste:

Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation . . . [Abstract, “Pattern pluralism and the Tree of Life hypothesis,” PNAS, February 13, 2007, vol. 104 no. 7, 2043-2049.]

Q4: But, the evidence shows that natural selection is a capable designer and can create specified complexity. Isn’t that what Wicken said to begin with in 1979 when he said that “Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order’ . . .”?

A4: We need to be clear about what natural selection is and does. First, you need a reproducing population, which has inheritable chance variations [ICV], and some sort of pressure on it from the environment, leading to gradual changes in the population because of differences in reproductive success [DRS] . . . i.e. natural selection [NS] . . . among varieties, achieving descent with modification [DWM]. Thus, different varieties will have different degrees of success in reproduction: ICV + DRS/NS –> DWM. However, there is a subtlety: while there is a tendency to summarise this process as “natural selection,” this is not accurate. For the NS component does not actually ADD anything; it is a shorthand way of saying that less “favoured” varieties (Darwin spoke in terms of “races”) die off, leaving no descendants. “Selection” is not the real candidate designer. What is being appealed to is that chance variations create new varieties. So this is the actual supposed source of innovation, the real candidate designer, not the dying-off part. That puts us right back at the problem of finding the shoreline of Island Improbable by crossing a “sea of non-functional configurations” in which, as there is no function, there is no basis for choice. So, we cannot simply extrapolate a theory that may relate to incremental changes within an island of function to the wider situation of the origin of functions. Macroevolution is not simply accumulated microevolution, not in a world of complex, configuration-specific function. (NB: The suggested “edge” of evolution by such mechanisms is often held to be about the level of a taxonomic family, like the cats or the dogs and wolves.)

Q5: The notion of “islands of function” is Creationist nonsense, and so is that of “active information.” Why are you trying to inject religion and “God of the gaps” into science?

A5: Unfortunately, this is not a caricature: there is an unfortunate tendency of Darwinist objectors to design theory to appeal to prejudice against theistic worldviews, and to suggest questionable motives, in ways that cloud issues and poison or polarise discussion. But I am sure that if I were to point out that such Darwinists often have their own anti-theistic ideological agendas, and have sought to question-beggingly redefine science as in effect applied atheism or the like, that would often be regarded as out of place. Let us instead stick to the actual merits. Such as: since intelligent designers are an observed fact of life, to explain that design is a credible or best causal explanation in light of tested, reliable signs that are characteristic of design, such as FSCO/I, is not an appeal to gaps. Similarly, to point to ART-ificial causes that leave characteristic traces, by contrast with those of chance and/or mechanical necessity, is not to appeal to “the supernatural,” but to the action of intelligence, on signs that are tested and found to reliably point to it. Nor is design theory to be equated to Creationism, which can be seen as an attempt to interpret origins evidence in light of what are viewed as accurate records from the Creator. The design inference works back from inductive study of signs of chance, necessity and art to cases where we did not observe the deep past, but see traces closely similar to those for which the only adequate, observed cause is design. So also, once we see that complex function dependent on many parts that have to be properly arranged and coupled together sharply constrains the set of functional as opposed to non-functional configurations, the image of “islands of function” is not an unreasonable way to describe the challenge. Where also, we can summarise a specification as a structured list of YES/NO questions that give us a sufficient description of the working configuration.
This in turn gives us a way to understand Kolmogorov-Chaitin complexity, or the descriptive complexity of a bit-string x, in simple terms: “the length of the shortest program that computes x and halts.” This can be turned into a description of zones of interest T that are specified in large spaces of possible configurations, W. If there is a “simple” and relatively short description, D, that allows us to specify T without in effect needing to list and state the configurations E1, E2, . . . En that are in T, then T is specific. Where also, if T is such that D describes a configuration-dependent function, T is functionally specific; e.g. the strings of ASCII characters in this page form English sentences, and they address the theme of origins science in light of intelligent design issues. In the — huge! — space of possible ASCII strings of length comparable to this page (or even this paragraph), such clusters of sentences are a vanishingly minute fraction relative to the bulk that will be gibberish. So also, in a world where we often use maps or follow warmer/colder cues to find targets, and where, if we were to blindly select a search procedure and match it at random to a space of possibilities, we would be at least as likely to worsen as to improve the odds of success relative to a simple, blind, at-random search of the original space, active information that gives us an enhanced chance of success in getting to an island of function is indeed a viable concept.>>
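Kolmogorov-Chaitin complexity itself is uncomputable, but a general-purpose compressor gives a workable upper bound on description length, which is enough to illustrate the contrast drawn above between simply describable English text and typical gibberish. The strings below are made up for the demonstration; only the qualitative comparison matters:

```python
import random
import zlib

def description_length(s: bytes) -> int:
    """Crude upper bound on descriptive complexity: the size of s
    after compression with a general-purpose compressor."""
    return len(zlib.compress(s, 9))

# A simply describable (hence highly compressible) English-like string...
english = b"the quick brown fox jumps over the lazy dog " * 25
# ...versus typical gibberish of the same length (seeded for repeatability).
rng = random.Random(0)
gibberish = bytes(rng.randrange(256) for _ in range(len(english)))

print(description_length(english) < description_length(gibberish))  # True
```

The compressible string has a short description D; the random bytes effectively cannot be specified more briefly than by listing them outright.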

__________

So, it seems that, in the defined senses, conservation of information, search, active information, and Kolmogorov complexity (speaking to narrow zones of specific function T in wide configuration spaces W), along with the viability of these concepts in the face of drift, etc., are coherent, relevant to the scientific phenomena under study, and important. Where, the pivotal challenge is that for complex, functionally specific organisation and associated or implied information, there is but one empirically — and routinely — known source: intelligence. Let us see if further discussion of same will now proceed on reasonable terms. END

PS: Since we are going to pause and mark up the JoeF article that JoeG references in comment no. 1, let me give a free plug to the ARN tee shirt (and calendar and prints), highlighting the artwork, under the doctrine of fair use (as it has become material to an exchange):

The ad blurb in part reads:

A recent book attacking intelligent design (Intelligent Thought: Science vs. the Intelligent Design Movement, ed. John Brockman, Vintage Press, May 2006) has chapters by most of the big names in evolutionary thought: Daniel Dennett, Richard Dawkins, Jerry Coyne, Steven Pinker, Lee Smolin, Stuart A. Kauffman and others. In the introduction Brockman summarizes the situation from his perspective: materialistic Darwinism is the only scientific approach to origins, and the “bizarre” claims of “fundamentalists” with “beliefs consistent with those of the Middle Ages” must be opposed. “The Visigoths are at the gates” of science, chanting that schools must teach the controversy, “when in actuality there is no debate, no controversy.”

While Brockman intended the “Visigoths” reference as an insult, equating those who do not embrace materialistic Darwinism with uneducated barbarians, he has actually created an interesting analogy of the situation, and perhaps a prophetic look at the future. For it was the Visigoths of the 3rd and 4th centuries who were waiting at the gates of the Roman Empire when it collapsed under its own weight. For years the Darwinists in power have pretended that all is well in the land of random mutation and natural selection and that intelligent design should be ignored. With this book (and several others like it), they are attempting both to laugh at and to fight back against the ID movement. Mahatma Gandhi summarized the situation well in his quote about the passive resistance movement: “First they ignore you, then they laugh at you, then they fight you, then you win.”

Worth thinking about.


94 Responses

  1. Kairosfocus,

    Please read Joe Felsenstein’s trope on CSI and NS:

    Has Natural Selection Been Refuted? The Arguments of William Dembski

    It is obvious that Joe F doesn’t even understand the concept of CSI and he thinks that natural selection chooses.
    Here read this stupidity:

    If we have a population of DNA sequences, we can imagine a case with four alleles of equal frequency. At a particular position in the DNA, one allele has A, one has C, one has G, and one has T. There is complete uncertainty about the sequence at this position. Now suppose that C has 10% higher fitness than A, G, or T (which have equal fitnesses). The usual equations of population genetics will predict the rise of the frequency of the C allele. After 84 generations, 99.9001% of the copies of the gene will have the C allele.

    This is an increase of information: the fourfold uncertainty about the allele has been replaced by near-certainty. It is also specified information — the population has more and more individuals of high fitness, so that the distribution of alleles in the population moves further and further into the upper tail of the original distribution of fitnesses.
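As an aside on the arithmetic: under the standard one-locus selection recursion with constant relative fitnesses, the odds in favor of C multiply by 1.1 each generation, and the quoted figures do follow. A minimal sketch (haploid selection assumed, which is what reproduces Felsenstein's numbers):

```python
# Four alleles start at equal frequency; C has a 10% fitness advantage.
p = 0.25                  # frequency of the C allele
w_c, w_other = 1.1, 1.0   # relative fitnesses

for _ in range(84):
    mean_fitness = p * w_c + (1 - p) * w_other
    p = p * w_c / mean_fitness   # standard selection recursion

print(f"{p:.4%}")   # ~99.9001%, the figure quoted above
```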

    That is a major WTF? It boggles the mind that someone so clueless about a concept would actually try to refute it. And the scary part is that some people think that he did a good job.

    The point being: if you want to write a post, you should just rip JoeF’s article to shreds.

  2. Joe:

    I think that there is a place for setting out that which is being caricatured, so that those who are serious may see the difference for themselves.

    JoeF is of course doing exactly that conflation of micro-evo with macro-evo that is so characteristic. And, in so doing, he is dodging the issue of the big difference between hill climbing within an island of function and FINDING such an island in a vast sea of gibberish.

    That last is what active info, conservation of info, Kolmogorov complexity, etc are all about.

    And it is why there has been such an attempt to deride and dismiss the very simple and relevant metaphor, islands in a vast sea, that have to be found through blind chance + necessity search without uphill-pointing clues.
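The blind-search-versus-clues point can be made concrete with a toy model: for a single 20-bit target, blind sampling succeeds with probability 2^-20 per query, while a searcher given warmer/colder (Hamming-distance) feedback always succeeds within 20 bit flips. A minimal sketch (the target size and the log-ratio bookkeeping are illustrative choices, in the spirit of Marks and Dembski's active-information measure):

```python
import math
import random

n = 20
target = [random.randrange(2) for _ in range(n)]
guess = [random.randrange(2) for _ in range(n)]

def hamming(a, b):
    """Number of positions where the two bit-lists differ."""
    return sum(x != y for x, y in zip(a, b))

# Guided search: keep any single-bit flip that makes us "warmer".
queries = 0
for i in range(n):
    trial = guess[:]
    trial[i] ^= 1
    queries += 1
    if hamming(trial, target) < hamming(guess, target):
        guess = trial

# Every mismatched bit gets corrected, so the guided search always succeeds.
assert guess == target

# Blind search hits the target with probability 2^-n per query; the feedback
# is therefore worth roughly log2(1 / 2^-n) = n bits of "active information".
active_info_bits = math.log2(2 ** n)
print(queries, active_info_bits)
```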

    The first island is OOL, and the others beyond have to do with body plans: root and shoot, trunk and branches then twigs and leaves. No roots and no shoot, nothing beyond.

    And so also, it is necessary to highlight how the tree of life metaphor — yes that is also a metaphor — has been presented in a fundamentally misleading way, that suppresses its key discontinuities in terms of actual evidence.

    The actual evidence says, ISLANDS of function; the TOL metaphor says, continuity across a vast continent of function, filling in gaps with imaginary ancestors that, after 150 years, 1/4 million fossil species, millions of samples in museums, and billions of specimens in known beds, have never been found. So there is a major factual adequacy challenge here.

    What we have fits with design, perhaps frontloading or even use of viruses as vectors to introduce innovations. And so forth. Or something else.

    But the bottom line is we have to deal with the islands, starting from the biggie: OOL.

    And, I suspect that is a lot of why, after 6 months, the Darwinism free-kick-at-goal essay challenge is still unmet.

    KF

  3. PS: Why not take some time and let’s pitch in and do a markup on JoeF’s essay and other comments on the topic? (I think I have provided a background above that takes caricatures of what Dembski has been saying off the table.)

  4. F/N: JoeF, title and 1st para:

    >>Has Natural Selection Been Refuted? The Arguments of William Dembski>>

    1 –> Design theory does not set out to refute natural selection, but as a side effect of its investigations, provides limits for its effectiveness. The “edge of evolution” in a nutshell.

    >>”Intelligent design” (ID) is the assertion that there is evidence that major features of life have been brought about, not by natural selection, but by the action of a designer.>>

    2 –> Strawman, subject changing, red herring chase caricature, note the key highlighted word. Something so simple and easily accessible as http://www.intelligentdesign.org/ will define:

    What is intelligent design?
    Intelligent design refers to a scientific research program as well as a community of scientists, philosophers and other scholars who seek evidence of design in nature. The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection. Through the study and analysis of a system’s components, a design theorist is able to determine whether various natural structures are the product of chance, natural law, intelligent design, or some combination thereof. Such research is conducted by observing the types of information produced when intelligent agents act. Scientists then seek to find objects which have those same types of informational properties which we commonly know come from intelligence. Intelligent design has applied these scientific methods to detect design in irreducibly complex biological structures, the complex and specified information content in DNA, the life-sustaining physical architecture of the universe, and the geologically rapid origin of biological diversity in the fossil record during the Cambrian explosion approximately 530 million years ago.

    See New World Encyclopedia entry on intelligent design.

    3 –> Following up that onward link shows, right at the top:

    Intelligent design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection” [1] Intelligent design cannot be inferred from complexity alone, since complex patterns often happen by chance. ID focuses on just those sorts of complex patterns that in human experience are produced by a mind that conceives and executes a plan. According to adherents, intelligent design can be detected in the natural laws and structure of the cosmos; it also can be detected in at least some features of living things.

    Greater clarity on the topic may be gained from a discussion of what ID is not considered to be by its leading theorists. Intelligent design generally is not defined the same as creationism, with proponents maintaining that ID relies on scientific evidence rather than on Scripture or religious doctrines. ID makes no claims about biblical chronology, and technically a person does not have to believe in God to infer intelligent design in nature. As a theory, ID also does not specify the identity or nature of the designer, so it is not the same as natural theology, which reasons from nature to the existence and attributes of God. ID does not claim that all species of living things were created in their present forms, and it does not claim to provide a complete account of the history of the universe or of living things.

    ID also is not considered by its theorists to be an “argument from ignorance”; that is, intelligent design is not to be inferred simply on the basis that the cause of something is unknown (any more than a person accused of willful intent can be convicted without evidence). According to various adherents, ID does not claim that design must be optimal; something may be intelligently designed even if it is flawed (as are many objects made by humans).

    ID may be considered to consist only of the minimal assertion that it is possible to infer from empirical evidence that some features of the natural world are best explained by an intelligent agent. It conflicts with views claiming that there is no real design in the cosmos (e.g., materialistic philosophy) or in living things (e.g., Darwinian evolution) or that design, though real, is undetectable (e.g., some forms of theistic evolution) . . .

    4 –> In short, this is a willful, continuing misrepresentation on JoeF’s part. He has signally failed in duties of care to accuracy, truth, and fairness in serious discussion.

    >> This involves negative arguments that natural selection could not possibly bring about those features. And the proponents of ID also claim positive arguments. >>

    5 –> Whenever we have a conflict between alternative models in science, we have “negative arguments” on the limitations of the theories on the other side.

    6 –> Since nothing in science gets to be a theory without SOME empirical support, there will always be cases explained by a given theory, the real issue is where the limits lie. Just as with Newtonian Dynamics, so also with Natural Selection and its associated factors.

    7 –> So, to pretend or suggest (the subtext is plain) that because ID makes some negative arguments it is somehow suspect is either disingenuous or seriously lacking in a basis for discussing a serious matter.

    8 –> In addition, design theory does in fact make serious positive arguments [NB: JoeF acknowledges this, then makes a tee shirt his exhibit no 1 . . . I kid you not], as were summarised here a few days back, from Meyer. So, it is a below-the-belt rhetorical move to snidely suggest: “proponents of ID also claim positive arguments.”

    9 –> No, ID proponents MAKE positive arguments [beyond what can fit on a tee shirt . . . ], which you happen to disagree with. JoeF, kindly have the common decency to acknowledge that the people on the other side actually make detailed arguments with specific reasons and evidence argued on inference to best explanation aka abduction.

    10 –> Excerpting Meyer’s argument in a nutshell, here in reply to a dismissive review of his Signature in the Cell, by Falk:

    The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .

    11 –> JoeF, SIC has been published for three years or so now. If you have a reply on the substance, regarding the FSCO/I in the origin of cell based life, kindly give it to us: _____________ , and the Nobel Prize for this astounding discovery was awarded to : _________ , for the year _____ , with the acceptance speech as follows: _______ .

    12 –> Or at least, show us how blind chance and mechanical necessity are routinely able to produce FSCO/I — i.e. without active information being injected by a designer — within the reasonable resources of a planet, solar system, or the observed cosmos: ________ . (Cf. my discussion of this here; Abel’s plausibility metric paper and Dembski’s UPB note are linked above in the OP.)

    _________

    If you cannot answer Meyer on the merits, cogently and promptly, then we will have a perfect right to draw the conclusion that you have been indulging in ideological posturing while wearing the holy lab coat.

    And, with all due respect, your headline and opening paragraph have not been a promising start.

    Later,

    KF

  5. For starters we need to make sure that Joe F understands the following:

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. – Wm. Dembski, page 148 of NFL

    He doesn’t deal with function: increased functionality or the production of new functional protein machinery. Joe seems to think that changing the frequency of an allele is enough to account for CSI.

    Earth to JF- the INDIVIDUAL needs to gain that information. The “ORGANISM is a FUNCTIONING SYSTEM comprising many functional subsystems.”

    I can hear it now:

    “They misunderstand evolution as populations evolve, not individuals.”

    Natural selection, ie evolution, is all about individuals. You cannot have an evolving population without individuals that can imperfectly reproduce, some outreproducing the others due to heritable chance variation(s).

    Individuals pass on their biological information to other individuals. And if an individual never develops an IC system, no population ever will.

    Natural selection is not a way to produce CSI. Having more offspring does not = producing CSI and IC. To even try to make believe that it does proves that you just don’t have a clue.

  6. F/N 2: A DVD drive headache gives me a moment to continue the markup:

    >> Critics of ID commonly argue that it is not science.>>

    1 –> Only by begging the question: imposing a logically, epistemologically, and historically unwarranted, unworkable a priori redefinition that boils down to “science is applied materialist philosophy.” Cf. critique here on, which gives details without hurling an elephant.

    >> For its positive predictions of the behavior of a designer they have a good point. But not for its negative criticisms of the effectiveness of natural selection, which are scientific arguments that must be taken seriously and evaluated. >>

    2 –> Only an allusion is presented, in a context that then tries to present a tee shirt/editorial cartoon as substantially representing the design theory case. Cf just below.

    3 –> As was already shown, when two scientific theoretical claims conflict, one will need to show the limitations of the other. And in the case of NS, it is not the source of innovations in bio-function and associated information, nor has it been observed to be able to account for the origin of body plans. It does not even address the origin of cell based life.

    4 –> To point that out, in extensive technical arguments, as has been done for many years is to take NS seriously, so the pretence that a cartoon is an adequate summary of the case for design is a caricature of the worst sort.

    5 –> And literally, that is exhibit no. 1 used by JoeF:

    >>Look at Figure 1, which shows a cartoon design from T-shirts sold by an ID website, Access Research Network, which also sells ID paraphernalia (I am grateful to them for kind permission to reproduce it).

    (click here for image)>>

    6 –> This is as classic an example of a strawman argument as I have ever seen.

    >>Figure 1. A summary of the major arguments of “intelligent design”, as they appear to its advocates, from Access Research Network’s website http://www.arn.org. Merchandise with the cartoon is available from http://www.cafepress.com/accessresearch. Copyright Chuck Assay, 2006; all rights reserved. Reprinted by permission.>>

    7 –> As the PS to the OP will show, this is not a scientific presentation, or a summary of it, but a retort to a declaration in the anti-ID literature, that the Visigoths (the ignorant barbarians bent on destruction and rapine) are coming.

    8 –> The design theorists took it up and laid out an OUTLINE at label level of the challenge to the Darwinian establishment, and the only thing that can be properly gleaned is that the establishment feels threatened and is challenged across a wide range of topics. Substance is not addressed in any detail in a cartoon.

    >>As the bulwark of Darwinism defending the hapless establishment is overcome, note the main lines of attack. In addition to recycled creationist themes such as the Cambrian Explosion and cosmological arguments about the fine-tuning of the universe, the ladder is Michael Behe’s argument about molecular machines (Behe 1996).>>

    9 –> “Recycled CREATIONIST themes” tries to make an invidious association, and to duck the responsibility of accounting, per observations and adequate empirical evidence, for the origin of body plans by inheritable chance variation and differential reproductive success across varieties. From Darwin’s admissions on the subject to this day, that has remained unanswered. So to label and dismiss by invidious association — we know the subtext of insinuations about right wing theocratic religious agendas with racks and thumbscrews hiding up sleeves etc — is irresponsible.

    10 –> The shift in terminology from COSMOLOGICAL FINE TUNING (a scientific discussion since Hoyle et al, where Hoyle was a lifelong agnostic) to “cosmological arguments” is also loaded as this directly implies that the arguments in question are those of natural theology. There is a serious scientific cosmological fine tuning issue to be addressed on the scientific merits, not dodged by making snide insinuations that this is natural theology in disguise.

    11 –> the issue about the observed origin of irreducible complexity, similarly, is not to be dismissed by saying that’s Behe’s argument. Have you had an empirically warranted answer to Menuge’s C1 – 5 criteria for exaptation . . . the usual attempted counter? If not, then the issue of irreducible complexity is very definitely still on the table. The criteria:

    For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

    C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

    C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

    C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

    C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

    C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

    ( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

    >>The other main attack, the battering ram, is the “information content of DNA” which is destroying the barrier of “random mutation”.>>

    12 –> And your evidence that per observation, FSCO/I is reasonably a product of blind chance and mechanical necessity is? ________________

    13 –> Absent such, the evidence stands, that the only known causally adequate source of FSCO/I is design. So, we have every epistemic right to infer that FSCO/I is an empirically reliable sign of design as cause.

    >>The “irreducible complexity of molecular machines” arguments of Michael Behe have received most of the publicity; William Dembski’s more theoretical arguments involving information theory have been harder for people to understand.>>

    14 –> Not so as I have noticed. Both have been discussed.

    >>There have been a number of extensive critiques of Dembski’s arguments published or posted on the web (Wilkins and Elsberry 2001; Godfrey-Smith 2001; Rosenhouse 2002; Schneider 2001, 2002; Shallit 2002; Tellgren 2002; Wein 2002; Elsberry and Shallit 2003; Edis 2004; Shallit and Elsberry 2004; Perakh 2004a, 2004b; Tellgren 2005; Häggström 2007). They have pointed out many problems. These range from the most serious to nit-picking quibbles . . .>>

    15 –> Hurling the elephant. That the ideologically committed have tried rebuttals is no news. What is not being pointed out is how such rebuttals have met the criterion of warrant that is decisive: show FSCO/I, as present in life forms and other relevant contexts, produced by blind chance and mechanical necessity.

    >>Digital codes

    Stephen Meyer, who heads the Discovery Institute’s program on ID, describes Dembski’s work in this way:

    We know that information — whether, say, in hieroglyphics or radio signals — always arises from an intelligent source. …. So the discovery of digital information in DNA provides strong grounds for inferring that intelligence played a causal role in its origin. (Meyer 2006)

    What is this mysterious “digital information”?>>

    16 –> Joe F pretends here that there is no genetic code dependent on the discrete state of elements in the DNA strings for its meaning.

    >> Has a message from a Designer been discovered? When DNA sequences are read, can they be converted into English sentences such as: “Copyright 4004 bce by the intelligent designer; all rights reserved”? Or can they be converted into numbers, with one stretch of DNA turning out to contain the first 10 000 digits of π? Of course not.>>

    17 –> Red herring and strawman caricatures. DNA has been known to carry digitally coded, functionally specific complex information since 1953 – 1957. That is what needs to be accounted for; that the discussion is being diverted from it speaks volumes.

    >> If anything like this had happened, it would have been big news indeed. You would have heard by now.>>

    18 –> Strawman.

    >> No, the mysterious digital information turns out to be nothing more than the usual genetic information that codes for the features of life, information that makes the organism well-adapted. The “digital information” is just the presence of sequences that code for RNA and proteins — sequences that lead to high fitness. >>

    19 –> So, JoeF actually knows what is to be addressed, but by suitably setting up a strawman, he can pretend that the issue need not be addressed on the merits: it is familiar, so we don’t need to account for it. FAIL.

    >> Now we already knew that they were there. Most biologists would be surprised to hear that their presence is, in itself, a strong argument for ID — biologists would regard them as the outcome of natural selection. To see them as evidence of ID, one would need an argument that showed that they could only have arisen by purposeful action (ID), and not by selection. Dembski’s argument claims to establish this. >>

    20 –> What is the known, observed source of complex functional digital codes backed up by organised implementing machinery, again?

    21 –> Has there been a surprise discovery and observation of such systems spontaneously appearing in simulations of warm little ponds or the like, so that we can show, per observational warrant, that FSCO/I, and particularly digital codes and implementing machinery, can be and has been produced by blind chance and mechanical necessity? The absence of a Nobel Prize for that says a loud NO.

    22 –> Similarly, has there been a demonstration, per empirical observation, to warrant the claim that the origin of novel body plans, involving ~ 10 – 100+ Mn bits of additional FSCO/I dozens of times over, has been accounted for? Again, NO. (And the context for this present exchange is that Meyer is about to release further documentation on the point.)

    23 –> In short, JoeF is trying to sit on the collective authority of Biologists without the necessary back up of warrant on the empirically grounded merits. This is a bluff, not a serious argument.

    ____________

    So far Joe F’s essay is — as shown in outline — long on rhetorical stunts, short on warrant.

    Not good enough, not by a long shot.

    KF

  7. OT: kf, you might be interested in this new book that just came out in March:

    Persecuted: The Global Assault on Christians
    http://www.amazon.com/Persecut.....1400204410

    Denyse O’Leary did a limited review of it here:

    Knowing our world: The three major reasons for persecution of Christians worldwide – Denyse O’Leary
    Excerpt: The world-wide picture is sobering. Pew Research Center, Newsweek, and The Economist all agree that Christians are the world’s most widely persecuted group.
    Marshall and team offer information about three quite different reasons for persecution by different types of regimes (pp. 9–11):
    First, there is post-Communist persecution, following the collapse of Communism in the late 1980s, where the regimes
    ” … have since retreated to an onerous policy of registration, supervision, and control. Those who will not be controlled are sent to prison or labor camps, or simply held, abused, and sometimes tortured.”
    The most intense persecutor is the still Communist (not post-Communist) regime, North Korea (pp. 9–10). There, “Christians are executed or sent to prison camps for lengthy terms for such crimes as the mere possession of a Bible.”
    Second, in some countries, “Hindu or Buddhist religious movements equate their religion with the nature and meaning of their country itself.” They persecute minority tribes as well as religions (pp. 10–11). These countries include Sri Lanka, Nepal, and Bhutan.
    Third, of course the Muslim world where
    “Even though the remaining Communist countries persecute the most Christians, it is in the Muslim world where persecution of Christians is now most widespread, intense, and, ominously, increasing. Extremist Muslims are expanding their presence and sometimes exporting their repression of all other faiths. … Even ancient churches, such as the two-thousand-year-old Chaldean and Assyrian churches of Iraq and the Coptic churches of Egypt, are under intense threat at this time. (p. 11).”
    http://www.thebestschools.org/.....worldwide/

    Throw on top of all that the open hostility towards Christians in Academia by atheists,,,

    Majority of American University Professors have Negative View of Evangelical Christians – 2007
    Excerpt: According to a two-year study released today by the Institute for Jewish & Community Research (IJCR), 53% of non-Evangelical university faculty say they hold cool or unfavorable views of Evangelical Christians – the only major religious denomination to be viewed negatively by a majority of faculty.
    Only 30% of faculty hold positive views of Evangelicals, 56% of faculty in social sciences and humanities departments hold unfavorable views. Results were based on a nationally representative online survey of 1,269 faculty members at over 700 four-year colleges and universities. Margin of error is +/- 3%. ,,,
    Only 20% of those faculty who say religion is very important to them and only 16% of Republicans have unfavorable views of Evangelicals; the percentages rise considerably for faculty who say religion is not important to them (75%) and among Democrats (65%).,,,
    “This survey shows a disturbing level of prejudice or intolerance among U.S. faculty towards tens of millions of Evangelical Christians,,,
    One-third of all faculty also hold unfavorable views of Mormons, and among social sciences and humanities faculty, the figure went up to 38%. Faculty views towards other religious groups are more positive: Only 3% of faculty hold cool/unfavorable feelings towards Jews and only 4% towards Buddhists. Only 13% hold cool/unfavorable views of Catholics and only 9% towards non-Evangelical Christians. Only 18% hold cool/unfavorable views towards atheists.
    A significant majority – 71% of all faculty – agreed with the statement: “This country would be better off if Christian fundamentalists kept their religious beliefs out of politics.” By comparison, only 38% of faculty disagreed that the country would be better off if Muslims became more politically organized.
    http://www.lifesitenews.com/ne.....y/07050808

    Slaughter of Dissidents – Book
    “If folks liked Ben Stein’s movie “Expelled: No Intelligence Allowed,” they will be blown away by “Slaughter of the Dissidents.” – Russ Miller
    http://www.amazon.com/Slaughte.....0981873405

    Slaughter of the Dissidents – Dr. Jerry Bergman – video lecture
    http://www.youtube.com/watch?v=x_ygt_mqzO8

    And please note that this persecution of Christians is widespread in the world in spite of the fact that it can be forcefully argued that Christianity has had, by far, the most positive impact on the world than any other group has:

    From Josh McDowell, Evidence for Christianity, in giving examples of the influence of Jesus Christ cites many examples. Here are just a few:
    1. Hospitals
    2. Universities
    3. Literacy and education for the masses
    4. Representative government
    5. Separation of political powers
    6. Civil liberties
    7. Abolition of slavery
    8. Modern science
    9. The elevation of the common man
    10. High regard for human life

    The History of Christian Education in America
    Excerpt: The first colleges in America were founded by Christians and approximately 106 out of the first 108 colleges were Christian colleges. In fact, Harvard University, which is considered today as one of the leading universities in America and the world was founded by Christians. One of the original precepts of the then Harvard College stated that students should be instructed in knowing God and that Christ is the only foundation of all “sound knowledge and learning.”
    http://www.ehow.com/about_6544.....erica.html

    Christianity Gave Birth To Science – Dr. Henry Fritz Schaefer – video
    http://vimeo.com/16523153

    Supplemental note:

    The Soviet Story – documentary video
    http://www.documentarytube.com/the-soviet-story

    Music and Verse:

    Natalie Grant – Held
    http://www.youtube.com/watch?v=8GDUBd2eWFw

    John 15:18
    “If the world hates you, keep in mind that it hated me first.

  8. In the following article Dr. Paul Nelson tells of the extreme poverty of evidence for the Darwinist claim that mutation and selection can generate the diversity of life we see around us:

    When Nature Resists: Explaining the Origin of the Animal Phyla – Paul Nelson – April 5, 2013
    Excerpt: ,,,lately, I’ve run across something related to ontogenetic depth that is, well, mind-blowing.
    Since 1859, the origin of not a single bilaterian phylum (animal body plan) has been explained in a step-by-step (neo-Darwinian) fashion, where random mutation and natural selection were, as textbooks assert, the primary causal mechanisms. Take your pick of the phyla: Mollusca, Brachiopoda, Chordata, Arthropoda, you name it — and go looking in the scientific literature for the incremental pathway, via mutation and selection, showing how that body plan was assembled from its putative bilaterian Last Common Ancestor.
    You’ll be looking a long time.,,,
    http://www.evolutionnews.org/2.....70871.html

  9. F/N: Let me continue the markup to the point where JoeF addresses CSI. This should be enough to show where the critique of design theory goes off the rails irrecoverably. And since, by his own admission, his critique is in effect a summary of those made by others he has listed, the failure extends across the board; it is not just a problem for JoeF.

    Now, JF introduces the concept of specified complexity reasonably well, though I am concerned about a strawmannish claim he makes:

    Specified complexity does one thing — when it is observed, we can be sure that purely random processes such as mutation are highly unlikely to have produced that pattern, even once in the age of the universe . . .

    The problem here is, context.

    By failing to adequately address specified complexity in the context of an empirically grounded per aspect explanatory filter approach, JF fails to recognise that the very first default addressed is mechanical necessity.

    That is, we have first of all ruled out natural regularities as the credible cause of the aspect of the phenomenon in question, as we are dealing with high contingency. There are two and only two known sources of highly contingent outcomes, chance and choice. The filter then distinguishes the two on a criterion rooted in sampling theory, that sufficiently isolated narrow and unrepresentative target zones are utterly unlikely to be hit on by sampling based on blind chance.

    By omitting that context, JF sets up a strawman target.

    To see why I say this, observe from the original post, my note on the source of variation and the actual role of differential reproductive success in populations:

    Q4: But, the evidence shows that natural selection is a capable designer and can create specified complexity. Isn’t that what Wicken said to begin with in 1979 when he said that “Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order’ . . .”?

    A4: We need to be clear about what natural selection is and does. First, you need a reproducing population, which has inheritable chance variations [[ICV], and some sort of pressure on it from the environment, leading to gradual changes in the populations because of differences in reproductive success [[DRS] . . . i.e. natural selection [[NS] . . . among varieties; achieving descent with modification [[DWM]. Thus, different varieties will have different degrees of success in reproduction: ICV + DRS/NS –> DWM. However, there is a subtlety: while there is a tendency to summarise this process as “natural selection,” this is not accurate. For the NS component does not actually ADD anything; it is a shorthand way of saying that less “favoured” varieties (Darwin spoke in terms of “races”) die off, leaving no [oops!] descendants. “Selection” is not the real candidate designer. What is being appealed to is that chance variations create new varieties. So, this is the actual supposed source of innovation — the real candidate designer, not the dying-off part. That puts us right back at the problem of finding the shoreline of Island Improbable by crossing a “sea of non-functional configurations” in which, as there is no function, there is no basis to choose from. So, we cannot simply extrapolate a theory that may relate to incremental changes within an island of function to the wider situation of the origin of functions. Macroevolution is not simply accumulated microevolution, not in a world of complex, configuration-specific function. (NB: The suggested “edge” of evolution by such mechanisms is often held to be about the level of a taxonomic family, like the cats or the dogs and wolves.)

    And, I daresay, the root of the tree of life issue comes up, as the first body plan origin to be explained on cogent reasoning rooted in sound empirical observations of reasonable pre-life situations. Including, origin of an encapsulated, gated, metabolising entity with a built-in von Neumann kinematic self replicator [vNSR] using inter alia genetic, coded information. (Where also in basic form, that was on the table since Paley’s second example of 1804, the thought exercise on the self-replicating time keeping watch; which easily explains why Darwin — who was very familiar with Paley — was so coy about origin of life in his Origin of Species.)

    The point being: no root, no shoot, no branches, no twigs; i.e. the first decisive issue is OOL. There, there was simply no pre-existing vNSR to allow for self-replication, so there was no differential reproductive success. So-called natural selection for cell-based life is therefore not on the cards until one can explain what actually needs to be addressed: an encapsulated, gated, organised metabolic automaton with vNSR. (Hypothetical scenarios about self-replicating molecules don’t count; and in case one sees this one again, the polymerase chain reaction [PCR] is NOT a case of self-replication.)

    Such an entity is chock full of FSCO/I, and as has been underscored for good reason the only empirically grounded adequate cause for FSCO/I is design. So it is reasonable for design to sit at the table of explanations from OOL on up. And so thereafter it is sitting at the table — as of right, not sufferance — when it comes to OO body plans etc.

    This underlying context becomes important to understand where JoeF goes wrong in trying a game of scrambling a pixellated picture, to try to undermine the law of conservation of information outlined above in the OP:

    The flaw in Dembski’s argument is that, to test the power of natural selection to put specified information into the genome, we must evaluate the same specification (“codes for an organism that is highly fit”) on it before and after. If you could show that the scrambled picture and the unscrambled picture do equally well in satisfying that same specification, you would go far to prove that natural selection cannot put adaptive information into the genome. Our flower example shows that there is a big difference in whether the original specification is satisfied before and after the permutation. Scrambling the sequence of a gene may not destroy its information content, if we have used a known permutation that can later be undone. But the scrambling certainly will destroy the functioning, and thus the fitness, of the gene. Likewise, unscrambling it can dramatically increase the fitness of the gene. Thus Dembski’s argument, in its original form, can be seen to be irrelevant.

    This misses the mark decisively. Scrambling a genome through random variations sufficient to destroy function leaves one in the sea of non-functioning configs, so the life form would first be non-viable. The notion of unscrambling then runs into two difficulties: (i) the non-viable life form has already been eliminated, and (ii) there is no record of the scrambling function, so it cannot be neatly inverted.

    But, what about drift among “junk” DNA?

    This is drifting all right, drifting in the sea of non-function. So, selection pressure per differential reproductive success is irrelevant. The only source of appeal is chance variations, and these would provide no means of guidance to where islands of function are.

    In short, one may scramble information easily enough by injecting noise; the problem is that it is much harder to get back to the current (or another) island of function than to keep drifting in the vast sea of gibberish, as the overwhelming proportion of sequences are gibberish.

    This problem comes out clearly in the next point to be clipped:

    Evolution does not happen by deterministic or random change in a single DNA sequence, but in a population of individuals, with natural selection choosing among them. The frequencies of different alleles change. Considering natural selection in a population, we can clearly see that a law of conservation of specified information, or even a law of conservation of information, does not apply there.

    In short, here we have a conflation of two distinct things:

    (A) incremental changes well within the step size of the FSCO/I limit within an island of function that would lead to adaptive specialisation of an existing functional body plan . . . micro evolution, being confused with:

    (B) large step changes required to move from one island of function to another, across the sea of non-functional gibberish . . . body plan level macro evolution [which per the pattern of body plans and in the run up to the Cambrian fossil life pattern, would require moving from 1 mn or so bits of genomic info in plausible "simple" cells to 10 - 100 mn bits, dozens of times over for the different major body plans].

    Patently, once we understand the evidence of deep isolation of islands of function (cf. the OP) in genomic and body plan organisation space, we are looking at very different phenomena in case A and case B. Indeed, the logic is that a genome which varies too much goes downhill or into the sea of non-function, so the variety will lose out in the differential reproductive success contest. Natural selection here functions as a conservative force, eliminating defective “sports.” [E.g. most fancy goldfish would be utterly non-viable in a real-world natural environment.]

    And if instead one appeals to non-functional DNA allowing chance variation, one is automatically drifting in the sea of gibberish.

    So, the following point falls apart (and points to another problem . . . time and pop size to fix changes):

    If we have a population of DNA sequences, we can imagine a case with four alleles of equal frequency. At a particular position in the DNA, one allele has A, one has C, one has G, and one has T. There is complete uncertainty about the sequence at this position. Now suppose that C has 10% higher fitness than A, G, or T (which have equal fitnesses). The usual equations of population genetics will predict the rise of the frequency of the C allele. After 84 generations, 99.9001% of the copies of the gene will have the C allele.

    This is an increase of information: the fourfold uncertainty about the allele has been replaced by near-certainty. It is also specified information — the population has more and more individuals of high fitness, so that the distribution of alleles in the population moves further and further into the upper tail of the original distribution of fitnesses.

    The Law of Conservation of Information has not considered this case.
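    As an aside, the quoted figures do check out arithmetically under the standard discrete-generation haploid selection recursion; a minimal sketch (the 0.25 starting frequency, 10% advantage and 84 generations are from the quote; the code framing is mine):

```python
# Haploid selection recursion: p' = w*p / (w*p + (1 - p))
# Four alleles at equal frequency; C starts at 0.25 with a 10% fitness edge.
p = 0.25
w = 1.1
for _ in range(84):
    p = w * p / (w * p + (1.0 - p))
print(f"{p:.6f}")  # 0.999001, i.e. 99.9001%, matching the quoted figure
```

    So the arithmetic itself is not in dispute; the question is what such a within-island increment does and does not show.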

    Notice, this hypothetical is about a smallest possible increment within an island of function, where there is no good evidence that there is instead a vast continent of function easily traversable by incremental changes. That runs smack into the logic of multiple-part functionality dependent on proper arrangement and coupling of components: namely, the vast bulk of possible arrangements of parts will not work.

    So, we are again seeing a strawman argument.

    Of course, the FSCO/I filter does not address this case; it was never meant to. And, the real issue is ducked.

    The next problem is that we now see how many generations it takes to fix small increments. Blend in population sizes for, say, whales, and generation lengths to suit, and we are in deep trouble relative to the claimed timelines for macro evolution. (Cf. Sternberg’s discussion as is embedded here.)

    The same basic problem comes out again:

    evolution does not do a completely random search. A reasonable population genetic model involves mutation, natural selection, recombination and genetic drift in a population of sequences. But we can make a crude caricature of it by having only one sequence, and making, at each step, a single mutational change in it. If the change improves the fitness, the new sequence is accepted. Suppose that we continue to do this until 10 000 different sequences have been examined. We will end with the best of those 10 000.

    Will this do better? In the real world, it will if we start from a slightly good sequence. Each mutation carries us to a sequence that differs by only one letter. These tend to be sequences that are somewhat lower, or sometimes somewhat higher, in fitness. On average they are lower, but the chance that one reaches a sequence that is better is not zero. So there is some chance of improving the fitness, quite possibly more than once. A fairly good way to find sequences with nonzero fitnesses is to search in the neighborhood of a sequence of nonzero fitness.

    In short, the matter is that a discussion of incremental changes within an island of function is conflated erroneously with the challenge of finding such islands.
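    The conflation can be made concrete with a toy model (entirely illustrative: the sequence length, island threshold and step budget are my assumptions, not JF’s). The same mutate-and-keep-improvements search that climbs easily on a smooth fitness gradient gets no traction when fitness is zero everywhere outside a narrow island:

```python
import random

random.seed(1)
ALPHABET = "ACGT"
L = 50
TARGET = [random.choice(ALPHABET) for _ in range(L)]

def matches(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def smooth_fitness(seq):
    # Every additional matching letter helps: a gradient the walk can follow.
    return matches(seq)

def island_fitness(seq):
    # Function only inside a narrow island: near-perfect match, else zero.
    m = matches(seq)
    return m if m >= L - 1 else 0

def hill_climb(fitness, steps=10_000):
    # Caricature search from the quote: one sequence, one-letter changes,
    # keep a change only if it improves fitness.
    seq = [random.choice(ALPHABET) for _ in range(L)]
    best = fitness(seq)
    for _ in range(steps):
        trial = seq.copy()
        trial[random.randrange(L)] = random.choice(ALPHABET)
        if fitness(trial) > best:
            seq, best = trial, fitness(trial)
    return best

print(hill_climb(smooth_fitness))  # climbs toward the full 50 matches
print(hill_climb(island_fitness))  # stays at 0: no slope outside the island
```

    The sketch settles no biology; it only illustrates why improving within an island of function and finding such an island in the first place are different problems.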

    It is evident that by refusing to examine the threshold of complexity involved in the design inference, JoeF has misled himself and those who look to him for leadership on this matter. Patently, a step change among four states is at most two bits of info, well within the reach of a random walk reinforced by a selection filter. Where are the other 498 bits required to pass the FSCO/I or CSI threshold for something within our solar system?

    Missing from the account.
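    The arithmetic behind that contrast is easy to spell out (the solar-system resource figures below, 10^57 atoms, 10^17 s, 10^14 events per second, are the rough illustrative ones commonly used in these threads, not exact values):

```python
from math import log2

# One of four DNA bases carries log2(4) = 2 bits of information.
print(log2(4))             # 2.0

# A 500-bit threshold corresponds to 2**500 possible configurations.
space = 2 ** 500
print(f"{space:.2e}")      # 3.27e+150

# Generous solar-system search resources: 10^57 atoms sampling
# 10^14 states per second for 10^17 seconds.
resources = 10**57 * 10**14 * 10**17
print(f"{resources / space:.1e}")  # 3.1e-63: the searchable fraction of the space
```

    In other words, even on generous assumptions the accessible sample is a vanishingly small fraction of the 500-bit configuration space.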

    And with that issue on the table, the whole critique collapses as misdirected at a strawman caricature.

    A theory that has some empirical warrant as accounting for incremental changes within islands of function for populations is being drastically stretched to try to explain something of a different order: the origin of novel body plans, requiring on the evidence not 500 – 1,000-bit increments of information, but 10 – 100 mn bits.

    The root of that seems to be a confusion that in effect assumes, without proper evidence, that there is a vast continent of functional genomes that can be traversed incrementally, step by step, from some universal common ancestor.

    The phenomenon of isolated protein folds speaks against that; the isolation of coded foundational descriptions speaks against that; the lack of the transitionals making up the root, trunk and branchings of the tree of life speaks against that; the population sizes and time needed to fix even fairly small changes and replace a previously dominant population speak against that; and more.

    Unfortunately, it seems that the blinding power of a dominant paradigm — a way of not seeing as much as a way of seeing (paradigms are double-edged swords) — has led to a failure to see such gaps.

    It seems that JF has here failed to do due diligence and needs to severely correct his critiques.

    KF

  10. BA77, OT but relevant on ideology:

    I wonder what would happen if we were to see instead:

    A significant majority – 71% of all faculty – agreed with the statement: “This country would be better off if Christian fundamentalists [ATHEISTS and fellow travellers] kept their religious [anti-theistic, radical secularist] beliefs out of politics.”

    KF

  11. Kairosfocus posted this:

    >> Critics of ID commonly argue that it is not science.>>

    1 –> This begs the question, by imposing a logically, epistemologically and historically unwarranted, unworkable a priori redefinition that boils down to: science is applied materialist philosophy. Cf. the critique here on, which gives details without hurling an elephant.

    Critics of ID commonly point out that the only difference between an evolutionary explanation of how biology works and how ID explains the same observations is that ID requires that an intelligent designer must be present. Occam’s Razor applies: if two hypotheses explain the same observations with the same accuracy, but the second explanation requires an additional cause, then ditch the second one.

    a –> Have you consulted the definition of design theory recently? If you had, you would not make that sort of claim: ID is the scientific investigation of the possibility of and empirical warrant for signs of design in our world, where there are — contrary to your schoolyard taunt level dismissal below — in fact quite clear phenomena that manifest that only intelligent design is a known adequate cause. KF [My responses will be interleaved, in part in response to that decision to play taunt games.]

    >> For its positive predictions of the behavior of a designer they have a good point. But not for its negative criticisms of the effectiveness of natural selection, which are scientific arguments that must be taken seriously and evaluated. >>

    Valid predictions are a feature of scientific theories that are likely to be correct.

    It would assist ID if it were to make predictions about how biology works from its (ID’s) premises about the requirement for design. For example, are there any predictions from ID about the nature of the designer? When and where the designer undertook its actions? How did it do its work? By what means? What are the characteristics of the designer?

    b –> The principal prediction of design theory is that some things in our world, the world of life and that of the cosmos generally, are such that they exhibit signs that are best explained causally on design and not on blind natural forces of chance and necessity.

    c –> As you know or should know, such is directly testable by simply providing a case, for instance of Wicken’s functionally specific, information-rich complex organisation per a wiring diagram, coming about in our observation by such blind forces of chance and necessity. This is eminently empirically testable, and in fact it is subjected to routine tests, and keeps on passing. (Indeed, every post in this thread is another passed test, as NONE of these FSCO/I-rich posts has come about by lucky noise on the Internet. As also you full well know or should know.)

    d –> This is an issue of trying to find out a truth about our world, that is subject to empirical warrant, and the continued support of the basic contention that there are observable signs that point to design as cause has patently revolutionary implications for origins science. Hence the many attempts to shoot it down; too often by questionable means.

    e –> As you know or should know, it is more than enough for design theory to be a scientific undertaking, that it shows such signs. To then demand all sorts of extraneous requisites, is then a piece of red herring distraction driven by selective hyperskepticism.

    2 –> Only an allusion is presented, in a context that then tries to present a tee shirt/editorial cartoon as substantially representing the design theory case. Cf just below.

    3 –> As was already shown, when two scientific theoretical claims conflict, one will need to show the limitations of the other. And in the case of NS, it is neither the source of innovations in bio-function and associated information, nor has it been observed to be able to account for origin of body plans. It does not even address the origin of cell based life.

    Nonsense. The whole of evolutionary biology aims to explain the origins of “body plans” and “innovations in bio-function”. In the context of the debate at the UD site, the point is that evolutionary biologists think that your version of “information” is incoherent.

    f –> The deliberate act of JoeF to represent a serious argument and movement by using a cartoonist’s cartoon on a tee shirt, made in response to a snide accusation that “the Visigoths are coming!”, as the substantive positive case being made by design theory is an inexcusable strawman caricature and rhetorical stunt that should be apologised for. Period.

    g –> You change the subject to how evolutionary biology seeks to explain the origin of body plans etc. Yes indeed, it has sought to do so since Darwin and Wallace. The problem is, we are here seeing consistent explanatory failure and insistence on a preferred explanation in the teeth of contrary evidence, for decades.

    h –> As for the notion that functionally specific complex organisation and associated information is “incoherent,” that is a claim of self-contradiction if anything. THAT is what is nonsense, as for instance such FSCO/I is manifest in every post in this thread, which requires string data structures with glyphs in sequences controlled by the syntax and semantics of English, further shaped by the context of the discussion in this thread. In addition, the world of technology around you manifests just such functionally constrained organisation of components and implied information, as can be seen by simply consulting AutoCAD files of blueprints.

    4 –> To point that out, in extensive technical arguments, as has been done for many years is to take NS seriously, so the pretence that a cartoon is an adequate summary of the case for design is a caricature of the worst sort.

    A case for design, without a case for a designer, is (how can I say this politely), trivial.

    i –> Not at all: the very intensity and rhetorical desperation of the response is a demonstration that the simple step of identifying that there are reliable signs observable in the natural world that point to design as best causal explanation, per what we know about causes, is revolutionary.

    5 –> And literally, that is exhibit 1 used by JoeF:

    >>Look at Figure 1, which shows a cartoon design from T-shirts sold by an ID website, Access Research Network, which also sells ID paraphernalia (I am grateful to them for kind permission to reproduce it).

    (click here for image) >>

    6 –> This is as classic an example of a strawman argument as I have ever seen.

    I agree with you. The cartoon is a strawman en gros.

    i –> In short, you IMPLY that JoeF was wrong to have misrepresented Design Theory by presenting its argument as a cartoon made in reply to an irresponsible accusation of barbarism and ignorant destructiveness, i.e. “The Visigoths are coming.”

    >>Figure 1. A summary of the major arguments of “intelligent design”, as they appear to its advocates, from Access Research Network’s website http://www.arn.org. Merchandise with the cartoon is available from http://www.cafepress.com/accessresearch. Copyright Chuck Assay, 2006; all rights reserved. Reprinted by permission.>>

    7 –> As the PS to the OP will show, this is not a scientific presentation, or a summary of it, but a retort to a declaration in the anti-ID literature, that the Visigoths (the ignorant barbarians bent on destruction and rapine) are coming.

    I don’t understand what you are saying here. The cartoon is a product of the creationist Access Research Network. Do you agree with its meaning, or do you not?

    j –> First, your conflation of design theory with creationism is itself a case of strawman distortion and invidious comparison intended to raise the spectre of the bogeyman of right wing theocratic tyranny and the like. This is the red herring led away to the strawman caricature soaked in ad hominems and set alight through invited snide inferences. This, to cloud, confuse, poison and polarise the atmosphere. [You are hereby invited to consult the WAC's here, on this gross error.]

    k –> Next the cartoon as indicated, is in fact a retort by an editorial cartoonist to a similar loaded false accusation, The Visigoths are coming, as the PS to the original post above shows. Your failure to respond to easily accessible evidence, is itself a demonstration of rhetoric in bad faith, with all due respect.

    l –> As both the original post and its antecedent, as well as the wider context of UD, show, there is such a thing as a substantial presentation of the scientific argument of design theory, one that has little to do with a well-merited cartoon retort to the false accusation of “The Visigoths [= barbarians] are coming.”

    m –> A cartoonist’s retort to what is a poisonously loaded and studied insult, is not to be construed as the main argument of a movement. That you want to sustain the pretence, speaks sadly revealing volumes.

    8 –> The design theorists took it up and laid out an OUTLINE at label level of the challenge to the Darwinian establishment, and the only thing that can be properly gleaned is that the establishment feels threatened and is challenged across a wide range of topics. Substance is not addressed in any detail in a cartoon.

    >>As the bulwark of Darwinism defending the hapless establishment is overcome, note the main lines of attack. In addition to recycled creationist themes such as the Cambrian Explosion and cosmological arguments about the fine-tuning of the universe, the ladder is Michael Behe’s argument about molecular machines (Behe 1996).>>

    “and the only thing that can be properly gleaned is that the establishment feels threatened and is challenged across a wide range of topics.”

    Until you supply evidence (concerning the nature of your designer, its mode of operation, and the times and places that it did its work) you should not be surprised that the “establishment” thinks you are blowing smoke.

    n –> Red herring-strawman tactic again. Design theory is what it is, and its point is patently revolutionary: there are credible, empirically warranted as reliable, signs of design as best causal explanation. It so happens that the world of life is full of them, as is the observed cosmos. Where does this lead a REASONABLE discussion?

    o –> Obviously, you do not want to go there. We can guess why, in light of the Lewontinian agenda of a priori imposed materialism redefining science, its methods and conclusions, highlighted here on. Citing a key clip:

    It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997. If you think the usual false accusation of “quote mining” has merit, kindly examine the fuller cite and discussion here on.]

    p –> Johnson’s retort is richly deserved:

    For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    9 –> “Recycled CREATIONIST themes” tries to make an invidious association, and to duck the responsibility of accounting, per observations and adequate empirical evidence, for the origin of body plans by inheritable chance variation and differential reproductive success across varieties. From Darwin’s admissions on the subject to this day, that has remained unanswered. So to label and dismiss by invidious association — we know the subtext of insinuations about right-wing theocratic religious agendas with racks and thumbscrews hiding up sleeves etc. — is irresponsible.

    There is nothing invidious in associating you with creationism. You do it yourself. You are the one who requires that biology can only work if a supernatural entity intervenes in its processes at some point (who knows when: maybe 10,000 years ago, maybe all the time, maybe only once).

    q –> Insistence on a misrepresentation: continuing misrepresentation. As you full well know or should know, there is no insistence in design theory on a supernatural entity intervening, and there is no Young Earth creationist timeline imposition on the age of either the earth or the cosmos. The willful persistence in a false assertion in defiance of duties of care to truth, accuracy and fairness, is outright deceptive.

    r –> In correction (for onlookers at minimum) I note, for the umpteenth time, that from the days of Plato in The Laws Bk X, the proper contrast to “natural” is not “supernatural,” but the ART-ificial, i.e. the intelligently designed. That is, “natural” denotes that which is by blind chance and/or mechanical necessity, and the ART-ificial, that which is by intelligent cause. Kindly, read here on. In addition, from the very first design theory technical work by Thaxton et al in the mid 1980′s, it has been openly and consistently acknowledged by design thinkers that the evidence in the world of life by itself does not currently warrant an inference as to whether a designer of life was within or beyond the observable cosmos. Indeed, it has been pointed out any number of times, by me and by others, that something like a molecular nanotech lab some generations beyond where Venter et al have reached would be adequate. In addition there is a side of design theory that does infer to design beyond the observed cosmos, one that follows up on discoveries by Hoyle [a holder of a Nobel-equivalent prize in astrophysics and a life-long agnostic . . . not exactly the right-wing fundy, theocratic would-be tyrant YECs of the slanderous caricature you have been alluding to all along . . . ] on cosmological, life-sustaining fine tuning. That work pivots on evidence that is in many ways connected to the standard model of cosmological origins, i.e. the Big Bang theory, which last I checked had a timeline of 13.7 BY to date.

    s –> If you cannot bring yourself to acknowledge so elementary a distinction, in the interests of making an invidious association and further implying false accusations of nefarious cultural intent, that speaks volumes, sheer volumes, and none of it to your benefit. In short, this is a pons asinorum.

    10 –> The shift in terminology from COSMOLOGICAL FINE TUNING (a scientific discussion since Hoyle et al, where Hoyle was a lifelong agnostic) to “cosmological arguments” is also loaded as this directly implies that the arguments in question are those of natural theology. There is a serious scientific cosmological fine tuning issue to be addressed on the scientific merits, not dodged by making snide insinuations that this is natural theology in disguise.

    Hint: science and theology are two incompatible modes of thought. Science works, theology doesn’t.

    t –> In the teeth of my pointing out in outline the cosmological design theory challenge and its roots in science, you insist on making up accusations and distortions. This speaks volumes.

    11 –> the issue about the observed origin of irreducible complexity, similarly, is not to be dismissed by saying that’s Behe’s argument. Have you had an empirically warranted answer to Menuge’s C1 – 5 criteria for exaptation . . . the usual attempted counter? If not, then the issue of irreducible complexity is very definitely still on the table. The criteria:

    For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

    C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

    C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

    C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

    C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

    C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

    ( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

    >>The other main attack, the battering ram, is the “information content of DNA” which is destroying the barrier of “random mutation”.>>

    And biology has demonstrated that each of these putative criteria has been met by actual biological organisms. So your point would be?

    u –> Bluffing based on just so stories. Simply show an example where per our observation, blind chance and mechanical necessity in an organism has constructed through incremental evolutionary steps, a significant irreducibly complex entity. Failing that, you are simply flailing.

    12 –> And your evidence that per observation, FSCO/I is reasonably a product of blind chance and mechanical necessity is? ________________

    13 –> Absent such, the evidence stands, that the only known causally adequate source of FSCO/I is design. So, we have every epistemic right to infer that FSCO/I is an empirically reliable sign of design as cause.

    FIASCO is your claim. Produce evidence that it exists, that it can be measured without prior knowledge of the system under observation (no smuggling allowed).

    v –> Descent into puerile school-yard taunts, sadly revelatory of underlying mentality. Have the common decency and respect to deal with a descriptive summary of functionally specific, complex organisation and/or associated information [FSCO/I]. And last I checked, my name was not Orgel, nor Wicken. As I have pointed out over and over again — but the strawman distortion is too tempting to give up in the face of its being exposed as oh so inconveniently false — FSCO/I is a summary of phenomena noted by distinguished origin of life theorists across the 1970′s. Let me pause to again cite the key references:

    WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

    ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]

    >>The “irreducible complexity of molecular machines” arguments of Michael Behe have received most of the publicity; William Dembski’s more theoretical arguments involving information theory have been harder for people to understand.>>

    14 –> Not so as I have noticed. Both have been discussed.

    >>There have been a number of extensive critiques of Dembski’s arguments published or posted on the web (Wilkins and Elsberry 2001; Godfrey-Smith 2001; Rosenhouse 2002; Schneider 2001, 2002; Shallit 2002; Tellgren 2002; Wein 2002; Elsberry and Shallit 2003; Edis 2004; Shallit and Elsberry 2004; Perakh 2004a, 2004b; Tellgren 2005; Häggström 2007). They have pointed out many problems. These range from the most serious to nit-picking quibbles . . .>>

    This depends on what someone has read. Whether Behe’s ideas or Dembski’s ideas are difficult to understand depends on how clearly they are expressed and how much attention they receive from people who understand the arguments they are making. In the case of Behe and Dembski, the counter-arguments have been comprehensive. It would help if you were to lay out what critiques you think are “most serious”.

    w –> JoeF has claimed to represent the cumulative main critiques of Behe and Dembski. I have simply taken him at his word, and his summary collapses into a collection of rhetorical stunts and refusals to engage the substance on its merits. Particularly revealing is the attempt to present a one-step mutation taking 80-odd generations to fix as an adequate retort to an argument addressing increments of information of 500 – 1,000 or more bits, as is his refusal to address the issue that the assumed continent of incrementally linked functional configurations flies in the teeth of abundant evidence and good reasons that FSCO/I will naturally be found in islands in a space of possible configs of components overwhelmingly dominated by non-functional gibberish.
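    The 500 – 1,000 bit threshold invoked above can be motivated with simple arithmetic. Here is a sketch of the familiar Dembski-style universal probability bound reasoning, using the conventional round figures (order-of-magnitude estimates, not measurements):

```python
import math

# Conventional round figures used in universal-probability-bound arguments:
atoms_in_observable_universe = 10**80
fastest_event_rate_per_second = 10**45   # roughly the reciprocal of the Planck time
generous_duration_in_seconds = 10**25    # a deliberate overestimate (~10^17 s actual)

# Upper bound on the number of elementary events the observable universe
# could have hosted:
max_events = (atoms_in_observable_universe
              * fastest_event_rate_per_second
              * generous_duration_in_seconds)

bits = math.log2(max_events)
print(f"max events ~ 10^{round(math.log10(max_events))}, i.e. ~ 2^{bits:.0f}")
```

    On these figures the bound works out to 10^150, about 2^498, which is where the ~500-bit figure comes from: a specific configuration drawn from a space of more than 2^500 possibilities is beyond the blind-search resources of the observed cosmos on this estimate.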

    15 –> Hurling the elephant. That the ideologically committed have tried rebuttals is no news. What is not being shown is how such rebuttals have met the criterion of warrant that is decisive: show FSCO/I as present in life forms and other relevant contexts, produced by blind chance and mechanical necessity.

    More FIASCO. First show that FIASCO exists, and then show how to measure it without any “background knowledge” (no smuggling allowed).

    x –> More schoolyard taunting, and refusal to acknowledge the reality of something as evident as the difference between posts in this thread in English and random strings like fi3egwsfi, or determined ones like hhhhhhhhhh. If this is the level of denial of patent reality required to sustain the evolutionary materialism-dominated neo-darwinian paradigm for macroevolution, and the associated claims on the origin of life by blind chemistry and physics in whatever prebiotic environment is popular just now, that is telling.
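    The three kinds of strings just contrasted (functional English text, random gibberish, simple repetition) can be roughly separated with an off-the-shelf compressor, in the spirit of the Kolmogorov-complexity discussion in this series. This is only an illustrative sketch: compressibility distinguishes order from randomness, but by itself it does not detect function.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: low for ordered, near 1 for random."""
    return len(zlib.compress(data, 9)) / len(data)

ordered = b"h" * 1000        # simple repetition: highly compressible "order"
random_ = os.urandom(1000)   # noise: essentially incompressible "randomness"
english = (                  # functional English text: in between
    b"Organized systems must be assembled element by element according to an "
    b"external wiring diagram with a high information content. Organization, "
    b"then, is functional complexity and carries information. It is non-random "
    b"by design or by selection, rather than by a priori necessity. Ordered "
    b"systems, by contrast, are generated according to simple algorithms and "
    b"therefore lack complexity, as in crystallographic order."
)

for name, data in [("ordered", ordered), ("english", english), ("random", random_)]:
    print(f"{name}: {compression_ratio(data):.2f}")
```

    The repetitive string compresses to a tiny fraction of its size, the random bytes do not compress at all (the ratio even exceeds 1.0 from format overhead), and English text lands in between: the ordered/random axis Wicken distinguishes from organization.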

    >>Digital codes

    Stephen Meyer, who heads the Discovery Institute’s program on ID, describes Dembski’s work in this way:

    We know that information — whether, say, in hieroglyphics or radio signals — always arises from an intelligent source. …. So the discovery of digital information in DNA provides strong grounds for inferring that intelligence played a causal role in its origin. (Meyer 2006)

    What is this mysterious “digital information”?>>

    16 –> Joe F pretends here that there is no genetic code dependent on the discrete state of elements in the DNA strings for its meaning.

    I am sorry, but your statement is incoherent. What do you mean?

    y –> Stunning revelation of stubborn, willful resort to rhetorical stunts. You full well know or should know that a genetic code exists, demonstrated across the 1950′s – 60′s, with Nobel Prizes duly awarded. That code depends on strings of monomers read in three-letter codons that specify start, stop and load-this-amino-acid-next instructions, etc. The monomers are discrete-state, symbolic elements used in protein synthesis. As is common knowledge. But no, the rhetorical pretence that design thinkers of any level are ignorant and incoherent destructive barbarians is too tempting.
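    The point about discrete, three-letter codons can be made concrete with a toy translator. This sketch uses only a handful of the 64 standard codon assignments; the table below is a tiny illustrative subset, not the full genetic code:

```python
# A few entries from the standard genetic code (coding-strand DNA codons).
# ATG doubles as the canonical "start" signal; TAA/TAG/TGA are "stop" signals.
CODON_TABLE = {
    "ATG": "Met",
    "TTT": "Phe", "AAA": "Lys", "GGC": "Gly", "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list:
    """Read a coding strand codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "???")  # "???" marks codons not in this subset
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGTTTAAAGGCTAA"))  # ['Met', 'Phe', 'Lys', 'Gly']
```

    The mapping is arbitrary at the chemical level of the bases themselves, which is exactly why it functions as a code: the codon is a discrete symbol, and the downstream machinery (ribosome, tRNAs) implements the table.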

    >> Has a message from a Designer been discovered? When DNA sequences are read, can they be converted into English sentences such as: “Copyright 4004 bce by the intelligent designer; all rights reserved”? Or can they be converted into numbers, with one stretch of DNA turning out to contain the first 10 000 digits of π? Of course not.>>

    17 –> red herring and strawmen caricatures. DNA has been known to have digitally coded, specifically functional complex information since 1953 – 1957. That is what needs to be accounted for. That this is being diverted, speaks volumes.

    I assume that means your answer is: No, I have no evidence that DNA contains any pre-determined messages. Thanks for confirming.

    z –> Doubling down on a misrepresentation, yet another rhetorical stunt. DNA contains digitally coded algorithmic information and associated regulatory codes, as you know or full well should know. That digital information is used by molecular nanomachines to assemble proteins, the workhorse molecular machines of the cell. The pretence that this is not known context, and that design thinkers are ignorant, incoherent and destructive dangerous barbarians, is hereby revealed to be a case of willfully speaking in defiance of duties of care to accuracy, truth, and fairness, in the hopes of profiting from such misrepresentation. That is deception, willful deception, not to mention a waste of our time.

    >> If anything like this had happened, it would have been big news indeed. You would have heard by now.>>

    18 –> Strawman.

    Or a missed opportunity.

    aa –> More of the same.

    >> No, the mysterious digital information turns out to be nothing more than the usual genetic information that codes for the features of life, information that makes the organism well-adapted. The “digital information” is just the presence of sequences that code for RNA and proteins — sequences that lead to high fitness. >>

    19 –> So, JoeF actually knows what is to be addressed, but by suitably setting up a strawman, he can pretend that the issue to be addressed on the merits need not be so addressed: it is familiar, so we don’t need to account for it. FAIL.

    What needs to be addressed is the answer to the question: how does genetic material capture changes in the environment in which organisms live? Biology tries to do that, ID just asserts that somethingdidit (but not nature).

    bb –> Again, refusal to address what is material, in order to distract attention and go down the road of red herrings, led away to strawman distortions laced with ad hominems, multiplied here by drumbeat repetition of resulting big lies [case in point here complete with turnabout accusation compounding tactic that projects blame to the other side . . . ] — and yes, for more than sufficient cause as shown above, I am saying that at this point this is an outright deceptive propaganda tactic that you are carrying forward in defiance of your own duties of care to accuracy, truth and fairness — as though that insistent repetition would convert them into truth.

    >> Now we already knew that they were there. Most biologists would be surprised to hear that their presence is, in itself, a strong argument for ID — biologists would regard them as the outcome of natural selection. To see them as evidence of ID, one would need an argument that showed that they could only have arisen by purposeful action (ID), and not by selection. Dembski’s argument claims to establish this. >>

    20 –> What is the known, observed source of complex functional digital codes backed up by organised implementing machinery, again?

    1. What humans do. 2. What all biological organisms do.

    cc –> Ducking the point: “known, observed source.”

    dd –> As in, we did not observe the cause of biological systems, so we are here forced to use the methods of inference of historical/origins science to infer on the uniformity principle. Namely, that we have in hand credible traces of a remote past that we did not observe and cannot observe. We are interested in understanding the processes that have shaped what we observe. We therefore examine cases in our observation of candidate causes at work and their effects and characteristic traces. We identify that certain effects are reliable — inductively speaking — signs of particular causes in action. We then infer that similar traces from the past are best explained on the action of these causes. Which is a widespread practice in origins science.

    ee –> What is plainly happening here is that now that the uniformity-principle shoe is on the other foot [pointing to that ever so unwelcome causal factor, design], there is a game of selective hyperskepticism driven by an ideological Lewontinian a priori materialistic imposition. Boiled down: willful question-begging.

    ff –> Next, in that context, there is one observed source of such information-processing systems: knowledgeable, skilled designers. Being human is not a sufficient or inherently relevant criterion. What is, is knowledge and skill. There is no good reason to infer that humans who are computer designers etc. exhaust the set of actual or potential intelligent designers.

    21 –> Has there been a surprise discovery and observation of such systems spontaneously appearing in simulations of warm little ponds or the like, so that we can show on observational warrant that FSCO/I, and particularly digital codes and implementing machinery, can be and has been produced by blind chance and mechanical necessity? The absence of a Nobel Prize for that says a loud NO.

    The answer is we don’t yet know. The research is continuing. Leaping to the assumption that it is impossible for life to emerge from non-life is premature. Your call.

    ff –> Deliberately mis-labelling as “assumption” a well-warranted inductive inference on what we do know: namely, the only known adequate cause of FSCO/I, which, as Wicken et al pointed out, is a known characteristic of life.

    22 –> Similarly, has there been a demonstration per empirical observation to warrant the claim that the origin of novel body plans involving ~ 10 – 100+ Mn bits of additional FSCO/I, dozens of times over, has been accounted for? Again, NO. (And the context for this present exchange is that Meyer is about to release further documentation on the point.)

    Since FIASCO has not been empirically demonstrated to be measurable, your question is incoherent.

    gg –> Drumbeat repetition of a schoolyard taunt and a willful falsehood maintained by refusal to acknowledge what ASCII text in English is, as just one example. That, while producing such text in English. This is self-referential absurdity on TA’s part.

    23 –> In short, JoeF is trying to sit on the collective authority of Biologists without the necessary back up of warrant on the empirically grounded merits. This is a bluff, not a serious argument.

    Hang on a sec, are you saying that the “collective authority of Biologists” has no reason to be taken seriously? I believe they do, and you are doing the bluffing.

    hh –> I am saying that no authority, individual or collective, is any stronger than his/her/their facts, assumptions and reasoning. As in wasn’t that the alleged premise of “free thought”? (or is it that you think that dressing up in Lab coats and pronouncing solemnly in the name of Science under control of a priori materialist ideology confers a power to capture truth that wearing ecclesiastical robes does not? And BTW, in case you want it, here is a context of warrant at 101 level regarding the claims of the Christian view. The common assertions of a cook-up are patently false.)

    ____________

    So far Joe F’s essay is — as shown in outline — long on rhetorical stunts, short on warrant.

    Not good enough, not by a long shot.

    KF

    Of course.

    ii –> And, TA, with all due respect, you have followed exactly in the same path of rhetorical stunts and fallacies of misrepresentation and atmosphere-poisoning substituting for actual serious engagement on the merits. Please, pull up your socks. KF

  12. TA: It is clear that, in order to object to my comments, you have had to resort to silly schoolyard taunts and present willful falsehoods as though they were facts; refuse to acknowledge that there is such a thing as a digital genetic code — thus a linguistic phenomenon — that functions algorithmically in the cell; and, while writing posts in ASCII-coded English text, pretend that there is no such thing as functionally specific complex organisation and associated information [FSCO/I]. That speaks volumes on the reductio ad absurdum of the objections to design. Accordingly, I have marked up your above remarks, having released them from moderation (as I recently discovered that I have power to do in my own threads). Please, do better next time. KF

  13. Joe: Any further thoughts on the markup I have done? KF

  14. How about this strawman:

    Note that if Dembski’s arguments were valid, they would make adaptation by natural selection of any organism, in any phenotype, essentially impossible.

    And the strawman above pertaining to reducing the uncertainty of a gene = an increase of information.

    I just cannot believe that Joe Felsenstein is a professor at a university.

  15. Joe: This is one form of the conflation between small increments that are not beyond the complexity threshold (where micro evo is accepted by even YECs), and the need to find new islands of function by blind search that is implied by Darwinian Macroevo, which is what the FSCO/I challenge targets. So, the real pivot of the origin of body plans question is the Darwinist tree of life implication of a vast continent of function incrementally accessible through stepwise change (which, if true, would be the dominant feature of the fossil record and the world of life as we see . . . but it obviously isn’t), and the empirically supported reality of isolation of islands of function in relevant config spaces. This has been pointed out over and over, but it seems rhetorically convenient to the sort of Darwinist advocates we are facing, to substitute a convenient strawman target. Utterly revealing on the true balance on the merits. KF

  16. As to timothya’s claim here:

    science and theology are two incompatible modes of thought. Science works, theology doesn’t.

    Actually timothya, as niwrad elegantly pointed out in the “Comprehensibility of World” thread,,,

    Comprehensibility of the world
    Excerpt: ,,,Bottom line: without an absolute Truth, (there would be) no logic, no mathematics, no beings, no knowledge by beings, no science, no comprehensibility of the world whatsoever.
    http://www.uncommondescent.com.....the-world/

    Theism is a required presupposition in ‘science’. In fact it can be forcefully argued that ‘modern science’ would have never even gotten off the ground in the first place without ‘improperly’ injecting the Theistic philosophy into science. Particularly ‘improperly’ injecting Christian Theism into it!

    Christ and Science – Stanley L. Jaki
    Excerpt:,,Why is it that although that law appears to be so natural, it came to be formulated in none of the great ancient (and pagan) cultures, but in the medieval Christian West? The question is momentous because exact science assures control over nature and secures for the modern West its global dominance. As shown in this booklet, which summarizes its author’s major studies on the subject, the answer to that question lies with a particular facet of belief in Christ as the “only begotten Son of God.” There is, indeed, a very deep reason, both scientific and theological, that justifies the tying of Christ and science together.,,,
    http://www.realviewbooks.com/c.....l#chriscie

    ,,, Sure science is dependent on empirical evidence for validating various competing ‘interpretations’ within the Theistic philosophy, but we must never forget that unless Theism is held as unconditionally true throughout a ‘scientific’ investigation then the entire enterprise of science winds up in epistemological failure, as is evidenced so clearly by Boltzmann’s Brain and Plantinga’s Evolutionary Argument Against Naturalism. It is not that Theists are demanding that Theism is the only answer allowed to be considered true prior to investigation, as atheists demand with their artificial imposition of methodological naturalism onto science; it is that if Theism is not held as unconditionally true prior to scientific investigation, then nothing else can ever be held as unconditionally true thereafter!

    Furthermore, as to Epistemological Naturalism, which holds that science is the only source of knowledge, Dr. Craig states it is a false theory of knowledge since,,,

    a). it is overly restrictive
    and
    b) it is self-refuting (“‘science’ is the only source of knowledge” is a philosophical claim about reality that is itself not deduced from ‘science’)

    Moreover, Dr. Craig states, epistemological naturalism does not imply metaphysical naturalism. In fact an Epistemological Naturalist can and should be a Theist, Dr. Craig observes, since Metaphysical Naturalism is a reductio ad absurdum on (at least) the eight following points:

    1. The argument from the intentionality (aboutness) of mental states implies non-physical minds (dualism), which is incompatible with naturalism
    2. The existence of meaning in language is incompatible with naturalism, Rosenberg even says that all the sentences in his own book are meaningless
    3. The existence of truth is incompatible with naturalism
    4. The argument from moral praise and blame is incompatible with naturalism
    5. Libertarian freedom (free will) is incompatible with naturalism
    6. Purpose is incompatible with naturalism
    7. The enduring concept of self is incompatible with naturalism
    8. The experience of first-person subjectivity (“I”) is incompatible with naturalism

    I strongly suggest watching Dr. Craig’s presentation, that I have linked, to get a full feel for just how insane the metaphysical naturalist’s position actually is.

    Is Metaphysical Naturalism Viable? – William Lane Craig – video
    http://www.youtube.com/watch?v=HzS_CQnmoLQ

    Moreover it can be forcefully argued that not only was Christian Theism required for the beginnings of ‘modern science’ but that to bring ‘modern science’ to successful completion (at least as far as physics and mathematics go for a ‘theory of everything’), an understanding of Christ’s centrality in reality must once again be accepted into ‘science’:

    The Center Of The Universe Is Life – General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin – video
    http://vimeo.com/34084462

  17. timothya:

    Scientific Naturalism Will NEVER Lead to God’s Existence – video
    http://www.youtube.com/watch?v=hhXzxE_MMGk

  18. timothya:

    Why No One (Can) Believe Atheism/Naturalism to be True – video
    Excerpt: Since we are creatures of natural selection, we cannot totally trust our senses. Evolution only passes on traits that help a species survive, and is not concerned with preserving traits that tell a species what is actually true about life.
    Richard Dawkins – quoted from “The God Delusion”
    http://www.youtube.com/watch?v=N4QFsKevTXs

  19. Kantian Naturalist

    Paul Churchland, perhaps one of the most forceful proponents for naturalism today, has in fact responded to Plantinga’s EAAN. Here’s the citation:

    “Is Evolutionary Naturalism Epistemologically Self-Defeating?”

    Paul Churchland
    Philo 12 (2):135-141 (2009)

    Abstract: Alvin Plantinga argues that our cognitive mechanisms have been selected for their ability to sustain reproductively successful behaviors, not for their ability to track truth. This aspect of our cognitive mechanisms is said to pose a problem for the biological theory of evolution by natural selection in the following way. If our cognitive mechanisms do not provide any assurances that the theories generated by them are true, then the fact that evolutionary theory has been generated by them, and even accepted by them, provides no assurance whatever that evolutionary theory is true. Plantinga’s argument, I argue, innocently assumes that the (problematic) “truth-tracking character” of our native cognitive mechanisms is the only possible or available source of rational warrant or justification for evolutionary theory. But it isn’t. Plantinga is ignoring the artificial mechanisms for theory creation and theory-evaluation embodied in the complex institutions and procedures of modern science.

    Churchland’s latest, Plato’s Camera, also fills in quite a bit of the story he thinks naturalism requires. I’m not entirely convinced that Churchland’s approach does all the work he thinks it does, because I’m less confident than he is that we can treat brain-states as bearers of semantic content, although they are clearly implicated in the causes of semantic content. But brain-states aren’t merely “syntactical”, either — they do have representational content — but is that enough to warrant the name of semantic content? Still working on this one!

  20. @KN #19

    Churchland: Plantinga is ignoring the artificial mechanisms for theory creation and theory-evaluation embodied in the complex institutions and procedures of modern science.

    The procedures of modern science don’t cover many subjects, and certainly not metaphysics - e.g. naturalistic beliefs - which is the point Plantinga wanted to make.

  21. Kantian Naturalist

    Fair enough, Box, but Churchland’s point is that the procedures of modern science, such as evolutionary theory, do not depend on the reliability of our ordinary belief-formation mechanisms. So if evolutionary theory shows that our ordinary belief-formation mechanisms are not altogether reliable, that doesn’t undermine evolutionary theory. So the EAAN doesn’t work.

    Put otherwise, all that a good naturalist (like Churchland) need be committed to is that our ordinary belief-formation mechanisms are generally reliable about some things, and that scientific procedures are highly artificial (so not “natural”, in one sense) but highly reliable techniques for arriving at much more reliable (though often counter-intuitive) beliefs.

    So there’s no paradox involved in holding the second-order, scientifically-informed belief that our first-order, ordinary beliefs are not perfectly reliable. We do, on the whole, take our ordinary beliefs to be perfectly reliable — and that, too, is a second-order belief. What we need, and in fact have, is the third-order belief that beliefs formed through scientific techniques tend to be more reliable than beliefs arrived at through other means. So given two competing beliefs, I have good reason to prefer the one that has been arrived at by well-established scientific practices — even if that belief is a second-order belief about the reliability of my first-order, ordinary perceptual/empirical beliefs. The EAAN paradox is avoided because the reliability of scientific institutions and practices doesn’t depend on the reliability of ordinary, first-order perceptual beliefs.

    More generally: the structure of beliefs is not a pyramid, with scientific beliefs at the top resting on a foundation of ordinary, perceptual beliefs at the bottom. Plantinga is so committed to foundationalism that he doesn’t seem to appreciate that an anti-foundationalist epistemology is immune to his skeptical argument — whereas Churchland, following in the footsteps of Hegel, Peirce, Sellars, and Quine, is working out a very interesting anti-foundationalistic and naturalized epistemology.

    Plantinga might have a point if foundationalism were the only live option in epistemology, but it isn’t, so he’s not.

  22. KN #21: (..) Churchland’s point is that the procedures of modern science, such as evolutionary theory, do not depend on the reliability of our ordinary belief-formation mechanisms.

    KN, Churchland may very well have a point about certain empirical scientific knowledge, but surely this does not include the pseudoscience called evolutionary biology!

  23. Kantian Naturalist

    Ok, but if Churchland’s response to Plantinga is right, then appealing to Plantinga won’t help you at all in attacking evolutionary biology.

  24. KN, as usual you appeal to nonsense to support your nonsense:

    Plantinga is ignoring the artificial mechanisms for theory creation and theory-evaluation embodied in the complex institutions and procedures of modern science.

    Plantinga’s point is not to refute evolution on empirical grounds (as if evolution had any observational evidence to refute), his point is to refute naturalism on cognitive reliability grounds i.e. naturalism cannot support a consistent reliable epistemology! Ironically KN, YOU YOURSELF, as well as all other atheists, are the ones who refuse to address the fact that the pseudo-science of Darwinism has no solid empirical warrant, no falsification criteria, to be considered science in the first place! And here you accuse Plantinga, in your citation, of ignoring empirical warrant. Are you completely oblivious to what you just did?

    “On the other hand, I disagree that Darwin’s theory is as ‘solid as any explanation in science.’ Disagree? I regard the claim as preposterous. Quantum electrodynamics is accurate to thirteen or so decimal places; so, too, general relativity. A leaf trembling in the wrong way would suffice to shatter either theory. What can Darwinian theory offer in comparison?”
    (Berlinski, D., “A Scientific Scandal?: David Berlinski & Critics,” Commentary, July 8, 2003)

    “nobody to date has yet found a demarcation criterion according to which Darwin(ism) can be described as scientific” – Imre Lakatos (November 9, 1922 – February 2, 1974), a philosopher of mathematics and science; quote as stated in his 1973 LSE Scientific Method Lecture

    Oxford University Seeks Mathemagician — May 5th, 2011 by Douglas Axe
    Excerpt: Grand theories in physics are usually expressed in mathematics. Newton’s mechanics and Einstein’s theory of special relativity are essentially equations. Words are needed only to interpret the terms. Darwin’s theory of evolution by natural selection has obstinately remained in words since 1859. …
    http://biologicinstitute.org/2.....emagician/

    Macroevolution, microevolution and chemistry: the devil is in the details – Dr. V. J. Torley – February 27, 2013
    Excerpt: After all, mathematics, scientific laws and observed processes are supposed to form the basis of all scientific explanation. If none of these provides support for Darwinian macroevolution, then why on earth should we accept it? Indeed, why does macroevolution belong in the province of science at all, if its scientific basis cannot be demonstrated?
    http://www.uncommondescent.com.....e-details/

  25. Churchland’s response is only right if evolutionary biology were a product of the ‘infallible’ procedures of modern science.
    The few (incomplete) empirical truths that science provides us are no foundation for trusting our cognitive mechanisms if those mechanisms are produced by Darwinian evolution.
    I say ‘incomplete’ because even a science such as physics leaves us with many questions.

    ‘Why do the constants and parameters of theoretical physics obey such tight constraints? If this is one question, it leads at once to another. The laws of nature are what they are. They are fundamental. But why are they true? Why do material objects attract one another throughout the universe with a kind of brute and aching inevitability? Why is space-and-time curved by the presence of matter? Why is the electron charged? Why? Yes, why?’
    Berlinski, p. 111, The Devil’s Delusion.

  26. notes:

    The Heretic – Who is Thomas Nagel and why are so many of his fellow academics condemning him? – March 25, 2013
    Excerpt: Neo-Darwinism insists that every phenomenon, every species, every trait of every species, is the consequence of random chance, as natural selection requires. And yet, Nagel says, “certain things are so remarkable that they have to be explained as non-accidental if we are to pretend to a real understanding of the world.”
    Among these remarkable, nonaccidental things are many of the features of the manifest image. Consciousness itself, for example: You can’t explain consciousness in evolutionary terms, Nagel says, without undermining the explanation itself. Evolution easily accounts for rudimentary kinds of awareness. Hundreds of thousands of years ago on the African savannah, where the earliest humans evolved the unique characteristics of our species, the ability to sense danger or to read signals from a potential mate would clearly help an organism survive.
    So far, so good. But the human brain can do much more than this. It can perform calculus, hypothesize metaphysics, compose music—even develop a theory of evolution. None of these higher capacities has any evident survival value, certainly not hundreds of thousands of years ago when the chief aim of mental life was to avoid getting eaten. Could our brain have developed and sustained such nonadaptive abilities by the trial and error of natural selection, as neo-Darwinism insists? It’s possible, but the odds, Nagel says, are “vanishingly small.” If Nagel is right, the materialist is in a pickle. The conscious brain that is able to come up with neo-Darwinism as a universal explanation simultaneously makes neo-Darwinism, as a universal explanation, exceedingly unlikely.,,,
    ,,,Fortunately, materialism is never translated into life as it’s lived. As colleagues and friends, husbands and mothers, wives and fathers, sons and daughters, materialists never put their money where their mouth is. Nobody thinks his daughter is just molecules in motion and nothing but; nobody thinks the Holocaust was evil, but only in a relative, provisional sense. A materialist who lived his life according to his professed convictions—understanding himself to have no moral agency at all, seeing his friends and enemies and family as genetically determined robots—wouldn’t just be a materialist: He’d be a psychopath.
    http://www.weeklystandard.com/.....tml?page=3

    The following interview is sadly comical, as an evolutionary psychologist realizes that neo-Darwinism can offer no guarantee that our faculties of reasoning will correspond to the truth, not even for the truth that he is purporting to give in the interview (which raises the question of how he was able to come to that particular truthful realization in the first place, if neo-Darwinian evolution were actually true):

    Evolutionary guru: Don’t believe everything you think – October 2011
    Interviewer: You could be deceiving yourself about that.(?)
    Evolutionary Psychologist: Absolutely.
    http://www.newscientist.com/ar.....think.html

    Evolutionists Are Now Saying Their Thinking is Flawed (But Evolution is Still a Fact) – Cornelius Hunter – May 2012
    Excerpt: But the point here is that these “researchers” are making an assertion (human reasoning evolved and is flawed) which undermines their very argument. If human reasoning evolved and is flawed, then how can we know that evolution is a fact, much less any particular details of said evolutionary process that they think they understand via their “research”?
    http://darwins-god.blogspot.co.....their.html

    “Atheism turns out to be too simple. If the whole universe has no meaning, we should never have found out that it has no meaning…”
    CS Lewis – Mere Christianity

    “But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?” – Charles Darwin – Letter To William Graham – July 3, 1881

  27. Kantian Naturalist

    BornAgain77,

    I think you’ve misunderstood Churchland’s response to Plantinga. (Or perhaps Churchland has misunderstood Plantinga?)

    Churchland’s response takes it that Plantinga’s argument goes as follows:

    (1) Evolutionary biology strongly suggests that our first-order, ordinary beliefs (e.g. perceptual beliefs, beliefs about probability, memories) are much less reliable than we ordinarily take them to be;

    (2) But evolutionary theory, like all scientific theories, depends upon those ordinary beliefs;

    (3) So, evolutionary theory undermines itself — anyone who accepts it has good reason not to accept it.

    Churchland thinks that (1) is true, but that (2) is not. The reason Churchland thinks that (2) is false is that scientific theories are not based on ordinary beliefs, as myths, fables, and old wives’ tales are, but on highly complex institutions and practices that have taken us the better part of two thousand years to develop.

    But, if (2) is false, then (3) doesn’t follow from (1), and so evolutionary theory isn’t self-undermining, even if (1) is correct.
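
    The logical point here can be made explicit. Below is a minimal, hypothetical sketch in Lean; the propositional labels P1–P3 are mine, standing in for premises (1)–(3) above. It shows that the inference to (3) is valid only via a bridge premise that uses (2), so denying (2) blocks the conclusion without touching (1).

    ```lean
    -- P1: our ordinary belief-formation mechanisms are less reliable
    --     than we take them to be
    -- P2: evolutionary theory depends on those ordinary beliefs
    -- P3: evolutionary theory undermines itself
    variable (P1 P2 P3 : Prop)

    -- With the bridge premise (P1 ∧ P2 → P3), the argument goes through:
    example (bridge : P1 ∧ P2 → P3) (h1 : P1) (h2 : P2) : P3 :=
      bridge ⟨h1, h2⟩

    -- Without P2 there is no route from P1 alone to P3, which is
    -- Churchland's move as reconstructed here: reject (2), and (3)
    -- no longer follows, even granting (1).
    ```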

    The reason Churchland thinks that (2) is false is that scientific theories are not based on ordinary beliefs, as myths, fables, and old wives’ tales are, but on highly complex institutions and practices that have taken us the better part of two thousand years to develop.

    This doesn’t even begin to supply a response to Plantinga’s argument.

    The complexity of the institution doesn’t matter, just as the complexity of a computer does nothing to guarantee the accuracy of its results. (If you think otherwise, just ask yourself whether it’s possible for an incredibly complicated computer to consistently give incorrect answers.) The same holds for the complexity of the practices.

    Put otherwise, all that a good naturalist (like Churchland) need be committed to is that our ordinary belief-formation mechanisms are generally reliable about some things, and that scientific procedures are highly artificial (so not “natural”, in one sense) but highly reliable techniques for arriving at much more reliable (though often counter-intuitive) beliefs.

    Plantinga’s argument takes aim at the very idea that our ‘ordinary belief-formation mechanisms’ are ‘generally reliable’. Likewise, the ‘artificiality’ of scientific procedures lends no assistance here, because it’s not as if artificiality confers accuracy. At the end of the day, you’re still dealing with human beings arriving at beliefs based on what they believe to be valid data, etc. Scientific procedures and practices are themselves products of human beliefs and ideas.

    Really, KN, this is – certainly as you’ve summarized it so far – a really bad response to Plantinga. It’s a little like saying that, while we can’t trust the claims and theories of a completely irrational person, we can trust the claims and theories produced by a computer manufactured and programmed by that completely irrational person. If you see the problem with that proposition, you’re going to see the problem with Churchland’s reply to Plantinga. It either sneaks in and assumes the very thing under question, or it totally ignores the problem posed to begin with.

  29. KN, your primary problem with ‘science’, and that of other atheists, is best summed up by these quotes from Plantinga:

    Philosopher Sticks Up for God – NY TIMES
    Excerpt: “Theism, with its vision of an orderly universe superintended by a God who created rational-minded creatures in his own image, “is vastly more hospitable to science than naturalism,” with its random process of natural selection, he (Plantinga) writes. “Indeed, it is theism, not naturalism, that deserves to be called ‘the scientific worldview.’””

    “Modern science was conceived, and born, and flourished in the matrix of Christian theism. Only liberal doses of self-deception and double-think, I believe, will permit it to flourish in the context of Darwinian naturalism.”
    ~ Alvin Plantinga

    Atheists deny any outside perspective on the ‘natural’ world (i.e., they deny they have a ‘mind’ with free will) that would enable them to make unbiased judgments about it. As long as atheists maintain that they are nothing more than accidental products of the ‘natural’ world, with no mind, they will always lack the proper perspective that enables people to judge whether or not our perceptions of the ‘natural’ world are reliable. Your predicament reminds me a little bit of David Chalmers’s ‘zombie argument’ for consciousness:

    David Chalmers on Consciousness – video
    http://www.youtube.com/watch?v.....age#t=127s

    Moreover, consciousness is found to be a primary element of reality rather than a secondary element of reality as atheists hold:

    Quantum Mechanics – Double Slit Experiment. Is anything real? (Prof. Anton Zeilinger)
    http://www.youtube.com/watch?v=ayvbKafw2g0

    1. Consciousness either preceded all of material reality or is an ‘epiphenomenon’ of material reality.
    2. If consciousness is an ‘epiphenomenon’ of material reality, then consciousness will be found to have no special position within material reality. Conversely, if consciousness precedes material reality, then consciousness will be found to have a special position within material reality.
    3. Consciousness is found to have a special, even central, position within material reality.
    4. Therefore, consciousness is found to precede material reality.

    Four intersecting lines of experimental evidence from quantum mechanics that shows that consciousness precedes material reality (Wigner’s Quantum Symmetries, Wheeler’s Delayed Choice, Leggett’s Inequalities, Quantum Zeno effect):
    https://docs.google.com/document/d/1G_Fi50ljF5w_XyJHfmSIZsOcPFhgoAZ3PRc_ktY8cFo/edit

    Quantum Zeno effect
    Excerpt: The quantum Zeno effect is,,, an unstable particle, if observed continuously, will never decay.
    http://en.wikipedia.org/wiki/Quantum_Zeno_effect

    The reason why I am fascinated with this Zeno effect in particular is, for one thing, that Entropy is, by a wide margin, the most finely tuned of the initial conditions of the Big Bang:

    The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose
    Excerpt: “The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).”

    How special was the big bang? – Roger Penrose
    Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123.
    (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)

    For another thing, it is interesting to note just how foundational entropy is in its explanatory power:

    Shining Light on Dark Energy – October 21, 2012
    Excerpt: It (Entropy) explains time; it explains every possible action in the universe;,,
    Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy. ,,,
    The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe —,,,
    http://crev.info/2012/10/shini.....rk-energy/

    In fact, entropy is also the primary reason why our physical bodies grow old and die,,,

    * 3 new mutations every time a cell divides in your body
    * Average cell of a 15-year-old has up to 6,000 mutations
    * Average cell of a 60-year-old has 40,000 mutations
    Reproductive cells are ‘designed’ so that, early on in development, they are ‘set aside’ and thus do not accumulate mutations as the rest of the cells of our bodies do. Despite this protective barrier against the accumulation of slightly detrimental mutations, we still find that:
    * 60-175 mutations are passed on to each new generation.
    - mutation rates quoted from geneticist Dr. John Sanford

    This following video brings the point personally home to us about the effects of genetic entropy:

    Aging Process – 80 years in 40 seconds – video
    http://www.youtube.com/watch?v=hSdxYmGro_Y

    And yet, to repeat the excerpt,,,

    Quantum Zeno effect
    Excerpt: The quantum Zeno effect is,,, an unstable particle, if observed continuously, will never decay.
    http://en.wikipedia.org/wiki/Quantum_Zeno_effect

    This is just fascinating! Why in blue blazes should conscious observation put a freeze on entropic decay, unless consciousness was/is more foundational to reality than entropy is? And seeing as to how entropy is VERY foundational to reality, I think the implications are fairly obvious:

    Verse and Music:

    Romans 8:18-21
    I consider that our present sufferings are not worth comparing with the glory that will be revealed in us. The creation waits in eager expectation for the sons of God to be revealed. For the creation was subjected to frustration, not by its own choice, but by the will of the one who subjected it, in hope that the creation itself will be liberated from its bondage to decay and brought into the glorious freedom of the children of God.

    Evanescence – The Other Side (Lyric Video)
    http://www.vevo.com/watch/evan.....tantsearch

  30. Kantian Naturalist:

    The reason Churchland thinks that (2) is false is that scientific theories are not based on ordinary beliefs, as myths, fables, and old wives’ tales are, but on highly complex institutions and practices that have taken us the better part of two thousand years to develop.

    Nonsense. If individual belief systems are unreliable from a Darwinist perspective, then so are the institutions and practices that build on and derive from those individual belief systems. Accordingly, the amount of time that it takes for an institution to develop is irrelevant, since it would simply be a case of newer unreliable beliefs being piled on top of older unreliable beliefs.

  31. Kantian Naturalist

    I just re-read Churchland’s response to Plantinga, so I have a better grasp of what he’s doing there.

    Churchland begins by conceding, or appearing to concede, the central point at issue: “Our cognitive mechanisms have been selected for their ability to sustain reproductively successful behaviors, not for their ability to track truth” (136). (I say “appearing to concede” because I think that Churchland’s neurosemantics undermines how damning this is supposed to be, so everything turns on the plausibility of neurosemantics.)

    Having made that (apparent) concession, he nevertheless rejects Plantinga’s inference:

    Plantinga’s argument innocently assumes that the (problematic) “truth-tracking character” of our native cognitive mechanisms is the only possible or available source of rational warrant or justification for evolutionary theory. But it isn’t. Plantinga is ignoring the artificial mechanisms for theory-creation and theory-evaluation embodied in the complex institutions and procedures of modern science. These super-added mechanisms lie mostly outside the biological brain, and they provide a much more creative environment for generating interesting theories, and a much more demanding filter for evaluating them, than a single biological brain could ever provide with its native resources alone. (136-7)

    Churchland then proceeds to list several of the practices and technologies he has in mind, such as double-blind studies, testing for statistical significance, comparing theories against each other, directly comparing predictions with experimental data, and also the extensive augmentation of our sensory modalities with telescopes, microscopes, nucleic-acid sequencers, and radioactive dating.

    The upshot of all this is that we have perfectly good reason to confer more rational warrant upon the artificial “cognitive engine of the Collective Scientific Community” (138) than on what can be produced by “a single individual with his native smarts and sensory equipment” (ibid.).

    So even if individual biological brains are not terribly good at tracking truth, that doesn’t affect the warrant for our best scientific theories, since those theories do not derive their warrant from that source, but rather from the artificial and communal practices of scientific inquiry that have taken us thousands of years to develop, from ancient Greece (and before, no doubt) to the present day.

    I think that Churchland would even be willing to say that evolutionary theory can explain just why it is that our native biological endowments are often not as reliable as we take them to be, and why it took so long for us to develop a system of institutions that can reliably detect and filter out all the ways in which our native cognitive mechanisms fail us (e.g. in judgments of probability, why we’re prone to “the Gambler’s fallacy,” and so on).

    As a pragmatist, Churchland regards human inquiry as both fallible and corrigible — he’s not interested in infallible knowledge, certainty, whatever. So all he needs to do, he thinks, is show that the fallible-but-corrigible structure of inquiry is biologically grounded. And to do that, what we need is (a) an account of how our cognitive mechanisms are generally reliable for getting a partial grasp on objective reality for practical purposes and (b) an account of how we are able to detect and correct cognitive errors. (On this account, Churchland’s “neuropragmatism” turns scientific inquiry itself into a self-correcting, generally reliable mechanism for getting a partial grasp on objective reality for practical purposes.)

    So the question that remains is this: on naturalistic grounds, do we have good reasons to think that cognitive mechanisms are even so much as generally reliable for getting a partial grasp on objective reality for practical purposes? And to that question, Churchland thinks that the answer is an unequivocal “yes”, because of how brains represent the stable and fleeting features of their environments.

  32. Nonsense. The practices and technologies that humans have come up with are of absolutely no use in determining what’s true. So you can take your genetic analysis, and your telescopes and calculus, and shove em you know where. Because the truth is that evolution didn’t happen, and the sun and everything revolves around the earth.

  33. As to trusting what our technologies are telling us about reality:

    ‘Spooky action at a distance’ aboard the ISS – April 9, 2013
    Excerpt: Albert Einstein famously described quantum entanglement as “spooky action at a distance”; however, up until now experiments that examine this peculiar aspect of physics have been limited to relatively small distances on Earth.
    In a new study published today, 9 April, in the Institute of Physics and German Physical Society’s New Journal of Physics, researchers have proposed using the International Space Station (ISS) to test the limits of this “spooky action”,,,
    “According to quantum physics, entanglement is independent of distance. Our proposed Bell-type experiment will show that particles are entangled, over large distances—around 500 km—for the very first time in an experiment,” continued Professor Ursin.
    “Our experiments will also enable us to test potential effects gravity may have on quantum entanglement.”
    http://phys.org/news/2013-04-s.....d-iss.html

    Perhaps Anthony Leggett, who is an atheist who devised the Leggett inequality to try to disprove quantum theory, will finally admit that quantum theory is correct,,

    A team of physicists in Vienna has devised experiments that may answer one of the enduring riddles of science: Do we create the world just by looking at it? – 2008
    Excerpt: Leggett’s theory was more powerful than Bell’s because it required that light’s polarization be measured not just like the second hand on a clock face, but over an entire sphere. In essence, there were an infinite number of clock faces on which the second hand could point. For the experimenters this meant that they had to account for an infinite number of possible measurement settings. So Zeilinger’s group rederived Leggett’s theory for a finite number of measurements. There were certain directions the polarization would more likely face in quantum mechanics. This test was more stringent. In mid-2007 Fedrizzi found that the new realism model was violated by 80 orders of magnitude; the group was even more assured that quantum mechanics was correct.
    Leggett agrees with Zeilinger that realism is wrong in quantum mechanics, but when I asked him whether he now believes in the theory, he answered only “no” before demurring, “I’m in a small minority with that point of view and I wouldn’t stake my life on it.” For Leggett there are still enough loopholes to disbelieve. I asked him what could finally change his mind about quantum mechanics. Without hesitation, he said sending humans into space as detectors to test the theory. In space there is enough distance to exclude communication between the detectors (humans), and the lack of other particles should allow most entangled photons to reach the detectors unimpeded. Plus, each person can decide independently which photon polarizations to measure. If Leggett’s model were contradicted in space, he might believe. When I mentioned this to Prof. Zeilinger he said, “That will happen someday. There is no doubt in my mind. It is just a question of technology.” Alessandro Fedrizzi had already shown me a prototype of a realism experiment he is hoping to send up in a satellite. It’s a heavy, metallic slab the size of a dinner plate.
    http://seedmagazine.com/conten....._tests/P3/

    supplemental note:

    Quantum physics says goodbye to reality – Apr 20, 2007
    Excerpt: They found that, just as in the realizations of Bell’s thought experiment, Leggett’s inequality is violated – thus stressing the quantum-mechanical assertion that reality does not exist when we’re not observing it. “Our study shows that ‘just’ giving up the concept of locality would not be enough to obtain a more complete description of quantum mechanics,” Aspelmeyer told Physics Web. “You would also have to give up certain intuitive features of realism.”
    http://physicsworld.com/cws/article/news/27640

    So lastyearon and KN, do you guys trust your cognitive faculties enough to believe what quantum mechanics is telling us??? :)

    The Mental Universe – Richard Conn Henry – Professor of Physics at Johns Hopkins University
    Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy.
    http://henry.pha.jhu.edu/The.mental.universe.pdf

  34. It’s interesting to watch LYO emote his dissonance over these past weeks. Imagine having mockery as your only means to stave off fears and preserve your interest.

    :|

  35. Plantinga is ignoring the artificial mechanisms for theory-creation and theory-evaluation embodied in the complex institutions and procedures of modern science.

    Even from the naturalist perspective, institutions and practices cannot be “artificial mechanisms.” They would derive from and build on the individual truth-tracking mechanisms and unreliable belief systems that are shaped solely by survival instincts. An amalgamation or accumulation of unreliable beliefs contains no more truth value than a single unreliable belief. The resultant diversity of opinion would simply add confusion to the unreliability.

  36. lastyearon

    Nonsense. The practices and technologies that humans have come up with are of absolutely no use in determining what’s true. So you can take your genetic analysis, and your telescopes and calculus, and shove em you know where. Because the truth is that evolution didn’t happen, and the sun and everything revolves around the earth.

    As is often the case, the logic in your nitwitted parody is so bad that your second non sequitur cancels out the first one, causing you to stumble back onto the truth. Churchland (and Kantian Naturalist) are arguing that unreliable people do not create reliable theories; artificial institutions do. Now here you are admitting that it is, indeed, humans that develop those practices, undermining the very argument you sought to support and never really did follow. Remarkable.

  37. F/N: While the thread is off track a bit, it is an interesting off-track.

    I think the issue that is emerging is that there is a dearth of understanding of an inference-to-best-explanation case. Yes, we do have knowledge, we do reason logically and correctly, and we do have reason to believe we have the ability to access truth, though we sometimes err.

    So, what best explains such? Why?

    If we were designed to do so, that makes a far better sense than expecting a process that rewards mere survival, to produce such an entity as a being capable of abstract, logical, truthful reasoning and knowledge.

    Indeed, it seems the proposed mechanisms of genetic, cultural, and social conditioning would decisively undermine truth-seeking and truth-tracking capacity. And that has been a very common argument by various types of materialists across recent decades: Marxists seeing bourgeois-institution-induced false consciousness (so what about your own social-cultural class background, Uncle Charlie?), Freudians seeing critics as suffering from overly strict potty training (and Uncle Sig, what was your own like . . . ?), behaviourists seeing us as glorified rats trapped in operantly conditioning mazes (and Uncle Burrus, how’s your part of the maze?), and so forth.

    When I hear today’s Dawkinsians decrying religion and religious upbringing as inducing borderline lunacy and worse, I wonder what the implications of their favoured form of so-called “free thought” are, on the same terms. Especially when I see the sort of scientism that fails to see that the notion that “science is the only begetter of truth” is a self-refuting philosophical claim.

    Likewise, Crick’s suggestion that “you’re nothing but a pack of neurons” in a context that asserted that “your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules” invited Johnson’s retort that such claims implied a quiet excepting of the theorist who made such assertions.

    Then, some days back, someone was a bit heated in retort to my pointing out that the claim that there is an unbridgeable gulf between the world of our appearances and the world of things in themselves, is itself an implicit claim to know about the external world, which was what was being denied.

    And, as my home discipline has shown by having two major revolutions in 250 years, scientific theories are often more about empirical reliability of useful models than actually capturing the truth about the world.

    In short, Plantinga is not exactly writing in some weird abstract vacuum, he could easily have named some names and pulled skeletons out of a few closets.

    His challenge to account for reason and its deliverances per the surrounding evolutionary materialist frame, stands as significant. (My own 101 notes are here on for those who want a short and dirty summary, and my similar 101 on a better base for building worldviews is here on.)

    KF

  38. Kantian Naturalist

    Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)

    So the Hard Questions here are:

    (1) just how much reliability must be assigned to our ‘natural’ cognitive mechanisms, in order for them to be so much as capable of generating the self-correcting process of empirical inquiry, of which the institutions and practices of modern science are a significant example?

    (2) is that reliability consistent with what we know from evolutionary theory and cognitive neuroscience?

    Notice, by the way, that Plantinga’s key premise isn’t “naturalism and evolution together entail that our cognitive mechanisms are unreliable” but rather “naturalism and evolution together entail that we cannot ascertain to what degree our cognitive mechanisms are reliable” — as he puts it, given N&E, the probability of R is either low or inscrutable.

    In mulling over this conversation earlier, it struck me that there are two major points of contention between Plantinga and Churchland, one in epistemology and one in semantics.

    Epistemology: foundationalism or anti-foundationalism? Plantinga is a committed foundationalist (I think — but correct me if I’m wrong about this) — that is, he thinks that there’s a stock of “properly basic beliefs,” which cannot be argued for but which it would be irrational to reject (the existence of other minds is one of his examples — we can’t justify our belief in other minds, but it would be irrational to just flat-out reject it). And other beliefs, such as our scientific beliefs, rest on this foundation of properly basic beliefs. They aren’t indubitable, à la Descartes — they are open to skeptical worries — but it wouldn’t make any sense to entertain those worries. (I believe that this is a rather deep and interesting point that Plantinga gets from Reid’s response to Hume.)

    By contrast, Churchland is an anti-foundationalist, following in the model of Hegel, Peirce, and Sellars. (Churchland in fact did his undergrad senior thesis on Peirce and wrote his Ph.D. under Sellars.) Here’s how Sellars puts the really key point:

    If I reject the framework of traditional empiricism, it is not because I want to say that empirical knowledge has no foundation. For to put it this way is to suggest that it is really “empirical knowledge so-called,” and to put it in a box with rumors and hoaxes. There is clearly some point to the picture of human knowledge as resting on a level of propositions — observation reports — which do not rest on other propositions in the same way as other propositions rest on them. On the other hand, I do wish to insist that the metaphor of “foundation” is misleading in that it keeps us from seeing that if there is a logical dimension in which other empirical propositions rest on observation reports, there is another logical dimension in which the latter rest on the former.

    Above all, the picture is misleading because of its static character. One seems forced to choose between the picture of an elephant which rests on a tortoise (What supports the tortoise?) and the picture of a great Hegelian serpent of knowledge with its tail in its mouth (Where does it begin?). Neither will do. For empirical knowledge, like its sophisticated extension, science, is rational, not because it has a foundation but because it is a self-correcting enterprise which can put any claim in jeopardy, though not all at once.

    (Churchland is also influenced by Quine’s anti-foundationalist “web of belief,” though I have reasons of my own for preferring Sellars over Quine.)

    Semantics: is semantic content a ‘target’ of natural selection? Well, that depends on just what semantic content is! Plantinga assumes (not without justification) that the bearers of semantic content are beliefs, and that since the ‘pairings’ between beliefs and behavior are (conceivably) arbitrary, and only behaviors can be selected against, beliefs are invisible to selection.

    By contrast — and this is actually what is most radical in Churchland’s view, I think, and a position that has considerable merit — Churchland thinks that semantic contents are not identifiable with beliefs, but with patterns of neuronal activity (modeled in connectionist networks). There is, he thinks, a kind of semantic content that is evolutionarily more primitive than the distinctive kind of content we find in language (“linguaformal content,” in his terms).

    He’s quite happy to concede that notions like “belief” (or “desire”) only make sense when talking about linguistic animals; he insists only that there’s a kind of semantic content in non-linguistic animals, the content involved in representing features of the environment, which is just what brains do.

    Now, since this kind of semantic content is non-propositional, it cannot be assigned truth-values. But it can still be regarded as reliable or unreliable by other criteria. So Churchland’s neurosemantics really amounts to a rejection of Plantinga’s initial assumption: that cognitive reliability is measured in terms of producing mostly true beliefs. Instead, Churchland proposes to measure cognitive reliability in terms of producing generally accurate or good-enough maps of the environment. (Maps, of course, that are not used by the organism but which the animal’s behavior instantiates.) But since semantic content is, for Churchland, readily identifiable with patterns of neuronal activity, it can be a target of selection.

  39. KN,

    Churchland then proceeds to list several of the practices and technologies he has in mind, such as double-blind studies, testing for statistical significance, comparing theories against each other, directly comparing predictions with experimental data, and also the extensive augmentation of our sensory modalities with telescopes, microscopes, nucleic-acid sequencers, and radioactive dating.

    And here’s the problem: double-blind studies, tests for statistical significance, theory comparisons, comparing predictions to data, etc., all are or involve human beings making judgments, stating beliefs, providing arguments, etc. It’s happening at step after step of the described scientific process. Theories do not pop into existence of their own accord – these are developed by humans. What counts as ‘data’ is not granted to us by the Magical Science Golem – this is decided by humans. Etc., etc.

    Again: I gave the example of the irrational, crazy man programming a computer. Can we suddenly trust the results of the computer, just because it IS a computer (It’s complex!), despite it being built and programmed by a lunatic? If not, well, then you begin to see why Churchland’s response isn’t going to work.

    Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)

    No, that’s going to need to be unpacked.

    If Churchland’s argument relies on the assumption that there are ‘tutored cognitive endowments’ – “someone went off and became a scientist, or did scientific experiments, or…” etc. – then we’re going to need an argument for what it is about these practices that (given the lack of reliability about our cognitive processes otherwise) makes them immune to the problems Plantinga has argued affect us generally. Note that it’s not going to do any good here to list off all the practices of science that you have confidence in, because each and every one of those practices is going to be fallible in the sense that someone can always draw the wrong conclusion either from them, or in the process of performing them.

    Re: Churchland’s talk about ‘maps’ versus ‘beliefs’, I don’t think this is going to help out at all on this subject. Arguably, Plantinga’s examples – the person who has irrational beliefs (or thoughts, if you like) but nevertheless engages in behavior that promotes survival – has an ‘accurate map’ insofar as it’s a map conducive to survival. If we live in a world of ‘accurate maps’ yet the reliability of our thoughts is low or inscrutable, I think you’d see why this doesn’t exactly threaten the EAAN. Trying to argue that the reliability of our thoughts is not low or inscrutable on the grounds that reliability is measured in terms of a God’s eye view of our actions and whether or not they’re conducive to survival, *regardless of the accuracy of the particular thoughts or beliefs we have*, burns this as a response to Plantinga.

    There’s another problem. The thrust of Churchland’s response so far focuses on trying to insist that our belief that evolution is true can be justified on E&N, thanks to the practices of science. I’ve already argued/pointed out why I think this is going to fail, but beyond that, metaphysical and philosophical views are going to fall outside the scope of science. A reply which results in the conclusion of ‘given E&N, our beliefs about evolutionary theory may be reliable, but our beliefs about naturalism are not’ would be a pyrrhic victory to say the least.

  40. Kantian Naturalist

    Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)

    Go ahead and humor me with an argument. If human cognition is unreliable as a truth tracker, how can it distinguish relevant data from irrelevant data, interpret the data rationally, or build a theory for which there is warrant?

  41.

    It’s interesting to watch LYO emote his dissonance over these past weeks.

    Actually, some of us find it boring.

  42. Kantian Naturalist

    Churchland seems committed to a fairly interesting and provocative assumption, which I confess I might not have even noticed if I hadn’t been reading a lot of Hegel and C. I. Lewis lately. (Davidson makes a similar point in his triangulation argument, but I think the basic points come across without the form in which Davidson puts them.)

    The assumption is this: objectivity requires intersubjectivity. Put otherwise, communal social interactions can do something that merely individual cognizers cannot do: have warrant that their cognitive activities are bearing on objective reality.

    [Note: I am using "objective" to mean "independent of any particular cognitive subject", in contrast with "absolute", which I would use to mean "independent of all particular cognitive subjects". So in talking of our cognitive access to objective reality, I am not talking of our cognitive access to the God's-eye view, but of our access to how things are regardless of how any particular cognitive subject takes them to be. Thus construed, the converse of "objective" is "subjective", and the converse of "absolute" is "relative".]

    Now, why might this assumption seem reasonable? It seems reasonable because a community of cognitive subjects is a plurality that is able to share perspectives. So no individual cognitive subject is enclosed within its own perspective. (Think of Leibniz’s monads.) Cognitive subjects are differentiated by virtue of embodiment, spatio-temporal location, and cognitive capacities (including, importantly, perceptual capacities). Cognitive subjects who are able to exchange their perspectives through a shared language — discursive cognitive subjects — are thereby able to coordinate their orientations on objects and properties. (Non-discursive cognitive subjects do this as well — e.g. a wolf-pack cooperating on a hunt — but the kinds of social activities are much more limited, partly because of the kinds of cognitive mechanisms each animal has, and because they can transmit much less information to each other.)

    (Compare: “Did you hear that?” “Yeah, I did!” “What was that?” with “Did you hear that?” “No” “Oh, I thought I heard something.”)

    Individual cognitive subjects may indeed have some cognitive grasp of objective reality, but communal interactions (esp. the distinctive kind mediated by a shared language) make it possible for them to have some warrant that they have such a grasp. So having more than one cognitive subject is not merely additive, as StephenB seemed to suggest, but rather results in a radical transformation of one’s epistemic situation. (Including, it should be noted, one’s epistemic situation with regard to one’s self — the very complex kind of self-consciousness that we enjoy is itself mediated by a long and complex history of social transactions that begin in infancy.)

    And this process of mutual adjustment and coordination is not just synchronic, but also diachronic — we can and do learn from the insights and errors of previous generations, seeing new ways of improving upon the former and avoiding the latter, in light of our own uptake of objective reality.

  43. As to KN’s comment:

    Churchland’s claim, to be precise, is that the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain. (I take it that that’s obviously true and invites no further argument.)

    empirical evidence just ain’t your friend KN:

    Children Act Like Scientists – October 1, 2012
    Excerpt: New theoretical ideas and empirical research show that very young children’s learning and thinking are strikingly similar to much learning and thinking in science. Preschoolers test hypotheses against data and make causal inferences; they learn from statistics and informal experimentation, and from watching and listening to others. The mathematical framework of probabilistic models and Bayesian inference can describe this learning in precise ways.
    http://crev.info/2012/10/child.....cientists/
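    The Bayesian learning the excerpt mentions can be sketched very simply. Below is a minimal, purely illustrative example (the hypotheses and numbers are made up, not taken from the research described): a learner weighs two candidate causes of an observation and updates by Bayes’ rule.

```python
# Minimal Bayesian update: a learner weighs two hypotheses about a cause
# after observing an effect. All numbers are illustrative only.

def bayes_update(priors, likelihoods):
    """Return the posterior P(H|E) for each hypothesis H,
    given priors P(H) and likelihoods P(E|H)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())  # P(E), by total probability
    return {h: joint[h] / evidence for h in joint}

# Hypothetical hypotheses: the toy lights up because of block A or block B.
priors = {"A": 0.5, "B": 0.5}
# Observation: the toy lit up; block A makes it light more often than B.
likelihoods = {"A": 0.8, "B": 0.2}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # block A is now the better-supported cause
```

    The point of the sketch is only that “learning from statistics” can be stated precisely: evidence reweights hypotheses in proportion to how well each predicts it.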

  44. Kantian Naturalist

    Churchland (again):

    . . . the dominant scheme of representation in biological creatures generally, from the Ordovician to the present, is the internal map of a range of possible types of sensorily accessible environmental features. Not a sentence, or a system of them, but a map. Now a map, of course, achieves its representational successes by displaying some sort of homomorphism between its own internal structure and the structure of the objective domain it purports to portray. And unlike the strictly binary nature of sentential success (a sentence is either true or it’s false), maps can display many different degrees of success and failure, and can do so in many distinct dimensions of possible ‘faithfulness’, some of which will be relevant to the creature’s practical (and reproductive) success, and many of which will not.

    In other words, what Churchland calls “synaptically-encoded feature-space maps” carry out the Hard Work of representing the environment. Beliefs are, on his view, late-comers to the game — they only arise with a shared language, and animals were reliably portraying their environments for hundreds of millions of years before that happy event.

    Now, one might ask, “could a discursive animal have generally reliable synaptically-encoded feature-space maps and yet have mostly false beliefs?” This is basically how Nullasalus takes up Plantinga’s “Paul the hominid” case:

    Arguably, Plantinga’s examples – the person who has irrational beliefs (or thoughts, if you like) but nevertheless engages in behavior that promotes survival – has an ‘accurate map’ insofar as it’s a map conducive to survival. If we live in a world of ‘accurate maps’ yet the reliability of our thoughts is low or inscrutable, I think you’d see why this doesn’t exactly threaten the EAAN.

    (I’ve been told, though I don’t know this for sure, that Plantinga calls his hominid “Paul” as a way of poking fun at Churchland.)

    Now, Plantinga himself formulates the “Paul the hominid” case in terms of external behavior — Paul does, after all, run away from the tiger — rather than in terms of what’s going on in Paul’s neurobiological mechanisms. In terms of how Paul represents the situation, Plantinga notes that it is entirely conceivable that Paul’s psychological representations — what Paul believes to be the case — could be wildly off from what is the case. But notice the assumption: that when we talk about semantic content, that’s got to be in terms of beliefs and desires. And that’s the assumption that Churchland rejects, because on his view, Paul’s neurobiological processes are his semantic contents.

    So, to Nullasalus’ implicit challenge, “could a discursive animal have generally reliable synaptically-encoded feature-space maps and yet have mostly false beliefs?”, the answer it seems to me has to be “No”.

    I say that because there cannot be a total and systematic discrepancy between “having generally reliable synaptically-encoded feature-space maps” and “having mostly true beliefs”, and that is because “having generally reliable synaptically-encoded feature-space maps” and “having mostly true beliefs” are really just two different ways of talking about the semantic content of discursive cognitive subjects. (For non-discursive cognitive subjects, we have only the first way.)

    (Having written this up, I do worry that I’m making a slight conflation of Churchland and Davidson — I’ll run this past some friends and see what they think — but it’s good enough for the time being, I think.)

  45. Kantian Naturalist

    BornAgain77, that’s an interesting article — and very much in line with one of my philosophical heroes, John Dewey — so thank you! But it doesn’t really touch on the point I was making, which is about warrant. It’s a nice point that children can engage in the epistemic habits of scientists (does our school system destroy that habit, I wonder?), but the rational warrant of scientific theories lies in how we test theories, not in how we generate them. The original article is by Allison Gopnik — I’ll check it out!

    (Interestingly, Gopnik also wrote The Philosophical Baby: What Children’s Minds Tell Us About Truth, Love, and the Meaning of Life — I vaguely recall that a friend of mine read it last year, when he became a father. I’ll see what he thought of it.)

  46. KN,

    In terms of how Paul represents the situation, Plantinga notes that it is entirely conceivable that Paul’s psychological representations — what Paul believes to be the case — could be wildly off from what is the case. But notice the assumption: that when we talk about semantic content, that’s got to be in terms of beliefs and desires. And that’s the assumption that Churchland rejects, because on his view, Paul’s neurobiological processes are his semantic contents.

    And this just opens you up to the exact difficulty that I mentioned. You say further:

    So, to Nullasalus’ implicit challenge, “could a discursive animal have generally reliable synaptically-encoded feature-space maps and yet have mostly false beliefs?”, the answer it seems to me has to be “No”.

    I say that because there cannot be a total and systematic discrepancy between “having generally reliable synaptically-encoded feature-space maps” and “having mostly true beliefs”, and that is because “having generally reliable synaptically-encoded feature-space maps” and “having mostly true beliefs” are really just two different ways of talking about the semantic content of discursive cognitive subjects.

    That’s an equivocation on the word ‘reliable’. From the evolutionary perspective that Churchland is offering up – at least as you describe it – a ‘reliable’ “synaptically-encoded feature-space map” is measured in terms of fitness and survival. Is the population of the species with such and such a map surviving and thriving more than the nearest alternative? Yes? Well, then it’s a reliable map.

    But that’s not the sort of reliability that Plantinga is casting doubt on. In fact, Plantinga seems to grant that you can have that kind of reliability in E&N – hence his examples of creatures that, despite having false beliefs, or irrational beliefs, or (I would personally add) even no beliefs whatsoever, you can still have behaviors and actions that are conducive to survival. In that sense, they are reliable. It just happens to not be a reliability anyone is concerned about.

    You say that the answer to you seems to be no. But frankly, the weight of evidence is on my side here: I can point at no shortage of creatures that engage in behavior which is, on the whole, individually or collectively beneficial to the survival of their population – despite them having no beliefs whatsoever, possibly no conscious awareness to speak of. (Do bacteria have beliefs? Etc.)

    My implicit challenge wasn’t really a challenge: it was a statement, one that I can easily defend, and which I just provided some more evidence for.

    The best way you can cash out what you’re saying here, on behalf of Churchland, is that Churchland is an eliminativist about beliefs to begin with – so Plantinga’s charge doesn’t even get off the ground. Guys like Alex Rosenberg play this kind of card too, and we’ve kicked it around here before. To say that’s not the most compelling response is putting it lightly – go ahead, try to make the argument that everyone here thinks they have beliefs, but they’re actually mistaken. (I suppose, they can’t be mistaken, because that would require they had a belief, therefore…)

    But once you’re accepting the existence of beliefs to begin with, then the ‘reliability’ cashes out the way I’ve noted – and it’s not a concern to Plantinga’s EAAN. After all, the EAAN does not argue that we couldn’t *survive* or even thrive in a reproductive sense given E&N. If anything, it assumes the opposite.

  47. Kantian Naturalist

    Interesting response, Nullasalus! I’ll have to think more on this and respond later — probably tomorrow, because my brain is cooked for the night.

    Let me say this, though: I do reject Churchland’s eliminative materialism, but for the following reason — I don’t think that “folk psychology” (propositional attitude ascriptions, etc.) is best understood as an empirical theory, so it’s not something that could be replaced by a better empirical theory. But I do acknowledge that that puts me in a more difficult bind than Churchland’s — I have to confront questions that he can simply evade.

    (As for Rosenberg — uggh!)

  48. Denialism as usual KN,,,

    you cite a specific claim that “the warrant of scientific theories does not depend on the reliability of any untutored cognitive endowments of any individual human brain”

    And when shown that ‘scientific cognitive endowments’ are already present (i.e. made in the image of God), you deny it is relevant. How convenient to make up the rules as you go, just so as to preserve your atheistic belief system.

    Science practiced KN style:
    http://conversationsofchange.c.....15;307.jpg

  49. No rush, KN.

    Pardon me if my tone is aggressive. I’m not trying to be an ass – I just have never learned how to be anything but direct and forceful in some contexts. You’re one of the more pleasant skeptics around here in a number of ways.

  50. Kantian Naturalist


    but the rational warrant of scientific theories lies in how we test theories, not in how we generate them!

    Are we supposed to forget that the main element of your original claim, which I refuted, was that, in the case of unreliable human cognition, we can confer more warrant on the Collective Scientific Community than on a single individual?

  51. H’mm:

    Both these threads seem to have converged on a similar somewhat tangential focus.

    Let me post here too what I just noted to BD in ID Founds 17, noting that scientism underwritten by evolutionary materialist ideology dressed up in the lab coat and dominating science, in spite of serious issues of question begging, epistemological breakdown and more, is a context in which all the discussion proceeds.

    So, let us be bold enough to ask, whether we are today living in an evolutionary materialist cave of question-begging shadow shows presented by devotees of scientism dressed up in the holy lab coat, in the name of Big-S Science (how dare you doubt or question . . . ):

    ____________

    >> In the universe of discourse we must address, the question of grounding the human mind as a reasonably effective cognitive system does arise. For, we have a persistent evolutionary materialism that seeks to pin mind down to brain and CNS in action.

    In that setting, the following from Leibniz’s Monadology, i.e. the analogy of the mill [HT: Frosty], is quite apt:

    14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . .

    16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . .

    17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.

    We may bring this up to date by making reference to more modern views of elements and atoms, through an example from chemistry. For instance, once we understand that ions may form and can pack themselves into a crystal, we can see how salts with their distinct physical and chemical properties emerge from atoms like Na and Cl, etc. per natural regularities (and, of course, how the compounds so formed may be destroyed by breaking apart their constituents!). However, the real issue evolutionary materialists face is how to get to mental properties that accurately and intelligibly address and bridge the external world and the inner world of ideas. This, relative to a worldview that accepts only physical components and must therefore arrive at other things by composition of elementary material components and their interactions per the natural regularities and chance processes of our observed cosmos. Now, obviously, if the view is true, it will be possible; but if it is false, then it may overlook other possible elementary constituents of reality and their inner properties. Which is precisely what Leibniz was getting at.

    Moreover, as C S Lewis aptly put it (cf. Reppert’s discussion here), we can see that the physical relationship between cause and effect is utterly distinct from the conceptual and logical one between ground and consequent, and thus we have no good reason to trust the deliverances of the first to have anything credible to say about the second. Or, as Reppert aptly brings out:

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    That is the naturalist’s dilemma: he must use his mind to reason and must trust his capability to perceive accurately, and to know, but in his scheme of things, the ground on which such must stand is undercut by the frame of the system itself. His scheme becomes self-referentially incoherent and, I daresay, absurd. In many ways and by many paths, such charges are often projected onto opponents of the schemes of the likes of a Crick or a Marx or a Freud or a Skinner or a Dawkins, etc.; but on reflection on the self-referentiality, it is apparent that the same knife cuts both ways.

    Coming back to your own view so far as I have made it out, there is a similar position. For, your view reduces the world of our experiences of external reality to in effect a Plato’s Cave delusion.

    No scheme that does that escapes self-referentiality and an explosively self-defeating spiral of challenges: why should we accept the credibility of perceptions, beliefs and arguments at level n +1 if those of levels 1 to n have fallen to the acid of doubt and dismissal?

    Instead, it seems much wiser to me to accept that the consensus of our senses, experiences and insights is capturing something real, however prone we are to err. Where indeed, that fact of reality itself turns into a pivot, a point of undeniably certain truth and warranted, self-evident knowledge [to deny that error exists entails that error exists]. Thus schemes of thought that deny external reality as what is there to be in error about, or deny truth as that which accurately refers to reality, or dismiss knowledge as that which warrants beliefs concerning reality (in some cases to undeniable certainty), etc., all fail. In particular, the notion that we secure knowledge merely by constructing a system dominated by a priori materialism, often wearing the lab coat of scientism, with its claim that ideologically materialist science embraces and reveals knowledge whilst metaphysics, epistemology, philosophy and “theology” can be derided and dismissed across the board as outdated and dubious speculations, ends up in question-begging and self-referential incoherence. The artificial construct, institutional science dominated by scientism and unexamined materialism (let’s not fool ourselves), stands on a fatally cracked foundation.

    And, given just how widespread such schemes are in our day, that analysis on the implications of the undeniable reality that error exists therefore cuts a wide mowing swath indeed across the contemporary scene in the marketplace of ideas and values.

    Back to basics — first principles of right reason, self evident truths, the possibility of real knowledge etc — and a much more serious respect for old fashioned common good sense.

    It is time to notice that the chains of mental slavery have been snapped, and that we are no longer tied to the post in the cave of shifting shadow shows and manipulation power games that stage these. So, let us step up into the sunshine, and step out of the shade.

    For, this is one time that we can get a breakthrough to truth and to liberation thereby: you shall know the Truth, and the Truth shall make ye free. But, that requires understanding why the same Worthy, in that same context, warned his interlocutors, that they were in a situation where because he spoke the truth, they were unable to hear and understand what he had to say; indeed, were violently inclined to object and oppose.

    As he said in his famous Sermon on the Mount, the eyes are the lamp of the body, so if our eyes are good, we are full of light. But if they are bad, so bad that what we think is light is in fact darkness, how great is our darkness.

    I think Jesus knew exactly what the Greek thought on enlightenment so decisively shaped by Plato’s parable of the Cave was all about, and the spreading influence of such ideas, e.g. from Sepphoris, a major Gentile centre in Galilee. So, he spoke at several levels, some corrective to Hebrew caves, and some to Greek, Roman and wider gentile ones.

    The gospel is light.

    Our problem is, that light has come but too often we choose darkness instead of light, as our deeds are evil and for fear that our addiction to evil will be exposed. Indeed, we are often confused by light, and even angered by it. Sometimes to the point of murderous rage.

    As, happened to him.

    But, that was Friday, Sunday was a-coming.

    Sunday has come, with the duly prophesied resurrection power (of which we have good warrant), so let us be as the one Jesus spoke of who lives by the truth — yes, in the teeth of a day that derides and dismisses truth itself — and so will walk into the light so that it may be manifest that what he does is done through the grace and redemption of God.

    And, so, let us restore our civilisation to light, rather than surrendering it to the ever advancing darkness. >>

    ______________

    I trust this will be helpful.

    KF

  52. Kantian Naturalist

    Nullasalus,

    First, let me assure you that I don’t find your tone aggressive at all — critical, yes, but by no means aggressive. In fact, I quite enjoy our conversations, and I get a lot out of them. Now, to work!

    That’s an equivocation on the word ‘reliable’. From the evolutionary perspective that Churchland is offering up – at least as you describe it – a ‘reliable’ “synaptically-encoded feature-space map” is measured in terms of fitness and survival. Is the population of the species with such and such a map surviving and thriving more than the nearest alternative? Yes? Well, then it’s a reliable map.

    I don’t think I’m equivocating on “reliable,” because we can distinguish between what constitutes a reliable cognitive map and the usual consequences of a reliable map. A set of neurobiological processes is functioning as a reliable map if there is a homomorphism between those processes and some part of the environment.

    Take, for example, color. On the neurological side, there is the space of all possible colors that are humanly perceivable. On the physical side, there is the range of electromagnetic frequencies to which our retinas are sensitive. And there is a homomorphism between them, as mediated by the kinds of cones in our retinas, how those cells send information to each other and to other parts of the brain, and so forth. This is not a first-order resemblance, as with Locke — each bit of semantic content stands in some relation to some bit of external reality — but rather a second-order resemblance — the system of relations at the neurobiological level stands in a homomorphic relation to the system of relations at the physical level.
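    The idea of a second-order resemblance can be made concrete with a toy sketch (my illustration, with invented values — not Churchland’s actual model): a mapping counts as homomorphic when it preserves the relations among the items, even though no individual code resembles the thing it maps.

```python
# Toy illustration of a homomorphic (structure-preserving) map.
# Hypothetical values: the "neural" hue codes do not resemble the
# physical wavelengths at all, but the ordering relation among
# wavelengths carries over to the codes -- a second-order resemblance.

wavelength_nm = [450, 520, 580, 620, 700]   # physical domain (blue .. red)
hue_code = [3, 17, 42, 77, 96]              # arbitrary internal codes

mapping = dict(zip(wavelength_nm, hue_code))

def preserves_order(xs, f):
    """True if x < y in the source domain implies f[x] < f[y] in the target."""
    return all(f[x] < f[y]
               for i, x in enumerate(xs)
               for y in xs[i + 1:])

print(preserves_order(wavelength_nm, mapping))  # True: the relation is preserved
```

    The point of the sketch is only that “resemblance” here lives at the level of the relation between relations, not between any code and any wavelength taken singly.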

    I’ve gone into this point in order to stress the key idea behind Churchland’s “neurosemantics” (as he calls it): it’s not that we eliminate semantic content, but that we use neurobiology to construct a better theory of what semantic content really is.

    Now, Churchland does, of course, think that animals that can reliably map their environments will tend to leave more offspring than those that cannot — because reliably mapping the environment allows the organism to coordinate its perceptual ‘input’ and motor ‘output’, and so better accomplish all of its practical activities, including reproduction. What gets mapped, and how, depends on the particular environment, the kind of organism, and their mode of interaction. Oysters map their environment far differently from lobsters or tigers.

    In other words, the kind of reliability that Churchland is talking about is different from the kind of reliability that Plantinga concedes. The kind of reliability that Plantinga concedes seems to me to be a reliability constituted by reproductive success, aka staving off extinction one generation at a time. That’s “purely external,” so to speak — semantic content doesn’t matter, or seems not to. Whereas the kind of reliability that Churchland is talking about is a reliability of semantic content — it’s just non-propositional semantic content.

    Put this way, we can see Churchland as rejecting Plantinga’s implicit dichotomy between ‘external’ behaviors and ‘internal’ beliefs. There’s a third category: non-propositional semantic contents that are realized as synaptically-encoded feature-space maps of the motivationally salient environment and that are causally efficacious in coordinating perceptual and motor activity. So this allows Churchland to put semantic content back in the causal nexus, and as such, it can be subject to selective forces.

    Two further questions: (1) do all organisms reliably map their environments? and (2) what about discursive semantic contents — beliefs and desires — where do they fit? (And what, after all, is there to say about truth and justification?)

    On the first question, I don’t know what Churchland would say, but I myself would say that an absolute minimal requirement for cognition (qua reliable mapping of some environmental domain) is that there be intermediary neurons between the sensory receptors and the motor effectors. So I wouldn’t say that bacteria or even complex eukaryotes would count as cognitive subjects. (Evan Thompson would disagree with me, and there are molecular biologists who think that even molecules are cognitive on some level. I confess that I find that view utterly baffling.)

    I’ll return later on today to say something more about the relation between semantic content qua reliable mapping and semantic content qua propositional attitudes.

  53. Kantian Naturalist

    In re: StephenB @ 50:

    Are we supposed to forget that the main element of your original claim, which I refuted, was that, in the case of unreliable human cognition, we can confer more warrant on the Collective Scientific Community than on a single individual?

    In response to your skeptical challenge @ 30 (repeated @ 35 and 40), I responded at my 42, where I gave a sketch as to why we have good reasons to believe that intersubjective or communal cognitive achievements are more reliable than those of individual cognitive subjects. I’d like to see a response from you before I’m convinced that I’ve been refuted.

  54. Kantian Naturalist

    I’d like to see a response from you before I’m convinced that I’ve been refuted.

    OK.

    Your original point was that even if the truth-tracking character of our native cognitive mechanisms is unreliable, artificial mechanisms in the form of institutions and practices can provide warrant for scientific theories. Collective cognition can, one gathers, compensate for the lack found in individual cognition.

    As I pointed out, this is logically impossible. The reliability of Collective cognition builds on the reliability of individual cognition and cannot be separated from it. In keeping with that point, institutions and practices do not create anything or evaluate anything. On the contrary, they are the thing being created by the people who bring them into being.

    At 42, you write this:

    Put otherwise, all that a good naturalist (like Churchland) need be committed to is that our ordinary belief-formation mechanisms are generally reliable about some things, and that scientific procedures are highly artificial (so not “natural”, in one sense) but highly reliable techniques for arriving at much more reliable (though often counter-intuitive) beliefs.

    You seem to be confusing the idea of “perfect” knowledge, which no one has about anything, with “reliable” knowledge, which all rational people have in the context of distinguishing what is true from what is false. In a naturalistic, neo-Darwinian framework, there is no reason to accept (and every reason to reject) the idea that any beliefs at all are reliable.

    As I pointed out @30,

    “If individual belief systems are unreliable from a Darwinist perspective, then so are the institutions and practices that build on and derive from those individual belief systems. Accordingly, the amount of time that it takes for an institution to develop is irrelevant since it would simply be the case of newer unreliable beliefs being piled on top of older unreliable beliefs.”

    Your comment @42 does not address the problem. You write:

    Individual cognitive subjects may indeed have some cognitive grasp of objective reality, but communal interactions (esp. the distinctive kind mediated by a shared language) make it possible for them to have some warrant that they have such a grasp.

    Again, with respect to the metaphysical principles that guide science, you are confusing perfect knowledge with reliable knowledge. Our grasp of reality (knowing things in themselves as they are) is either reliable or it isn’t; there is no middle ground. We either know an apple “as an apple” or we do not. We either know that it is not a banana, or we do not. Reason’s rules are either true or they are false, but if they are false, then there is no such thing as true and false, meaning there is no rationality.

    Communal interactions involving unreliable beliefs and unreliable evaluations about beliefs cannot provide any warrant for true beliefs. Among other things, the emerging communal system of checks and balances that is supposed to do the testing is, itself, built on the unreliable belief systems of individuals and cannot, therefore, be trusted. The evaluative mechanism would be no more reliable than the beliefs that are being evaluated and no progress would be possible. A million instances of unreliable input cannot generate one reliable belief. The whole idea is preposterous.

    There is such a thing as “synergy” or the “assembly effect bonus,” which occurs when goal-oriented people forge a consensus in a spirit of true dialogue. But that dynamic is built on rational interaction, which, in turn, is built on reason’s rules, which, in turn, constitute the basic reliable beliefs by which all other beliefs are evaluated. Because you deny reason’s rules, you have no rational standard for evaluating any belief system. On the contrary, you build your concept of social interaction on the absurd idea that rationality is determined by communal norms, which are always changing and, therefore, useless as a standard for meaningful dialogue.

    So having more than one cognitive subject is not merely additive, as StephenB seemed to suggest, but rather results in a radical transformation of one’s epistemic situation.

    This is a good example of a “poof–there it is” argument. There is not (nor could there ever be) any mechanism by which an amalgamation or accumulation of unreliable beliefs could be transformed into a true belief. Without a pre-existing self-evident truth, such as the Law of Non-Contradiction serving as a rational base, there is no way to separate true beliefs from false beliefs. If a community of naturalists denies that truth, social interaction and time will not “transform” them into rational people. Here is the sociological and anthropological fact: A community of rational people reinforces rationality; a community of irrational people reinforces irrationality. As a member of the community of naturalists, you promote irrationality, albeit in a refreshingly congenial way.

  55. KN,

    I don’t think I’m equivocating on “reliable,” because we can distinguish between what constitutes a reliable cognitive map and the usual consequences of a reliable map. A set of neurobiological processes is functioning as a reliable map if there is a homomorphism between those processes and some part of the environment.

    a second-order resemblance — the system of relations at the neurobiological level stands in a homomorphic relation to the system of relations at the physical level.

    Here’s another problem: when you talk about a ‘homomorphism between those processes and some part of the environment’, you’re off into intentionality discussions. Both the first order and the second order resemblance that you’re talking about here can’t be intrinsic by Churchland’s view – they would have to be derived. But a derived relation wouldn’t be of use anyway in this context, at least not without tracing things back to the intrinsic – and Churchland, as far as I know, will not go for intrinsic meaning.

    I’ve gone into this point in order to stress the key idea behind Churchland’s “neurosemantics” (as he calls it): it’s not that we eliminate semantic content, but that we use neurobiology to construct a better theory of what semantic content really is.

    I’ve seen these kinds of responses before, and they’ve never been compelling. The difference between a reduction and an elimination is, at times, very thin. An eliminative materialist can deny they’re eliminative about anything and argue that they’re simply trying to show everyone what the mental ‘really is’, and it turns out that the mental is nothing but the mechanistic physical.

    Now, Churchland does, of course, think that animals that can reliably map their environments will tend to leave more offspring than those that cannot — because reliably mapping the environment allows the organism to coordinate its perceptual ‘input’ and motor ‘output’, and so better accomplish all of its practical activities, including reproduction. What gets mapped, and how, depends on the particular environment, the kind of organism, and their mode of interaction. Oysters map their environment far differently from lobsters or tigers.

    In other words, the kind of reliability that Churchland is talking about is different from the kind of reliability that Plantinga concedes. The kind of reliability that Plantinga concedes seems to me to be a reliability constituted by reproductive success, aka staving off extinction one generation at a time. That’s “purely external,” so to speak — semantic content doesn’t matter, or seems not to. Whereas the kind of reliability that Churchland is talking about is a reliability of semantic content — it’s just non-propositional semantic content.

    Again, whether we’re talking about maps or beliefs, we’re going to have to ask just how these things are constituted given such and such metaphysics. For Churchland, from what I read, there is no intrinsic meaning in the brain – not for a belief, and not for a map. A map whose semantic content is entirely derived won’t be of any use against the EAAN. This I bring up before pointing out that Churchland, again, is denying the existence of ‘beliefs’ altogether – which makes his reply not exactly the most compelling one right out of the gates.

    Further, saying that such and such neural states contain ‘non-propositional semantic content’ itself won’t do much. Keep in mind that Plantinga’s argument didn’t assume the absolute invisibility of semantic content to selection — he points out problems both with epiphenomenal semantic content (which Churchland’s scheme may well fall prey to) as well as with situations where semantic content can be said to enter the causal chain. Here’s one quote to consider from Patricia Churchland: “Improvements in sensorimotor control confer an evolutionary advantage: a fancier style of representing [the world] is advantageous so long as it is geared to the organism’s way of life and enhances the organism’s chances of survival. Truth, whatever that is, definitely takes the hindmost.”

  56. Kantian Naturalist

    Yes, intentionality is going to be a big problem here, and I’m glad you raised it.

    I used to be fairly confident that intentionality could be “naturalized,” one way or the other, but now I’m not so sure — the problem here is, if someone puts forth a theory of ‘naturalized intentionality,’ and we say, “but that’s not real intentionality, because it doesn’t fit our intuitions about what intentionality is!”, how are we to adjudicate between the theory and our pre-theoretic intuitions? My preference is for the theory over the intuitions, but not always. There does seem to be something basically right about the thought that the vocabulary of agency — Sellars’ “manifest image” — has a kind of transcendental priority over empirical descriptions and explanations, and that priority cannot be easily accommodated by “naturalism”.

    So far I’ve been concerned with explicating Churchland’s neurosemantics. But I’ve also indicated here and there that I have some reservations about it, and now’s the time to make those reservations explicit: I don’t think that Churchland’s neurosemantics is really a theory of semantics. When it comes to semantic content, I far prefer Robert Brandom’s account of inferential semantics, wherein semantic content is something done by persons, not by brains, insofar as those persons are members of a community bound together by a shared linguistic tradition. On this account, the content of a concept is constituted by its inferential role — what judgments it licenses, what judgments are incompatible with it, and so on.

    What I think Churchland has given us is not really a semantic theory at all, although it is a theory of representations — in effect he shows us how to treat “representation” as a biological category. (Much the same has been said about Ruth Millikan’s work, with which I’m not yet familiar.) The reason why I resist calling Churchland’s account a semantic account is because synaptically-encoded feature-space maps do not, all by themselves, participate in norm-governed inferences. However, I do think that Churchland is right to resist the suggestion that brains are merely syntactical.

    At the heart of my thinking about these (and, indeed, many other) topics is the distinction between conceptual explication and causal explanation. A conceptual explication specifies what’s going on conceptually, e.g. specifying what other concepts we need to understand in order to have a firm and clear grasp of some problematic target-concept. A causal explanation specifies what various causal powers (objects, properties, etc.) must be realized in order to bring about the phenomenon referred to by some concept.

    Here’s an example — solubility. A conceptual explication of solubility would be, “x is soluble in y if and only if, if x is placed in y, then (ceteris paribus) it would dissolve”. A causal explanation of solubility would involve talking about the distribution of positive and negative charges over molecular surfaces.

    So here too — Churchland’s feature-space mappings may causally explain semantic content, but they aren’t the same concept as semantic content — no more than distribution of positive and negative charges ‘means the same thing as’ solubility.

    As for Patricia Churchland’s quote (from her “Epistemology in the Age of Neuroscience”, 1987, The Journal of Philosophy), I don’t take her to be saying that truth is epiphenomenal or whatever, but rather that the job of the naturalistic epistemologist is to first figure out what semantic content looks like “in the order of nature”, and then figure out how the acquisition of language affects pre-linguistic content. Only once language has come on the scene do we have anything like judgments, a fortiori only with the advent of language are there any mental contents with truth-value. That’s her view, as I understand it.

    As for me, it doesn’t seem right to say that non-discursive animals lack beliefs and desires — rather, my own view is that there is a kind of “opacity” to their beliefs and desires; we know that they have them, but we cannot tell what they are, except within certain rough-and-ready approximations.

    But I think that there’s basically the same story going on with non-discursive cognitive subjects as there is with us discursive cognitive subjects — we are justified in attributing true beliefs to them insofar as they have generally reliable cognitive maps of their environments. My cats have ‘true beliefs’ about where the food bowl is, and ‘false beliefs’ about how easy it would be to catch the birds outside my apartment. (They are both indoor cats; I don’t let them out.)

    Notice, again, that I’m trying to cash out “reliable” in terms of the homomorphic relation between domains and feature-spaces, not in terms of overall reproductive success. (By that criterion my cats are both dismal failures, since they are both fixed.)

  57. Kantian Naturalist

    StephenB, yes, I think I’m willing to just deny that our collective cognitive achievements are “built on” our individual ones — at least, I’ll deny it if “built on” is understood in a merely additive sense. I say that because a plurality of cognitive subjects can share their perspectives, and with that comes a drastic increase in our grasp of objective reality. That is, our very capacity to grasp objective reality as objective, to take it as objective-for-us, is interdependent with our capacity to engage in intersubjective discourse — to regard each other as cognitive subjects.

    Now, there is the question as to whether any cognitive grasp of objective reality is fully intelligible in light of “naturalism”. But I don’t see why the Churchlandian account — synaptically-encoded feature-space mappings that stand in a homomorphic relation to the environmental features thereby mapped, and actualized through the causal nexus between brain, body, and world — isn’t at least a promising explanation of the cognitive grasp of objective reality enjoyed by many different kinds of animals, including human beings.

    One interesting wrinkle in Churchland’s story is that the structure-preserving relation is homomorphic rather than strictly isomorphic, as Sellars had insisted. I would like to say that this makes a big difference, but I’m not entirely sure what it is. But I do want to retain Sellars’ insistence, apparently rejected by Churchland, that “signifying” is different from “picturing” — the former being the proper home of notions such as “truth” and “meaning,” and the latter being the proper home of the relation between conceptual frameworks and the world.

    So it is indeed quite central to my pragmatism that nothing of the discursive order, just as such, stands in a representational relation to anything in the natural order, just as such. However, I do worry that this commitment ultimately derives from taking for granted the conception of nature grounded in modern science, and there could be very good reasons for rejecting, or at least questioning, the dominance of that particular conception.

  58. KN: Pardon a simple case — such as how “everybody” knows that the objectors to Columbus thought the world was flat and that he proved it was round. In short, a strong consensus can be spectacularly in error. KF

  59. Kantian Naturalist

    I’m not sure what the point of that is supposed to be, KF — nothing I’ve said anywhere suggests that, on my view, we have a completely correct grasp of objective reality at any given time.

    My point was that — and I thought I’d made this perfectly clear, no? — we must have a partial grasp of objective reality in order for there to be the fallible-but-corrigible process that is empirical inquiry, and that that partial grasp is intelligible on naturalistic terms, as per Churchland’s account of neurobiological representations.

    In fact, on his account, any creature that has any sort of synaptically-encoded feature-space mapping will, just for that reason, have some cognitive grasp of objective reality. That strikes me as eminently plausible and very likely true — whereas I take it that it strikes most of you here as completely daft — and so the conversation continues!

  60. I say that because a plurality of cognitive subjects can share their perspectives, and with that comes a drastic increase in our grasp of objective reality.

    How does a plurality of inaccurate perspectives increase our grasp of objective reality?

    From one perspective, I’m holding a banana.
    From another, I’m holding love.
    From another, I’m holding nothing.
    From another, I’m holding the universe.

    Therefore, the objective reality is? _________________

  61. Kantian Naturalist

    I say that because a plurality of cognitive subjects can share their perspectives, and with that comes a drastic increase in our grasp of objective reality.

    I don’t think that approach will work for reasons stated earlier. If an individual naturalist, who does not know that an apple is an apple, interacts with a million other naturalists, each of whom is ignorant in the same way, how does social interaction and time cause everyone concerned to know the apple for what it is?

    Now, there is the question as to whether any cognitive grasp of objective reality is fully intelligible in light of “naturalism”.

    Since naturalism does not recognize the ontological distinction between the knower and the thing known, that would, indeed, be a problem.

    But I don’t see why the Churchlandian account — synaptically-encoded feature-space mappings that stand in a homomorphic relation to the environmental features thereby mapped, and actualized through the causal nexus between brain, body, and world — isn’t at least a promising explanation of the cognitive grasp of objective reality enjoyed by many different kinds of animals, including human beings.

    In order to grasp objective reality, the knower must apprehend the essence of the thing known, not simply its features. Otherwise, he will never understand the difference between (x) and (y). He cannot simply sense all the features of x and y, add them up, and, in each case, assign a name to their sum total. He must apprehend the thing as being what it is. He must know what it is that unifies those features that help to describe it. Otherwise, he may describe the thing, but he will never know the thing as it is.

    If you sense the color of your best friend’s eyes, the texture of his hair, and the shape of his body, your sense knowledge alone provides a radically incomplete picture. Unless your intellectual knowledge informs you that he is a human being and not a giraffe, you do not really know him for “what” he is. In other words, you apprehend reality only if you know in what way he is different from every other human (sense knowledge) and in what way he is the same as every other human (intellectual knowledge).

    One interesting wrinkle in Churchland’s story is that the structure-preserving relation is homomorphic rather than strictly isomorphic, as Sellars had insisted. I would like to say that this makes a big difference, but I’m not entirely sure what it is. But I do want to retain Sellars’ insistence, apparently rejected by Churchland, that “signifying” is different from “picturing” — the former being the proper home of notions such as “truth” and “meaning,” and the latter being the proper home of the relation between conceptual frameworks and the world.

    It sounds more like an innocent intramural disagreement between two naturalists who are far more serious about the prospect of joining hands to reject the rationality of essentialism.

    So it is indeed quite central to my pragmatism that nothing of the discursive order, just as such, stands in a representational relation to anything in the natural order, just as such. However, I do worry that this commitment ultimately derives from taking for granted the conception of nature grounded in modern science, and there could be very good reasons for rejecting, or at least questioning, the dominance of that particular conception.

    Why not just accept the fact that an apple is an apple?


  63. @StephenB (61), I could not agree more.

    SB #61: I don’t think that approach will work for reasons stated earlier. If an individual naturalist, who does not know that an apple is an apple, interacts with a million other naturalists, each of whom is ignorant in the same way, how does social interaction and time cause everyone concerned to know the apple for what it is?

    Typically the naturalist is under the false notion that one can fabricate a whole [the concept of the apple] from parts [a plurality of partial, fallible perspectives].

    SB #61: He cannot simply sense all the features of x and y, add them up, and, in each case, assign a name to their sum total.

    Indeed he cannot, but this is naturalism’s core business!

    SB #61: He must apprehend the thing as being what it is. He must know what it is that unifies those features that help to describe it. Otherwise, he may describe the thing, but he will never know the thing as it is.

    Indeed. Understanding is about the whole – not the parts.

    SB #61: Why not just accept the fact that an apple is an apple?

    Indeed. Why not accept that the whole is more than the sum of its parts?

  64. KN,

    the problem here is, if someone puts forth a theory of ‘naturalized intentionality,’ and we say, “but that’s not real intentionality, because it doesn’t fit our intuitions about what intentionality is!”, how are we to adjudicate between the theory and our pre-theoretic intuitions?

    I don’t think this is a fair way to put it. Sure, intuitions play a role, but they also play a role in the ‘naturalized’ attempts: go down the chain and eventually you’re going to find someone going ‘the world just has to be like that!’ or ‘the world can’t be that way!’

    The criticisms of naturalized intentionality often involve showing their inadequacy, picking out contradictions, or showing that the move made can barely be called ‘naturalistic’ at all. I actually think all projects to ‘naturalize’ most things are pointless, because ‘naturalism’ has been put through the wringer too many times.

    Anyway, I think you may be taking on too much here. You seem to want to make use of Churchland’s argument – but then you start talking about ‘belief’ again, which Churchland afaik rejects. I don’t think you can just take Churchland’s talk of ‘maps’ and ‘accuracy’ and then just graft ‘belief’ on to it – do that, and you’re engaged in a radically different project than he is.

    As for the Patricia Churchland quote, I think what she was getting at isn’t quite what you think. Evolutionary processes do not select for ‘truth’ – they select for survival. I don’t see how you can read her to be talking about the duties of philosophers when the context is her describing evolutionary processes.

  65. Box @63,

    Yes, indeed. Understanding is about the whole – not the parts.

  66. Kantian Naturalist

    I don’t think you can just take Churchland’s talk of ‘maps’ and ‘accuracy’ and then just graft ‘belief’ on to it – do that, and you’re engaged in a radically different project than he is.

    Yes. :)

  67. KN:

    In brief again, I noted a simple instance of a widespread belief that is in fact spectacularly false, but which is reinforced by various social systems and networks.

    Going back a bit, we can see a very similar pattern with Crick’s neuron-network determinism, Marx’s Dialectical Materialism, Freud’s notions on id, ego and superego [and, as I put it crudely, the role of potty training . . . ], Skinner’s rat in an operant-conditioning maze, etc. It is not just that we do not have a perfect grasp of reality, but that we sometimes have individuals AND collectives in mutual reinforcement of error, where evolutionary materialistic views consistently end in the problems of self-referentiality. As a result, somewhere along the line, rationality is pulled out of a magician’s hat.

    That is, Plantinga, as I said above, could have named names and taken prisoners on the patterns he playfully decided to cast in terms of a silly hominid-like creature fleeing from a tiger equivalent on some sci fi world out there. The issues he raises have serious merit.

    In my simple and rough rendering, if our world is wholly material, and is wholly shaped and controlled by blind forces acting on matter through chance, necessity and time, then all phenomena must be so explained. This ends up in genetic and social conditioning on chance plus necessity, leading to a meltdown of the perceived grounds for the reliability or rational capacity of reasoning.

    Where, the exemplars I gave give cases of appealing to just such forces and factors without spotting the self-referential absurdity involved. I added the current misperception on events c. 1492, to show that the problem persists.

    Inter-subjective consensus without well grounded warrant and due reckoning with strengths and limitations in the individual as well as the collective — i.e. naked rationality accepted as its own force — will predictably end up in deep trouble.

    KF

  68. “I am my body”:
    Kantian Naturalist,

    KN: as to man’s increase of knowledge, there is an interesting ‘spiritual’ aspect to note:

    Alan Turing and Kurt Godel – Incompleteness Theorem and Human Intuition – video (notes in video description)
    http://www.metacafe.com/watch/8516356/

    Are Humans merely Turing Machines?
    https://docs.google.com/document/d/1cvQeiN7DqBC0Z3PG6wo5N5qbsGGI3YliVBKwf7yJ_RU/edit

    In fact, here is what Gregory Chaitin, a world-famous mathematician and computer scientist, said about the limits of the computer program he was trying to develop to prove that material processes could generate information:

    At last, a Darwinist mathematician tells the truth about evolution – VJT – November 2011
    Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.”
    http://www.uncommondescent.com.....evolution/

    Here is the video where, at the 30:00 minute mark, you can hear the preceding quote from Chaitin’s own mouth in full context:

    Life as Evolving Software, Greg Chaitin at PPGC UFRGS
    http://www.youtube.com/watch?v=RlYS_GiAnK8

    In fact, the limits on information generation by material processes are much more severe than the preceding indicates, in that even that modest gain of information takes an intelligent, conscious mind to program the computer in the first place:

    “The Search for a Search: Measuring the Information Cost of Higher-Level Search,” Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486

    “Conservation of Information in Search: Measuring the Cost of Success,” IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 39(5) (September 2009): 1051-1061
    http://www.evoinfo.org/

    Put even more simply, it ALWAYS takes a ‘mind’ to produce information:

    “The mechanical brain does not secrete thought ‘as the liver does bile,’ as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.”
    Norbert Wiener created the modern field of control and communication systems, utilizing concepts like negative feedback. His seminal 1948 book Cybernetics both defined and named the new field.

    “Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source — from a mind or personal agent.”
    (Stephen C. Meyer, “The origin of biological information and the higher taxonomic categories,” Proceedings of the Biological Society of Washington, 117(2):213-239 (2004).)

    The story of the Monkey Shakespeare Simulator Project
    Excerpt: Starting with 100 virtual monkeys typing, and doubling the population every few days, it put together random strings of characters. It then checked them against the archived works of Shakespeare. Before it was scrapped, the site came up with 10^35 pages, all typed up. Any matches?
    Not many. It matched two words, “now faire,” and a partial name from A Midsummer Night’s Dream, and three words and a comma, “Let fame, that,” from Love’s Labour’s Lost. The record, achieved suitably randomly at the beginning of the site’s run in 2004, was 23 characters long, including breaks and spaces.
    http://io9.com/5809583/the-sto.....or-project
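    The io9 numbers can be sanity-checked with a little arithmetic. A minimal sketch in Python, assuming a 27-symbol alphabet (26 letters plus a space — the excerpt does not say what character set the simulator actually used):

    ```python
    # Rough back-of-the-envelope check on the Monkey Shakespeare result.
    # ASSUMPTION: 27 equally likely symbols per keystroke (26 letters + space);
    # the simulator's real character set is not given in the excerpt.
    from math import log10

    ALPHABET = 27
    MATCH_LEN = 23   # longest match reported, including breaks and spaces

    # Probability that one random 23-character string equals a given target
    p_single = ALPHABET ** -MATCH_LEN
    print(f"P(single 23-char match) ~ 10^{log10(p_single):.0f}")
    # → P(single 23-char match) ~ 10^-33
    ```

    Against odds of roughly 1 in 10^33 per attempt, the reported 10^35 pages make a 23-character match unsurprising — which is the point: even astronomical numbers of trials reach only tiny targets.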

    Book Review – Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
    Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren’t chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.
    So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it’s a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
    http://www.fourmilab.ch/docume.....k_726.html
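    The review’s two headline numbers are easy to recompute. A minimal sketch in Python, using round figures I have assumed for the particle count, the age of the universe, and the Planck time (the review itself does not spell out its inputs):

    ```python
    # Recomputing the Fourmilab review's toy numbers.
    # ASSUMED round inputs: ~10^80 particles, ~13.8 Gyr, Planck time ~5.39e-44 s.
    from math import log2

    particles = 10 ** 80            # elementary particles, observable universe
    age_seconds = 4.35e17           # ~13.8 billion years in seconds
    planck_time = 5.39e-44          # seconds
    trials = particles * (age_seconds / planck_time)

    # With one outcome per trial, the largest target these trials can
    # plausibly hit is about log2(trials) bits of specificity.
    print(f"max searchable specificity ~ {log2(trials):.0f} bits")
    # → max searchable specificity ~ 468 bits

    # The genome figure: 2 bits per nucleotide (four possibilities)
    base_pairs = 582_970
    bits = 2 * base_pairs
    print(f"genome capacity ~ {bits:,} bits (~{bits / 1e6:.2f} Mbit)")
    # → genome capacity ~ 1,165,940 bits (~1.17 Mbit)
    ```

    The ~468 bits lands near the “about 500 bits” the review cites (the exact value shifts with the round numbers chosen), and the megabit genome estimate follows directly from two bits per base pair.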

    And KN, here we have you generating far in excess of 500 bits of structured, functional information while trying with all your obfuscating might to tell us that material processes can generate functional information and that a conscious, intelligent mind is not needed. Do you see the conflict between what you claim and what science is telling us, KN?

    As if the preceding was not bad enough for your preferred atheistic worldview, KN, it is now found, through quantum teleportation experiments, that matter reduces to quantum ‘information’:

    The ‘Top Down’ Theistic Structure Of The Universe and Of The Human Body
    https://docs.google.com/document/d/1NhA4hiQnYiyCTiqG5GelcSJjy69e1DT3OHpqlx6rACs/edit

    Thus KN, here we have you insisting, against all common sense and empirical evidence, that the purely material processes of your body can generate and comprehend information. But honestly, KN, what should we believe: you or our own eyes? i.e., your philosophy is ‘not even wrong’, KN!

    “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”
    William Shakespeare – Hamlet

    John 1:1
    In the beginning was the Word, and the Word was with God, and the Word was God.

    As ‘the whole is not explained by the parts’,

    John Michael Talbot & Michael Card – One Faith – music
    http://www.youtube.com/watch?v=AgYguIi7fMI

  69. semi related: Young Children Have Grammar and Chimpanzees Don’t – Apr. 10, 2013
    Excerpt: “When you compare what children should say if they follow grammar against what children do say, you find it to be almost indistinguishable,” Yang said. “If you simulate the expected diversity when a child is only repeating what adults say, it produces a diversity much lower than what children actually say.”
    As a comparison, Yang applied the same predictive models to the set of Nim Chimpsky’s signed phrases, the only data set of spontaneous animal language usage publicly available. He found further evidence for what many scientists, including Nim’s own trainers, have contended about Nim: that the sequences of signs Nim put together did not follow from rules like those in human language.
    Nim’s signs show significantly lower diversity than what is expected under a systematic grammar and were similar to the level expected with memorization.
    This suggests that true language learning is — so far — a uniquely human trait, and that it is present very early in development.
    “The idea that children are only imitating adults’ language is very intuitive, so it’s seen a revival over the last few years,” Yang said. “But this is strong statistical evidence in favor of the idea that children actually know a lot about abstract grammar from an early age.”
    http://www.sciencedaily.com/re.....131327.htm

  70. LOL: Written by Chance?
    Excerpt: “You might think that someone wrote this article. But of course, you would be mistaken. Articles are not written by people. They are the result of chance. Every intelligent person knows it. There might be some people who want you to think that articles are written by people. But this view is totally unscientific. After all, we cannot see the person who allegedly wrote the article. We cannot detect him or her in any way. The claim that this article has an author cannot be empirically verified, and therefore it must be rejected. All we have is the article itself, and we must find a scientific explanation for its origin. …”
    http://www.youroriginsmatter.c.....chance/142

  71. Kantian Naturalist

    I wouldn’t say that evolutionary processes select for survival — I would say that natural selection tends, over the long term, to eliminate those species that have non-satisficing traits or behaviors. (“Satisficing” is ‘good enough’ rather than ‘perfect’ or ‘ideal’ — as long as a species is well-adapted enough to avoid extinction, one generation at a time, then it will persist.)

    But what is selected against depends on how the organism in question makes its living. Microbes and oak trees do perfectly well without having any discernible cognitive processes at all. The Churchlands’ claim is that if an organism makes a living by doing any cognitive mapping of its environment, then those maps will tend to be reliable, in that they will be at least roughly accurate maps of parts of its environment — namely, those parts of the environment that the organism needs to be sensitive to in order to survive and flourish.

    That much is quite clear. The more difficult questions are, does this kind of representation deserve to be called semantic content? (If it does, then we have a good theory of why semantic content is part of the causal nexus.) And, what is the relation between neurophysiological representations and what we call “beliefs”?

    Now, I’ve put it forth as a highly speculative claim that what we call “true beliefs” just are the synaptically-encoded feature-domain mappings of the brain of an animal that has acquired a shared language. That might not be right, and it might not be my most considered view after I’ve pondered a bit more, but for the moment I don’t see anything wrong about it.

    (Note 1: this is part of an account of true belief — justification will require a different treatment — and for that matter I don’t think that language can be given a fully satisfactory ‘naturalistic’ treatment, because I don’t think that normative fact can be translated ‘without remainder’ into a set of naturalistic facts.)

    (Note 2: for that matter, given that I don’t think that organisms themselves can be fully explained in mechanistic terms, it’s not really clear to me what counts as “natural” or “naturalistic”.)

  72. Microbes and oak trees do perfectly well without having any discernible cognitive processes at all.

    Evolutionists also seem to do perfectly well without having any discernible cognitive processes at all.

  73. The Churchlands’ claim is that if an organism makes a living by doing any cognitive mapping of its environment, then those maps will tend to be reliable, in that they will be at least roughly accurate maps of parts of its environment — namely, those parts of the environment that the organism needs to be sensitive to in order to survive and flourish.

    And how is the accuracy of a ‘roughly accurate map’ measured? Reading Churchland, it seems to be exactly the way I said: it’s accurate if it’s a net benefit to survival. And insofar as that’s the case, it happens to be a case that Plantinga already seems willing to grant. The EAAN does not deny the ability of organisms, even complicated organisms, to somehow survive.

    You say that the organism will ‘need to be sensitive’ to certain parts of the environment to survive and flourish. I have examples, ones you seem to admit to, of species surviving and flourishing with zero beliefs, no cognitive awareness to speak of. Plantinga and others can easily cite examples of species with wrong beliefs, etc, surviving and flourishing with wrong beliefs. The sort of ‘map’ Churchland is relying on here is a map that doesn’t threaten the EAAN.

    Now, I’ve put it forth as a highly speculative claim that what we call “true beliefs” just are the synaptically-encoded feature-domain mappings of the brain of an animal that has acquired a shared language. That might not be right, and it might not be my most considered view after I’ve pondered a bit more, but for the moment I don’t see anything wrong about it.

    You’d need a lot more than this, since a synaptically-encoded feature-domain mapping of an animal brain that has acquired a shared language – putting aside all the problems this is going to have for naturalism anyway – doesn’t have to be a true belief. It can just be a belief.

    Are you sure you aren’t mixing yourself up here, trying to find some way to envision the way a belief can cohere in the brain, while subconsciously putting the EAAN itself aside?

  74. Kantian Naturalist

    But the accuracy isn’t defined in terms of net benefit to survival — the accuracy is defined in terms of how strong or weak the homomorphic relation is. Net benefit to survival is a stable indication that the organism’s nervous system instantiates a reliable cognitive mapping, but the net benefit to survival doesn’t constitute that reliability. I think that’s an important difference between what Churchland is stressing and what Plantinga concedes.

    More to the point, though, there’s this difference as well. The EAAN construes N&E as a defeater for R because, given N&E, there are four logically possible scenarios in which semantic content relates to behavior, and semantic content is causally related to behavior in only one of those scenarios. So this assigns .25 probability to R.

    But, one can easily construe Churchland as saying, “Plantinga, this scenario relies entirely on one’s pre-theoretic intuitions about what semantic content is. In light of a good scientific theory about how semantic content is realized in neurobiological processes, the probability of R is much higher.”

    Now, Churchland doesn’t quite say that because he construes beliefs as the sorts of things that only discursive animals have — if an animal doesn’t have a language, then it doesn’t have beliefs. I find this too restrictive — I’m more willing to say that non-discursive animals do have beliefs, though of very rudimentary sorts.

    But, I do think that Churchland makes a good point when he poses the question, in effect, “should we construe R — the reliability of cognitive capacities — in terms of ‘producing mostly true beliefs’ or in terms of ‘producing feature-space mappings that homomorphically resemble regions of the organism’s environment?”

    Now, I think it would be a bad answer to Churchland to just say, “but by ‘reliable cognition’ I don’t mean the latter, I mean the former!” And that’s because the latter could very well be a much better explanation of “reliable cognition,” once we’ve got a good, workable theory of what reliable cognition really amounts to in rerum natura. (Maybe one doesn’t care about how things stand in rerum natura, but then it’ll be harder to generate an argument as to how evolutionary naturalism is self-undermining.)

    Now, I happily concede that many animals can flourish without any beliefs at all, even though they clearly have some sort of very limited cognitive mappings going on in their nervous systems. I confess that I find it only a slight difficulty to attribute beliefs to deer and wolves, considerably more difficult to attribute beliefs to birds and frogs, and I have no idea what it would be to attribute beliefs to oysters or leeches — yet even oysters and leeches have nervous systems! And of course many living things are able to fare perfectly well without any cognitive activity at all — unless oak trees and daffodils are cognizing in ways that I am unable to understand.

    What is much less clear to me is that any species, the members of which do have beliefs at all, can survive and flourish over the long run and yet also have mostly false beliefs. Plantinga gives us merely logically possible scenarios (e.g. “Paul the Hominid”), which swing free of any theory about what semantic content looks like in rerum natura. Take such theories into account — Churchland’s being the one I know best, but also Millikan and Dretske have developed such theories — and I don’t think “Paul the Hominid” has the relevance that Plantinga takes it to have.

    One could say that those theories aren’t about semantic content, but then we’d need an argument as to why, and one would need an argument as to why saying that synaptically-encoded feature-space mappings don’t explain true beliefs isn’t analogous to saying, “but molecular motion can’t explain heat, because ‘molecular motion’ doesn’t mean ‘heat’, and I don’t care about molecular motion — I want to know what heat is!”

  75. KN,

    But the accuracy isn’t defined in terms of net benefit to survival — the accuracy is defined in terms of how strong or weak the homomorphic relation is.

    And natural selection does not select for stronger homomorphic relations, but for benefit to survival. Meanwhile, there can be some amount of homomorphic relation (since it’s a matter of degree) even in creatures that utterly lack cognizance – and there can be a creature that is selected for even though its homomorphic relation is less accurate, or not accurate at all.

    This is one reason I keep saying that your argument here looks as if it’s tackling a completely different problem than Plantinga is raising. You seem to want to make this an argument about what constitutes a belief on the neurological level, but that really isn’t something Plantinga cares about unless it’s going to impact what the EAAN is advancing about reliability specifically given E&N.

    The EAAN construes N&E as a defeater for R because, given N&E, there are four logically possible scenarios in which semantic content relates to behavior, and semantic content is causally related to behavior in only one of those scenarios. So this assigns .25 probability to R.

    I think you’ve misunderstood Plantinga on this point. I’d like to see you quote where he says that there’s a .25 probability of R.

    Now, Churchland doesn’t quite say that because he construes beliefs as the sorts of things that only discursive animals have

    He’s eliminative about beliefs.

    What is much less clear to me is that any species, the members of which do have beliefs at all, can survive and flourish over the long run and yet also have mostly false beliefs. Plantinga gives us merely logically possible scenarios (e.g. “Paul the Hominid”), which swing free of any theory about what semantic content looks like in rerum natura.

    The problem is, again, the evidence seems to support Plantinga here. You’re willing to concede that populations can survive and flourish even when they altogether lack beliefs about their environment, or possibly altogether. That’s a pretty considerable concession. You don’t dispute that they can survive and flourish even if they have false beliefs. But somehow, the moment a population of organisms (or even a single organism?) is capable of belief formation (and again, Churchland is eliminative about beliefs), now they’re guaranteed that most of their beliefs are accurate? No, that doesn’t wash.

    One could say that those theories aren’t about semantic content, but then we’d need an argument as to why, and one would need an argument as to why saying that synaptically-encoded feature-space mapping don’t explain true beliefs isn’t analogous to saying, “but molecular motion can’t explain heat, because ‘molecular motion’ doesn’t mean ‘heat’, and I don’t care about molecular motion — I want to know what heat is!”

    The funny thing is, if you don’t subtract all the secondary qualities from heat – those qualities get reassigned to the mental – then no, you haven’t explained heat after all. Not completely.

  76. Kantian Naturalist

    It’s true that if an adaptation doesn’t contribute to inclusive fitness, then that adaptation will tend to be lost, but it seems clear to me that better cognitive mapping does and would contribute to inclusive fitness.

    What doesn’t make sense to me is the thought that an organism could survive and flourish even if its cognitive mappings of its environment were utterly unreliable. And what I’m denying, really, is that “tending to produce true beliefs” is the right way to think about what “reliable cognition” really amounts to.

    Hence my conditional: if a living thing carries out cognitive processes, then those processes will tend to be reliable, because unreliable cognitive processes prevent organisms from correlating their perceptual input and motor output in ways that are necessary for accomplishing the organism’s practical goals — including reproduction.

    It is true that I’m not so much trying to respond to the EAAN from within, but rather reject the EAAN because it rests upon an inadequate notion of what counts as “reliable cognition.”

    That said, I do accept the point above that Churchland is an eliminativist about beliefs. I am not. Where I disagree with Churchland is that I don’t think that “folk psychology” (as he calls it) is an empirical theory to begin with. I think it’s a transcendental presupposition of discursive subjectivity and moral agency.

    (Note: Churchland doesn’t like the transcendental/empirical distinction because he thinks that Quine’s rejection of the analytic/synthetic distinction dispenses with it, whereas I think that Quine’s rejection of the analytic/synthetic distinction is thoroughly mistaken. For that matter, I think that the a priori/a posteriori distinction is of paramount importance. So if by ‘naturalism’ one means ‘the rejection of a priori knowledge’, then I am not a naturalist.)

  77. It’s true that if an adaptation doesn’t contribute to inclusive fitness, then that adaptation will tend to be lost, but it seems clear to me that better cognitive mapping does and would contribute to inclusive fitness.

    On what grounds? Vague intuition? You’ve already granted that having absolutely no cognitive mapping is still compatible with processes that contribute to inclusive fitness. You’ve likewise granted that false beliefs can contribute to inclusive fitness. So where are you getting this?

    If at the end of the day what you have is a feeling, alright. But then all this talk about Churchland and alternative schemas for belief is a sideshow – it’s not really doing any work, because the same problems obtain. The intuition does the work.

    What doesn’t make sense to me is the thought that an organism could survive and flourish even if its cognitive mappings of its environment were utterly unreliable. And what I’m denying, really, is that “tending to produce true beliefs” is the right way to think about what “reliable cognition” really amounts to.

    Again, you have no problem with the idea of an organism surviving and flourishing with zero cognizance. If you want to get technical, even Plantinga doesn’t argue that every single belief of an organism must be utterly wrong given E&N. ‘Low or inscrutable.’

    In fact, I can bulk this up more. Can a sentient organism have subconscious or non-conscious neurobiological processes? Can these impact behavior? Can these be positive in terms of selection? If so, well, then the problem just got a whole lot bigger.

    Hence my conditional: if a living thing carries out cognitive processes, then those processes will tend to be reliable, because unreliable cognitive processes prevent organisms from correlating their perceptual input and motor output in ways that are necessary for accomplishing the organism’s practical goals — including reproduction.

    That’s not logically required by your own admission. Plantinga has given counterexamples of beliefs that are false yet are nevertheless adaptive – and really, that’s not some novel contribution on his part. Plenty of people recognize that possibility. Speaking in terms of E&N, the ‘unreliability’ of the cognitive process – the lack of its truth-tracking – will only cause harm if it results in negative actions… but that’s not necessarily the case.

    There’s that Churchland quote again: ‘Improvements in sensorimotor control confer an evolutionary advantage: a fancier style of representing [the world] is advantageous so long as it is geared to the organism’s way of life and enhances the organism’s chances of survival. Truth, whatever that is, definitely takes the hindmost.’

    It is true that I’m not so much trying to respond to the EAAN from within, but rather reject the EAAN because it rests upon an inadequate notion of what counts as “reliable cognition.”

    And I’ve pointed out how the problem is still going to be present even if you start talking about maps instead of beliefs. I also don’t think you can reject the EAAN (due to Churchland’s suggestion) and still talk about beliefs.

    That said, I do accept the point above that Churchland is an eliminativist about beliefs. I am not. Where I disagree with Churchland is that I don’t think that “folk psychology” (as he calls it) is an empirical theory to begin with. I think it’s a transcendental presupposition of discursive subjectivity and moral agency.

    The Churchlands aren’t all that crazy about subjectivity either.

    So if by ‘naturalism’ one means ‘the rejection of a priori knowledge’, then I am not a naturalist.

    I’m not so sure about the rejection of a priori knowledge, but a transcendental presupposition of moral agency and subjectivity may seal the deal.

  78. Kantian Naturalist

    You’ve already granted that having absolutely no cognitive mapping is still compatible with processes that contribute to inclusive fitness. You’ve likewise granted that false beliefs can contribute to inclusive fitness. So where are you getting this?

    I granted the first point, but not the second. In saying that “having absolutely no cognitive mapping is still compatible with processes that contribute to inclusive fitness”, I had in mind those living things that don’t carry out any cognitive processes at all — like plants, fungi, prokaryotes, single-celled eukaryotes, and so on. In my conception of things, cognition kicks in only once there’s a layer of neurons between sensory neurons and motor neurons. In other words, if an organism has some neurons mediating between its sensory neurons and motor neurons, then it counts as a very rudimentary cognizer. And it could, for all that, lack consciousness. I have no firm or settled opinions about what generates consciousness or when it arose. I have a slightly firmer grasp on what I think generates rationality and when that arose.

    Can a sentient organism have subconscious or non-conscious neurobiological processes? Can these impact behavior? Can these be positive in terms of selection? If so, well, then the problem just got a whole lot bigger.

    I would answer “yes” to the first two questions, and “I don’t know” to the third. In what ways did the problem become bigger?

    Plantinga has given counterexamples of beliefs that are false yet are nevertheless adaptive – and really, that’s not some novel contribution on his part. Plenty of people recognize that possibility. Speaking in terms of E&N, the ‘unreliability’ of the cognitive process – the lack of its truth-tracking – will only cause harm if it results in negative actions… but that’s not necessarily the case.

    The mere possibility isn’t terribly interesting here, because showing that P is possible only shows us that it is not necessary that ~P. So, supposing it is logically possible that an organism could have reliable cognitive mapping and yet also have false beliefs — well, OK, I’m not really sure if this is possible or not, but I’ll grant that it’s not obviously impossible, the way “square circle” is. That means that the relation between “having reliable cognitive processes” and “having (mostly) true beliefs” is not analytic, doesn’t hold across all possible worlds, etc.

    But that’s OK — it’s still an attractive candidate for a good theory about what true beliefs look like in rerum natura. (But, I must immediately add, how things are in rerum natura is not the only conceptual framework we have, and for many purposes, not even the most important one.)

    I’m not so sure about the rejection of a priori knowledge, but a transcendental presupposition of moral agency and subjectivity may seal the deal.

    Quite possibly, yes. And if I were fully convinced of that, then I’d happily cease regarding myself as a naturalist. The reality of moral agency is much more important to me than writing a blank check made out to scientific realism. Still, I’d like to have my cake and eat it, too, if I can. (Don’t we all?)

  79. KN,

    I granted the first point, but not the second.

    You’re going to deny that a false belief can contribute to fitness? This seems so obvious that I’m not sure you’re really saying it.

    I would answer “yes” to the first two questions, and “I don’t know” to the third. In what ways did the problem become bigger?

    Again, I’m pretty surprised it’s number three that’s making you hesitant. Really?

    The problem becomes bigger for a few reasons. First, because it drives home the point that the same live situation at work with the non-sapient organisms is at work with the sapient ones. For some reason you’re saying you’re supremely skeptical that it’s even possible for a false belief to be beneficial to an organism, but you’re granting that organisms which totally lack beliefs can engage in beneficial behaviors. Okay, but hybrid possibilities are live too – you can have an organism whose behaviors are partly mediated by beliefs, partly mediated by those subconscious/nonconscious processes – and said processes can also seep in and affect the conscious ones.

    The mere possibility isn’t terribly interesting here, because showing that P is possible only shows us that it is not necessary that ~P.

    I think it’s more damaging in this context unless an argument can be made that it’s a priori unlikely. I actually think arguing that it IS a priori unlikely is even more damaging to naturalism than just accepting the EAAN, biting the bullet, and trying to straddle some kind of super skeptical/pragmatic compromise.

    Quite possibly, yes. And if I were fully convinced of that, then I’d happily cease regarding myself as a naturalist. The reality of moral agency is much more important to me than writing a blank check made out to scientific realism. Still, I’d like to have my cake and eat it, too, if I can. (Don’t we all?)

    Why? What’s the cake there? What’s so important about thinking of yourself as a naturalist anyway?

  80. semi related to KN’s rut with naturalism: even though the researchers in the following study found evidence directly contradicting what they had expected to find, they were/are so wedded to the materialistic/naturalistic view of reality, the view of “I am my body”, that it seems sadly impossible for them even to conceive that they may be wrong in their naturalistic presuppositions, or to admit the possibility of the reality/truth of the soul, i.e. of the “I am a soul distinct from my body” view of reality.

    ‘Afterlife’ feels ‘even more real than real,’ researcher says By Ben Brumfield, CNN – Wed April 10, 2013
    Excerpt: “If you use this questionnaire … if the memory is real, it’s richer, and if the memory is recent, it’s richer,” he said.
    The coma scientists weren’t expecting what the tests revealed.
    “To our surprise, NDEs were much richer than any imagined event or any real event of these coma survivors,” Laureys reported.
    The memories of these experiences beat all other memories, hands down, for their vivid sense of reality. “The difference was so vast,” he said with a sense of astonishment.
    Even if the patient had the experience a long time ago, its memory was as rich “as though it was yesterday,” Laureys said.
    http://www.cnn.com/2013/04/09/.....periences/

  81.
    Kantian Naturalist

    As a matter of mere logical possibility, I suppose that a false belief could contribute to inclusive fitness — Plantinga’s “Paul the Hominid” example — but I would need an account of just how it is that the false belief contributes to overall fitness before I regarded this as more than a mere logical possibility. So, sure, a false belief can contribute to inclusive fitness — but only in the thinnest and most uninteresting sense of “can.” Better?

    The problem becomes bigger for a few reasons. First, because it drives home the point that the same live situation at work with the non-sapient organisms is at work with the sapient ones. For some reason you’re saying you’re supremely skeptical that it’s even possible for a false belief to be beneficial to an organism, but you’re granting that organisms which totally lack beliefs can engage in beneficial behaviors. Okay, but hybrid possibilities are live too – you can have an organism whose behaviors are partly mediated by beliefs, partly mediated by those subconscious/nonconscious processes – and said processes can also seep in and affect the conscious ones.

    In the Brandom-influenced discourse I’m using, I’ve been using “sapient” to mean “being able to play the game of giving and asking for reasons; responsiveness to reasons as such”. Is that how you’re using “sapient” here? I want to make sure we at least have the same basic vocabulary before commenting further.

    Another point, though, to clarify further what I’m trying to do here. I embrace both semantic and epistemic holism — though I’m aware that there are criticisms of those views that I haven’t really worked through in much detail, so one might think that I’m not really entitled to hold those views.

    Be that as it may — the view that I’m interested in defending is a two-tiered (or maybe “dual-aspect”?) model of epistemic/semantic holism: there’s the level of the holistically interconnected synaptically-encoded domain-space mappings, and then there’s the level of the holistically interconnected beliefs, thoughts, desires, and so forth.

    And what I’ve been denying is that the two levels can be completely divorced from one another, such that an animal could have practically reliable cognitive mappings of its environment, and yet also have systematically false beliefs. This denial isn’t based on logical considerations; rather, I think that, on the best theory we presently have of what semantic content looks like in rerum natura, that’s just not how it works. Maybe elsewhere in the universe, or in some possible world, but not here on Earth.

    I actually think arguing that it IS a priori unlikely is even more damaging to naturalism than just accepting the EAAN, biting the bullet, and trying to straddle some kind of super skeptical/pragmatic compromise.

    That’s an interesting suggestion. What do you have in mind?

    I don’t accept the EAAN because I reject Plantinga’s conceptualization of “reliable cognition.” If I accepted his way of framing the problem, then I’d find the EAAN far more compelling. Rejecting his conceptualization of “reliable cognition” in favor of Churchland’s neurosemantics allows for a much tighter connection between behaviors and beliefs than what Plantinga is willing to allow on a priori grounds alone.

    Why? What’s the cake there? What’s so important about thinking of yourself as a naturalist anyway?

    Well, I’m a scientific realist, and I think that a scientific metaphysics is clearly the right way to go. At present I see no reason to believe that scientific approach to metaphysics yields metaphysical naturalism. For me, the interesting question is how to be both a scientific realist and a moral realist. It’s commonly assumed that moral realism has anti-naturalist presuppositions or implications, such that scientific realism and moral realism are incompatible if SR commits us to metaphysical naturalism.

    But I think that assumption is mistaken; I think that ethical norms, though sui generis in a sense, are nevertheless grounded in biological norms. For that matter, I have no problem at all in saying that some animals are themselves moral agents — as argued here — and not just moral patients. The really important thing to do here would be to liberate our concept of “nature” from the tyranny of the causal-mechanistic conception of nature, from Epicureanism.

  82.
    Kantian Naturalist

    At present I see no reason to believe that scientific approach to metaphysics yields metaphysical naturalism.

    Whoops!! That should have read,

    At present I see no reason not to believe that a scientific approach to metaphysics yields metaphysical naturalism.

    (I suppose my saying so will elicit the usual response from BornAgain77.)

  83. One wonders, if the “inclusive fitness” of a belief mattered in relationship to whether or not it was true, why have most humans believed false things throughout their history? False, when one considers that humans believe and have believed many mutually exclusive things.

    One might argue that the “farther out” one’s beliefs stray from their immediate range of physical interaction, the less fit such beliefs need be. Thus, while almost everyone believes that a fall from a great height will likely kill you, we have fundamental, mutually exclusive disagreements about whether or not there is a god, and whether morality is objective or just some sort of social arrangement. Apparently, the more esoteric (far removed from physical action and response) beliefs become, the more likely it is that they are erroneous – because they have no clear effect one way or the other on the immediate-response beliefs.

    So, all one can look to is the distribution of populations in the world: what esoteric, or “farther away,” beliefs are dominant in the world? Since that is all we really have to go on, it is clear that belief in god and objective morality have won the day to date.

    It is interesting that KN keeps championing a view that has virtually no inherent value in his “fitness” (or “usefulness”) landscape. It’s not like there are more than a handful of people holding such a view. Obviously, it’s an evolutionary dead end – untrue, by any naturalist meaning of the concept of what “true” means.

    Why bother espousing it?

  84.
    Kantian Naturalist

    In re: 83, the fact that most human cultures have had false beliefs is no objection to evolutionary naturalism — quite the opposite, since evolutionary naturalism actually explains why that is so:

    this justly deflationary estimation of our human cognitive credentials leads us to predict that typical human theories — about the origins of mankind, about the structure of the heavens, about the origins of the universe, about the nature of disease, about the causes of motion, and about the nature of life — will be hopelessly parochial, culturally various, and strictly false. And so they have been. The compulsive Animism that dominated primitive human cultures; the celebrated Seven Days of Creation at the hands of a Great God embraced by a more recent culture; the Garden-of-Eden account of human origins; the flat, immobile earth enclosed in a Star-flecked Sphere that rotates daily; the Invading-Demon theory of disease; the Eternal Reward/Punishment account of the authority of moral imperatives; the Vital Spirit theory of Living Creatures; all of these, and countless other cognitive embarrassments, typically advanced by and celebrated in the world’s popular religions, are just the sorts of benighted stories that you would expect of brains originally selected primarily for their capacity to engage in reproductively successful behaviors within their enveloping environmental niche. So far then, the predictions of Evolutionary Naturalism are nicely in accord with the (often embarrassing) facts of historical human cognition. There is no conflict with the empirical facts here. Just the reverse.

    And yet, there is an upside to the evolutionary story as well, whose outline will serve to bring this essay to a close. I begin by inviting the reader to consider a broader conception of representation, and of successful representation, than that embodied in the familiar framework of broadly sentence-like representations, and of their truth. There are many motives for broadening our conception here, but the most immediately relevant in the present context is that the vast majority of biological creatures throughout the long history of life on Earth have had no capacity whatever for expressing or manipulating representational vehicles even remotely like sentences, and hence no capacity for ever achieving the peculiarly sentential virtue of truth. They have been using other representational schemes entirely, schemes that display dimensions of success and failure quite different from the familiar dichotomy of truth vs. falsity.

    Cognitive Neurobiology has already given us an opening grip on what those more primitive, pre-linguaformal schemes of representation consist in, and of how they can embody information about any creature’s immediate sensory and practical environment. The suggestion, currently under vigorous development, is that, in response to the ongoing statistical profiles of their complex sensory inputs, nervous systems (even very simple ones) typically develop a high-dimensional map of the difference-and-similarity structure of the abstract features typically instanced in and encountered in their sensory environments. The development of such maps is typically achieved by post-natal processes such as Hebbian learning, which has long been known to sculpt any creature’s synaptic connections, and hence its acquired internal maps of feature-spaces, in accordance with the fine-grained statistical structures of the creature’s sensory inputs.

    The take-home point for the present discussion is that the dominant scheme of representation in biological creatures generally, from the Ordovician to the present, is the internal map of a range of possible types of sensorily accessible environmental features. Not a sentence, or a system of them, but a map. Now a map, of course, achieves its representational successes by displaying some sort of homomorphism between its own internal structure and the structure of the objective domain that it purports to portray. And unlike the strictly binary nature of sentential success (a sentence is either true or it’s false), maps can display many different degrees of success and failure, and can do so in many distinct dimensions of possible ‘faithfulness,’ some of which will be relevant to the creature’s practical (and reproductive) success, and many of which will not.

    The point of this brief excursion into Cognitive Neurobiology is that, if we broaden our conception of representational activity, in biological nervous systems, to embrace the synaptically embodied feature-space map, then we can find a perspective from which the reproductively-focused selection pressures on the creatures that develop them will exert at least an indirect pressure in favor of the capacity for generating accurate cognitive maps, for it is precisely those maps that subsequently govern the creature’s practical behaviors, including its reproductive behaviors. On the whole, good maps will serve the creature better than will poor ones. Accordingly, Evolutionary Naturalism suggests that there will be a strong tendency for living creatures to develop cognitive feature-maps that are at least roughly accurate partial portrayals of the practical environment in which the creature must make its way. This presumption falls well short of heralding Truth for such representations. But it does serve to explain how pre-human creatures can achieve penetrating internal representations of remarkable intricacy and accuracy, at least on some accountings of accuracy, all within a purely naturalistic universe.

    It may also explain how humans, too, mostly manage to do it, for the great bulk of human cognition is sublinguaformal as well. And on the negative side, it may also explain why our theories about domains that are far removed from our immediate practical experience and control are typically so benighted. The explanation is that, in such domains, our native cognitive mechanisms are plainly “in over their heads.” To achieve cognitive success in those more rarefied domains, we need the additional armamentarium of the institutions of modern science, and most especially, their vital means for transcending our native sensory and manipulative limitations. Only then will we have a reasonable chance at cognitive success of any sort, whether accurate maps or true theories. (Churchland, “Is Evolutionary Naturalism Self-Defeating?” Philo 12(2), 2009, pp. 139-40. All emphasis original.)

  85. So, sure, a false belief can contribute to inclusive fitness — but only in the thinnest and most uninteresting sense of “can.” Better?

    Not really. Still amazed.

    Paul the Hominid is a pretty fun example, but at the end of the day we’re just talking about a false belief linked to beneficial behavior. “Smoking tobacco is an offense to the aliens who live on Mars, and they will wreak retribution on those who engage in this practice.”

    I want to clarify further. Are you suggesting that Plantinga is the one who thought up the idea that false beliefs can be linked to beneficial behavior? I mean, you’re aware that this isn’t something novel on his part, right?

    And what I’ve been denying is that the two levels can be completely divorced from one another, such that an animal could have practically reliable cognitive mappings of its environment, and yet also have systematically false beliefs. This denial isn’t based on logical considerations; rather, I think that, on the best theory we presently have of what semantic content looks like in rerum natura, that’s just not how it works. Maybe elsewhere in the universe, or in some possible world, but not here on Earth.

    Well, then there are more problems here.

    One is that I understand Plantinga’s argument to be an a priori argument, not an a posteriori one. The giveaway is that it’s not just an argument about humans, but about organisms given E&N generally. In that sense, I don’t think a reply to Plantinga based on what you consider to be an at-present tendentiously held a posteriori theory of humanity specifically is really a valid reply.

    Second, you say ‘systematically false’. But Plantinga’s argument doesn’t require ‘systematically false’; it just requires that the probability of reliability be low or inscrutable. So that reply doesn’t seem to work either, since it seems to require that Plantinga deny organisms can EVER have, one way or another, a true belief. That’s not required.

    That’s an interesting suggestion. What do you have in mind?

    I don’t accept the EAAN because I reject Plantinga’s conceptualization of “reliable cognition.” If I accepted his way of framing the problem, then I’d find the EAAN far more compelling. Rejecting his conceptualization of “reliable cognition” in favor of Churchland’s neurosemantics allows for a much tighter connection between behaviors and beliefs than what Plantinga is willing to allow on a priori grounds alone.

    You keep going back to Churchland, but Churchland is eliminative about beliefs altogether. Meanwhile Churchland’s neurosemantics, while obscuring the question a bit, still doesn’t really reply to the ‘low or inscrutable’ charge. What he tries to do is make that entire line of questioning invalid by denying ‘beliefs’ to begin with. But once you’re accepting ‘belief’ talk, you don’t even get the benefit of Churchland’s attempt at a dodge – and you still have the difficulties I mention. I keep saying I think you’re raising this as a reply to a question Plantinga doesn’t care about, and I stick by that.

    As for the suggestion – this isn’t something Plantinga himself said, but it’s something I think falls out in an interesting way from his argument. I think anyone attempting to defend themselves against the EAAN is ultimately going to have to either make the hyper-skeptical but pragmatic move (‘Yes, we can’t trust our beliefs; yes, we should be skeptical of everything – let’s sacrifice either E or N, or try to cook up some pragmatic ad hoc reason to still function in daily life.’) or – in the process of arguing that R given E&N is great – commit themselves to so much teleology in the evolutionary process that they’re pretty well sacrificing their naturalism anyway.

    At that point, on their own account, minds are incredibly special in the universe. Not just in our particular universe (remember, Plantinga’s argument is an a priori argument), but in universes generally, minds are such that the moment they start being capable of forming beliefs, the system is fundamentally geared towards truth and reliable cognition. Suddenly nature doesn’t seem to care just about ‘fitness’ but also about ‘truth’, and in turn about minds and the mental. Now, that fits nicely with the Logos or any number of other non-naturalistic conceptions of the world. Traditional naturalism? Not so much.

    Well, I’m a scientific realist, and I think that a scientific metaphysics is clearly the right way to go.

    Sounds like a contradiction in terms – science is limited in ways metaphysics simply isn’t. It’s especially funny you should say that since you keep bringing up Churchland, yet it’s pretty hard to square scientific realism with eliminative materialism. (It’s hard to square many things with it, but SR is among the number.) Likewise, scientific realism isn’t a problem on non-naturalism anyway.

    This goes double when you start talking about speculative ‘how will the future turn out’ science, because just about everything is up for grabs when it comes to science – and science itself has radically changed its fundamental commitments more than once.

    So, why not switch? You can ditch naturalism, have scientific realism, and no longer feel any constraints with regards to moral agency, etc. Especially when you say this:

    The really important thing to do here would be to liberate our concept of “nature” from the tyranny of the causal-mechanistic conception of nature, from Epicureanism.

    By many views, rejecting the mechanistic conception of nature is to reject naturalism on the spot. It doesn’t make you a theist. But a naturalist? Is the word really THAT meaningless now?

  86. In re: 83, the fact that most human cultures have had false beliefs is no objection to evolutionary naturalism — quite the opposite, since evolutionary naturalism actually explains why that is so:

    ‘Evolutionary naturalism’ can explain anything. So can theism.

    But more than that – if you believe religions are all false, yet also believe that following a religion can be adaptive, well… you wanted more examples of false belief contributing to fitness. You’ve just gotten some.

  87.
    Kantian Naturalist

    A few minor comments on the above:

    (1) Yes, there’s a real “content problem” with naturalism. (I once wrote a paper on this, but after it was rejected by two journals, it dawned on me that the paper is not really that good.) For the purposes of our discussion here, if “naturalism” means “accepting the causal-mechanistic conception of nature that began with Epicurean metaphysics and achieved cultural-political dominance through the Scientific Revolution”, then no, I’m not a naturalist. On the other hand, if one can be a “naturalist” by virtue of accepting a more Romantic conception of nature (Schelling, Hegel, Dewey, Merleau-Ponty), then I am a naturalist — especially since I think that recent work in dynamical systems theory puts the Romantic conception of nature on an empirical basis. (If I had to put a label on what I really think, I’d probably go with “evolutionary pantheism”.)

    (2) If Plantinga’s argument is a priori, that weakens it considerably — especially if one thinks that everything a priori is analytic, and only those wacky Kantians think otherwise. So a priori claims hold across all possible worlds (if they are necessary) or none (if they are impossible). But evolutionary naturalism, as construed here, isn’t a claim about what holds in all logically possible worlds — it’s a claim about what holds in this world. So we can conceive of a world in which animals have reliable cognitive processes and yet have false beliefs — so what? The “low or inscrutable” claim only kicks in if we cannot tell whether or not the actual world is like that. But we can — not a priori, of course, but a posteriori, by trying to figure out exactly how brains represent their environments.

    (3) There are lots of cultures that have held worldviews which are false (by our lights), but of course those people had all sorts of practical knowledge that was perfectly true — beliefs about how to build shelters, about what sorts of plants and animals were safe to eat, about which plants could be used for medicines or psychotropic drugs, about human psychology, etc. I can see the point that having a worldview contributes to inclusive fitness, if the worldview is manifested through behaviors that promote, say, group cohesion (“we are the Wolf Tribe”) — whereas the scientific worldview has a better claim on truth because technology amplifies our practical cognitive capacities, which are reality-tracking in ways that our narratives are not.

  88. For the purposes of our discussion here, if “naturalism” means “accepting the causal-mechanistic conception of nature that began with Epicurean metaphysics and achieved cultural-political dominance through the Scientific Revolution”, then no, I’m not a naturalist.

    Great. So why not abandon naturalism? You know there’s a content problem. You seem to recognize that naturalism has been tied to the mechanistic view of nature. Again, you say you want to have your cake and eat it too. What in the world is the cake here? Some vague sense of camaraderie with other people calling themselves naturalists?

    The “low or inscrutable” claim only kicks in if we cannot tell whether or not the actual world is like that. But we can — not a priori, of course, but a posteriori, by trying to figure out exactly how brains represent their environments.

    No, the ‘low or inscrutable’ claim is going to hold on the basis of the argument, if counterarguments on the same terms don’t succeed. Plantinga’s EAAN is arguing that E&N undermines one’s rationality claims before any a posteriori considerations get under way – it’s calling into question the capability of properly assessing those a posteriori claims to begin with.

    What’s more, the EAAN doesn’t really concern itself with ‘how brains represent their environments’ – any way it can represent the environment will also be a way it can be wrong. This is one reason why I keep wondering if you’re not having a completely different argument than Plantinga is.

    (3) There are lots of cultures that have held worldviews which are false (by our lights), but of course those people had all sorts of practical knowledge that was perfectly true

    The practical knowledge is irrelevant. It’s enough to show that holding false, incorrect views was still correlated with survival-promoting behavior in the population. And even there, we don’t need the historical examples – hypothetical ones will do fine. In this discussion it’s just easier to point to such.

    whereas the scientific worldview has a better claim on truth because

    There is no ‘scientific worldview’, full stop. Any given worldview is going to rely, explicitly or implicitly, on metaphysics and ‘narratives’.

    This gets worse when you look at the actual history of science. It’s one long list of very confident proclamations based on contemporary interpretations of the data, often with observations at the time ‘amplified by technology’ (a very low bar to jump), that got discarded later on.

    What’s more, talking about the ‘scientific worldview’ when you reject a key component (the mechanistic concept of nature) embraced by most of the people who try to take up that imaginary mantle – is just bizarre. I asked it earlier in this comment, but again I ask – what exactly is the cake you’re after here?

  89. The point of this brief excursion into Cognitive Neurobiology is that, if we broaden our conception of representational activity, in biological nervous systems, to embrace the synaptically embodied feature-space map, then we can find a perspective from which the reproductively-focused selection pressures on the creatures that develop them will exert at least an indirect pressure in favor of the capacity for generating accurate cognitive maps, for it is precisely those maps that subsequently govern the creature’s practical behaviors, including its reproductive behaviors. On the whole, good maps will serve the creature better than will poor ones. Accordingly, Evolutionary Naturalism suggests that there will be a strong tendency for living creatures to develop cognitive feature-maps that are at least roughly accurate partial portrayals of the practical environment in which the creature must make its way. This presumption falls well short of heralding Truth for such representations. But it does serve to explain how pre-human creatures can achieve penetrating internal representations of remarkable intricacy and accuracy, at least on some accountings of accuracy, all within a purely naturalistic universe.

    No, it doesn’t. All of the above is nothing more than a convenient narrative in support of the conclusion, lacking any significant historical evidence. Even if one assumes generally common local accuracy, there is no reason to believe that successful, broader metaphysics will be anything approaching true representations. This is similar to the misguided Darwinian idea that accumulations of microevolution can build successful macroevolutionary features.

    It is historically true that anti-naturalist metaphysics has generated the most success in the world when it comes to reproduction and long-lasting cultures. While base local interaction may be enough for non-conceptual brutes under your view, when a being becomes capable of generating coherent meta-maps and explanations, there is more going on than just the brute meeting of practical needs, because practical needs have expanded beyond the physical for such organisms to be viable.

    Such beings as humans would require a whole host of psychological (if not spiritual) support from their broader belief system, which gives them reasons to live, to pursue goals, to maintain a sense of worth. For such beings, life usually must mean something – give them a purpose and, quite often, faith even in what appears to be highly unlikely or even impossible.

    And so, under naturalism, it may be more likely that, in order for humans to survive, false metaphysical beliefs are required to satisfy their emotional/psychological needs to keep working, to keep trying, to bond together and feel that their existence has true worth and deep meaning, and that naturalist explanations are just not up to this task.

    Strip away the self-serving narrative, and there is simply no good reason to think that naturalism (which, under your argument, is the “true” map) is a map that humans need to survive, and there is plenty of historical evidence that theism (or some form of supernaturalism) is the better metaphysical mapping system, providing as good local, physical cognition as any other, but also providing for the psychological and emotional needs higher-sentient beings require to persevere in what is often a terribly harsh world.

    Unlike plants and microbes, humans don’t just require a good map; they require a good reason to use the map to accomplish anything – even their own survival.

  90.
    Kantian Naturalist

    Nullasalus — let’s not forget about the secret handshake. And the secrets of all worldly power when I’m initiated into the 33rd Degree of the Order of Naturalistic Metaphysicians.

    As I said above, being a “naturalist” is not all that important to me — I want to figure out what the most plausible view is, and then worry about what label I want to attach to it. If it turns out to be a sub-variety of naturalism, fine; if not, that’s fine, too.

    William, it seems to me that you’re raising a slightly different issue here — a question of whether a wholly naturalistic world-view can satisfy our need for existential significance and orientation. That’s an important question, of course — perhaps more important than the narrowly epistemological questions at stake in the Plantinga-Churchland debate! — but it’s also a different question.

  91. Kantian Naturalist #87: For the purposes of our discussion here, if “naturalism” means “accepting the causal-mechanistic conception of nature (…) ”, then no, I’m not a naturalist.

    You have made this statement many times. Maybe it’s time to get more clarity on this issue. Allow me some questions and remarks. To be clear, do you contest the (causal-mechanistic) laws of nature in any way? And secondly, are you proposing a substitution for a causal-mechanistic conception of nature?

    Kantian Naturalist #87: On the other hand, if one can be a “naturalist” by virtue of accepting a more Romantic conception of nature (Schelling, Hegel, Dewey, Merleau-Ponty), then I am naturalist (…)

    Those romantic guys had a real beef with science:

    Wiki: In contrast to Enlightenment mechanistic natural philosophy, European scientists of the Romantic period held that observing nature implied understanding the self, and that knowledge of nature “should not be obtained by force.” They felt that the Enlightenment had encouraged the abuse of the sciences, (…)

    About the whole and its parts … :

    Wiki: Romanticism advanced a number of themes: it promoted anti-reductionism (the whole was more valuable than the parts alone) (…).
    It was also in this way that Romanticism was very anti-reductionist: they did not believe that inorganic sciences were at the top of the hierarchy but at the bottom, with life sciences next and psychology placed even higher. This hierarchy reflected Romantic ideals of science because the whole organism takes more precedence over inorganic matter, and the intricacies of the human mind take even more precedence since the human intellect was sacred and necessary to understanding nature around it and reuniting with it.

    What strikes me is that Romanticism seems to conflate dead and living nature: all of nature, including us, should be regarded as a whole. This non-distinction is essential, for there are wholes in living nature in contrast with the absence of wholes in dead nature – a distinction that KN also rejects.

    Kantian Naturalist #87: (…) especially since I think that recent work in dynamical systems theory puts the Romantic conception of nature on an empirical basis. (If I had to put a label on what I really think, I’d probably go with “evolutionary pantheism”.)

    Why do you believe that dynamical systems theory puts the Romantic conception of nature on an empirical basis?

  92.
    Kantian Naturalist

    There are different ways of responding to the causal-mechanistic conception of nature, and I haven’t figured out which one I want to align myself with the most.

    For example, Kauffman proposes a “fourth law of thermodynamics” to explain how teleological systems, such as living things, arise from non-teleological systems, such as geochemical cycles (or whatever). It’s tempting. As a non-scientist with an interest in popularizations of science, I don’t feel competent to judge it on its empirical or theoretical merit. But speaking philosophically, I can only say that something like that has to be right, if we are to attain a satisfying metaphysical picture that unifies the sciences of the non-living and the sciences of the living.

    I’m not (yet) suggesting that we replace the causal-mechanistic conception of nature with anything else, but only that we supplement it — that the causal-mechanistic conception of nature is not the whole truth about nature. So one need not appeal to anything ‘beyond’ the natural in order to contest the dominance of that conception.

    The Romantics, it seems to me, thought that they had to contest “science” itself because, in their conception, science itself was Newtonian: reductionistic, quantitative, life-less and life-denying. But our science today is not wedded to the Newtonian vision, in large part because we know how to work with much more complex systems than Newton did. And because of work by people like Varela and Kauffman, acknowledging the reality of teleology is not obviously “un-scientific”.

    More on this later — off to work!

  93.
    Kantian Naturalist

    One interesting question here is, just where is the relevant line to be drawn?

    John McDowell, for example, draws a distinction between “the realm of law” and “the space of reasons” (taking his cue from Sellars there). And McDowell insists that “the space of reasons” is sui generis with regard to “the realm of law.” But he doesn’t want to treat this as a metaphysical distinction — he wants to be a relaxed, liberal naturalist, and it is crucial to his project that the space of reasons be natural. So he has to insist that the realm of law is not the whole truth about nature.

    McDowell’s work is pretty complicated, and I have some complicated attitudes towards it, but I’m willing to go this far, at any rate: (1) the basic distinction between “the realm of law” and “the space of reasons” is a very important distinction — though McDowell might be wrong in thinking that just making this distinction is adequate; (2) insisting on this distinction makes one a non-naturalist only if one is already committed to the view that the causal-mechanical conception of nature-as-law just is the whole truth about nature. And why should one be?

  94. It strikes me that it might be useful to be a theistic naturalist, where “useful” is measured in terms of having a philosophy that is flexible enough to allow me to believe whatever I want while at the same time enabling me to slip away from potentially hairy philosophical challenges. (Having my cake and eating it too?)

    If anyone tries to pin me down with the problems inherent in a naturalistic worldview, I can explain that I’m not that kind of naturalist. Objective morality? Not a problem. You see, I’m a theistic naturalist. I’ve got morality covered. The problem of evil? Well, I don’t really need a theodicy, since I’m not that kind of theist. After all, I’m a naturalist, you know?
