
ID Foundations, 17: Stephen C. Meyer’s summary of the positive inductive logic case for design as best explanation of the FSCO/I* in DNA


(Prev.: No. 16; F/N: 17a, here)

*NB: For those new to UD, FSCO/I means: Functionally Specific Complex Organisation and/or associated Information

From time to time, we need to refocus our attention on foundational issues relating to the positive case for inferring design as best explanation for certain phenomena connected to the origins of the cosmos, life and body plans. It is therefore worthwhile to excerpt an addition I just made to the IOSE Introduction and Summary page (HT: CR), by way of an excerpt from Meyer’s reply to Falk’s hostile review of Signature in the Cell.

In addition, given the all too commonly seen basic problems with first principles of right reasoning among objectors to design theory [–> cf. here and here at UD recently . . . ], it will help to add, immediately following, remarks from the IOSE on Newton’s four rules of inductive, scientific reasoning, and on Avi Sion’s observations on inductive logic.

I trust the below will therefore help to correct many of the widely circulated strawman caricatures of the core reasoning and warrant behind the theory of intelligent design and its pivotal design inference. At least, for those willing to heed duties of care to accuracy, truth, fairness and responsible comment:

________________

>> ID thinker Stephen Meyer argues in his response to a hostile review of his key 2009 Design Theory book, Signature in the Cell:

The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . . .

The central problem facing origin-of-life researchers is neither the synthesis of pre-biotic building blocks (which Sutherland’s work addresses) nor even the synthesis of a self-replicating RNA molecule (the plausibility of which Joyce and Tracey’s work seeks to establish, albeit unsuccessfully . . . [[Meyer gives details in the linked page]). Instead, the fundamental problem is getting the chemical building blocks to arrange themselves into the large information-bearing molecules (whether DNA or RNA) . . . .

For nearly sixty years origin-of-life researchers have attempted to use pre-biotic simulation experiments to find a plausible pathway by which life might have arisen from simpler non-living chemicals, thereby providing support for chemical evolutionary theory.  While these experiments have occasionally yielded interesting insights about the conditions under which certain reactions will or won’t produce the various small molecule constituents of larger bio-macromolecules, they have shed no light on how the information in these larger macromolecules (particularly in DNA and RNA) could have arisen.  Nor should this be surprising in light of what we have long known about the chemical structure of DNA and RNA.  As I show in Signature in the Cell, the chemical structures of DNA and RNA allow them to store information precisely because chemical affinities between their smaller molecular subunits do not determine the specific arrangements of the bases in the DNA and RNA molecules.  Instead, the same type of chemical bond (an N-glycosidic bond) forms between the backbone and each one of the four bases, allowing any one of the bases to attach at any site along the backbone, in turn allowing an innumerable variety of different sequences.  This chemical indeterminacy is precisely what permits DNA and RNA to function as information carriers.  It also dooms attempts to account for the origin of the information—the precise sequencing of the bases—in these molecules as the result of deterministic chemical interactions . . . .

[[W]e now have a wealth of experience showing that what I call specified or functional information (especially if encoded in digital form) does not arise from purely physical or chemical antecedents [[–> i.e. by blind, undirected forces of chance and necessity].  Indeed, the ribozyme engineering and pre-biotic simulation experiments that Professor Falk commends to my attention actually lend additional inductive support to this generalization.  On the other hand, we do know of a cause—a type of cause—that has demonstrated the power to produce functionally-specified information.  That cause is intelligence or conscious rational deliberation.  As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process.  Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence.  This conclusion is not based upon what we don’t know.  It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information . . . .

[[In conclusion,] it needs to be noted that the [[now commonly asserted and imposed limiting rule on scientific knowledge, the] principle of methodological naturalism [[ that scientific explanations may only infer to “natural[[istic] causes”] is an arbitrary philosophical assumption, not a principle that can be established or justified by scientific observation itself.  Others of us, having long ago seen the pattern in pre-biotic simulation experiments, to say nothing of the clear testimony of thousands of years of human experience, have decided to move on.  We see in the information-rich structure of life a clear indicator of intelligent activity and have begun to investigate living systems accordingly. If, by Professor Falk’s definition, that makes us philosophers rather than scientists, then so be it.  But I suspect that the shoe is now, instead, firmly on the other foot. [[Meyer, Stephen C: Response to Darrel Falk’s Review of Signature in the Cell, SITC web site, 2009. (Emphases and parentheses added.)]

Thus, in the context of a pivotal example — the functionally specific, complex information stored in the well-known genetic code — we see laid out the inductive logic and empirical basis for design theory as a legitimate (albeit obviously controversial) scientific investigation and conclusion.

It is worth the pause to lay out (courtesy of the US NIH) a diagram of what is at stake here:

Fig I.0: DNA as a stored code exhibiting functionally specific complex digital information (HT: NIH)

In this context, to understand the kind of scientific reasoning involved and its history, it is also worth pausing to excerpt Newton’s Rules of [[Inductive] Reasoning in [[Natural] Philosophy, which he used to introduce the Universal Law of Gravitation. In turn, this — then controversial (action at a distance? why? . . . ) — law was in effect generalised from the falling of apples on Earth and the deduced rule that also explained the orbital force of the Moon, and thence Kepler’s mathematically stated empirical laws of planetary motion. So, Newton needed to render plausible how he projected universality:

Rule I [[–> adequacy and simplicity]

We are to admit no more causes of natural things than such as are both true [[–> it is probably best to take this liberally as meaning “potentially and plausibly true”] and sufficient to explain their appearances.

To this purpose the philosophers say that Nature does nothing in vain, and more is in vain when less will serve; for Nature is pleased with simplicity, and affects not the pomp of superfluous causes.

Rule II [[–> uniformity of causes: “like forces cause like effects”]

Therefore to the same natural effects we must, as far as possible, assign the same causes.

As to respiration in a man and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets.

Rule III [[–> confident universality]

The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.

For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence of experiments for the sake of dreams and vain fictions of our own devising; nor are we to recede from the analogy of Nature, which is wont to be simple, and always consonant to itself . . . .

Rule IV [[–> provisionality and primacy of induction]

In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.

This rule we must follow, that the arguments of induction may not be evaded by [[speculative] hypotheses.

In effect, Newton advocated provisional, empirically tested, reliable and adequate inductive principles resting on “simple” summaries or explanatory constructs. These were to be as accurate to reality as we experience it as we can get them; i.e. a scientific theory seeks to be true to our world, provisional though it must be. They rest on induction from patterns of observed phenomena and — through Rule II, on “like causes like” — were to be confidently projected to cases we do not observe directly, subject to correction on further observations, not to impositions of speculative metaphysical notions.

Since inductive reasoning that leads to provisionally inferred general patterns is itself now being deemed suspect in some quarters, it may help to note the following from Avi Sion, on what he descriptively calls the principle of universality:

We might . . . ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms.

Therefore, we must admit some uniformity to exist in the world.

The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs.

Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . .

The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion.

It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [[of inferred generalisations; try: “we can make mistakes in inductive generalisation . . . “] that have not been found worthy of particularization to date . . . .

If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .[[Logical and Spiritual Reflections, BK I Hume’s Problems with Induction, Ch 2 The principle of induction.]>>

_________________

In short, there is a definite positive case for design, and it pivots on what I have descriptively termed functionally specific complex organisation and associated information (FSCO/I) — and no, the concept is plainly not just an idiosyncratic notion of a no-account bloggist, but has demonstrable roots tracing to OOL researchers such as Wicken, Orgel and Hoyle across the 1970’s and into the early 1980’s; i.e. before design theory surfaced in response to such findings (another strawman bites the dust . . . ) — and its only known causally adequate source.

In addition, that design inference can be summarised in a flowchart of the scientific investigatory procedure, as was discussed, for instance, in the very first post in the ID Foundations series at UD, over two years ago:

The per aspect explanatory filter that shows how design may be inferred on empirically tested, reliable sign

The result of this can also be summarised in a quantitative expression, as has been repeatedly highlighted, here again excerpting IOSE:

xix: Later on (2005), Dembski provided a slightly more complex formula, which we can quote and simplify, showing that it boils down to a “bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity” metric:

χ = – log2 [10^120 · ϕS(T) · P(T|H)].

–> χ is “chi” and ϕ is “phi”

xx: To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:

Ip = – log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. We may observe this standard result in, say, Principles of Communication Systems, 2nd edn, Taub and Schilling (McGraw-Hill, 1986), p. 512, Sect. 13.2:

Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[–> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by

I_k = (def) log_2  1/p_k   (13.2-1)
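To make the log measure concrete, here is a minimal Python sketch (our own illustration, not from the OP or from Taub and Schilling; the helper name info_bits is hypothetical) of the Shannon–Hartley measure and its additivity:

```python
import math

def info_bits(p):
    """Information carried by an event of probability p, in bits: I = -log2(p)."""
    if not 0 < p <= 1:
        raise ValueError("p must be a probability in (0, 1]")
    return -math.log2(p)

# A fair coin toss (p = 1/2) carries 1 bit; one of four equiprobable
# DNA bases (p = 1/4) carries 2 bits.
print(info_bits(0.5))   # 1.0
print(info_bits(0.25))  # 2.0

# The log measure is what lets pieces of information "add up naturally"
# over independent events: I(p*q) = I(p) + I(q), e.g. 10 bases at 2 bits each.
print(info_bits(0.25 ** 10))  # 20.0
```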

xxi: So, since 10^120 ~ 2^398, we may “boil down” the Dembski metric using some algebra — i.e. substituting and simplifying the three terms in order — as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, where D2 = ϕS(T) and p = P(T|H)
Chi = Ip – (398 + K2), where Ip = – log2(p) and K2 = log2(D2)
That is, chi is a metric of bits from a zone of interest, beyond a threshold of “sufficient complexity to not plausibly be the result of chance”, (398 + K2). So,
(a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] and
(b) as we can define and introduce a dummy variable for specificity, S, where
(c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi =  Ip*S – 500, in bits beyond a “complex enough” threshold

  • NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S = 1.
  • E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice the independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive. (See the sketch just after this list.)
  • S goes to 1 when we have objective grounds — to be explained case by case — to assign that value.
  • That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list.
  • A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also a lottery is a sort of inverse password game in which we pay for the privilege; and where the complexity has to be carefully managed to make it winnable.)
  • An obvious example of such a zone T is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); we can also see from this case how such a string can enfold random and repetitive sub-strings.
  • Arguably — and of course this is hotly disputed — DNA’s protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.)
  • So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv – xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)
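To make the dummy variable S and the 500-bit threshold concrete, here is a minimal Python sketch of the simplified metric (as flagged in the coins bullet above; the helper name chi_500 is ours, not from the OP):

```python
def chi_500(i_p, s):
    """Simplified metric from the OP: Chi = Ip*S - 500, in bits beyond the
    500-bit solar-system threshold. s = 1 if the observed configuration is
    objectively specific to a narrow, independently describable zone T, else 0."""
    return i_p * s - 500

# 501 coins tossed at random: 501 bits of raw information, but no
# independent specification, so S = 0 and Chi locks at -500.
print(chi_500(501, 0))  # -500

# The same 501 bits arranged as an English message in ASCII code are
# independently specified (grammar, vocabulary), so S = 1.
print(chi_500(501, 1))  # 1, i.e. just beyond the threshold
```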

xxii: So, we have some reason to suggest that if something, E, is based on specific information that is describable in a way that does not just quote E, and that requires at least 500 specific bits to store, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases:

Using Durston’s Fits values — functionally specific bits — from his Table 1 to quantify Ip, and accepting functionality on specific sequences as showing specificity (so S = 1), we may apply the simplified Chi_500 metric of bits beyond the threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
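Reusing the hypothetical chi_500 helper sketched earlier, the quoted Fits values reproduce these figures directly:

```python
# Durston Table 1 Fits values as quoted above; functional sequences give S = 1.
proteins = [("RecA", 242, 832), ("SecY", 342, 688), ("Corona S2", 445, 1285)]
for name, aa_length, fits in proteins:
    print(f"{name}: {aa_length} AA, {fits} fits, "
          f"Chi: {chi_500(fits, 1)} bits beyond")
# RecA: 332, SecY: 188, Corona S2: 785 -- matching the list above.
```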

xxiii: And this raises the controversial suggestion that biological examples such as DNA — which in a living cell is much more complex than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.

So, there is no good reason to pretend that there is no positive or cogent case that has been advanced for design theory, or that FSCO/I — however summarised — is not a key aspect of such a case, or that there is no proper way to quantify the case or express it as an operational scientific procedure.

And of course, lurking in the background is the now over six months old, unanswered, 6,000-word free-kick-at-goal Darwinist essay challenge on OOL and the origin of major body plans.

However, with these now in play, let us hope (against all signs from the habitual patterns we have seen) that a more positive discussion on the merits can now ensue. END

Comments
collinb:
Just read the texts. It’s all there. Mayr. Darwin. Fodor. Dawkins. Gould.
Read them. Nothing there that can be tested nor be considered a theory. More like a bunch of vague ideas and glossy narratives. They talk about the power of DNA yet speak in terms of anatomy when discussing the evolution of something. For example, the vision system: all about adding physical parts and nothing about the DNA that would do it. Fish to tetrapods: again, all about the anatomy and nothing about the DNA changes that could bring it about. So saying accumulations of random mutations can do it rings a little hollow when all examples neglect that part. But OK, thanks collinb. Next.
Joe
April 12, 2013, 10:22 AM PDT
Lizzie via Joe:
I do find it extraordinary that those who think they’ve found a flaw in evolutionary theory so regularly demonstrate just how little they understand the theory they are attempting to critique.
Actually, I, for one, understand the theory quite well. When we strip away all the fancy terminology and the obfuscations, the theory is essentially this: particles bumping into each other over a long period of time can result in Mozart, Einstein and Tolstoy. It's not that we don't understand the theory. It's just that the theory is so preposterous that it is a joke. That's why we sometimes make fun of it and why it so often makes us laugh. :)
Eric Anderson
April 12, 2013, 10:03 AM PDT
collinb, aside from all the hypothetical models you have listed for evolution, do you know of any actual empirical evidence, whatsoever, that purely material processes, however you may classify these chance and necessity material processes, can create any molecular machines?
Venter: Life Is Robotic Software - July 15, 2012 Excerpt: “All living cells that we know of on this planet are ‘DNA software’-driven biological machines comprised of hundreds of thousands of protein robots, coded for by the DNA, that carry out precise functions,” said (Craig) Venter. http://crev.info/2012/07/life-is-robotic-software/
You see collinb, in spite of the fact that molecular machines, and extremely complex integrated systems, permeate the simplest of bacterial life, there are no detailed Darwinian accounts for the evolution of even one such machine or system:
"There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject." James Shapiro - Molecular Biologist
The following expert doesn't even hide his very unscientific preconceived philosophical bias against intelligent design,,,
‘We should reject, as a matter of principle, the substitution of intelligent design for the dialogue of chance and necessity,,,
Yet at the same time the same expert readily admits that neo-Darwinism has ZERO evidence for the chance and necessity of material processes producing any cellular system whatsoever,,,
‘,,,we must concede that there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.’ Franklin M. Harold,* 2001. The way of the cell: molecules, organisms and the order of life, Oxford University Press, New York, p. 205. *Professor Emeritus of Biochemistry, Colorado State University, USA
Michael Behe - No Scientific Literature For Evolution of Any Irreducibly Complex Molecular Machines http://www.metacafe.com/watch/5302950/
Dr. Michael Behe - "Grand Darwinian claims rest on undisciplined imagination" - video http://www.youtube.com/watch?feature=player_detailpage&v=s6XAXjiyRfM#t=1762s
“The response I have received from repeating Behe's claim about the evolutionary literature, which simply brings out the point being made implicitly by many others, such as Chris Dutton and so on, is that I obviously have not read the right books. There are, I am sure, evolutionists who have described how the transitions in question could have occurred.” And he continues, “When I ask in which books I can find these discussions, however, I either get no answer or else some titles that, upon examination, do not, in fact, contain the promised accounts. That such accounts exist seems to be something that is widely known, but I have yet to encounter anyone who knows where they exist.” David Ray Griffin - retired professor of philosophy of religion and theology
Of related note to the fact that Darwinists have ZERO empirical evidence of Darwinian processes EVER producing a molecular machine, here are several examples showing that intelligence can do just that:
(Man-Made) DNA nanorobot – video https://vimeo.com/36880067 Whether Lab or Cell, (If it's a molecular machine) It's Design - podcast http://intelligentdesign.podomatic.com/entry/2013-01-25T15_53_41-08_00
Thus collinb, while it may be fun for one's imagination to entertain various ways that evolutionary processes could create the complexity we see in life, the bottom line is that such musings are really, as far as empirical science itself is concerned, an exercise in futility if evolutionists cannot produce any actual observational evidence that it does actually create complexity!
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010 http://intelligentdesign.podomatic.com/player/web/2010-12-23T11_53_46-08_00
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf
Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work
Supplemental note: An Early Critique of Darwin Warned of a Lower Grade of Degradation - Cornelius Hunter - December 2012 Excerpt: And as for Darwin’s grand principle, natural selection, (Adam Sedgwick asked Charles Darwin) “what is it but a secondary consequence of supposed, or known, primary facts?.” Yet Darwin had smuggled in teleological language to avoid the absurdity and make it acceptable. For Darwin had written of natural selection “as if it were done consciously by the selecting agent.” Yet again, this criticism is cogent today. Teleological language is rampant in the evolutionary literature. http://darwins-god.blogspot.com/2012/12/an-early-critique-of-darwin-warned-of.html
bornagain77
April 12, 2013, 08:53 AM PDT
Just read the texts. It's all there. Mayr. Darwin. Fodor. Dawkins. Gould. Trolling with inane questions is unbecoming.
collinb
April 12, 2013, 08:16 AM PDT
Is this alleged theory structure in any peer-reviewed journals? Einstein had his theory in a peer-reviewed journal.
Joe
April 12, 2013, 07:03 AM PDT
I don't understand your question. There is a theory structure for naturalistic evolution. There are actually several functional models. There is first classic Darwinian adaptationism, then early neo-Darwinian gene-driven models, followed by Mayr's synthesis which involves groups and isolation to develop traits (allopatric). The mechanism may be behavior-driven (Darwin), gene-driven, or a combination of the two (Fodor). One advantage we can take to challenge the naturalistic evolutionist is to ask which is correct and why it is correct and the others are in error. Few wish to struggle with the question. "Evolution" has become orthodoxy and goes without challenge. But there are enough good challenges that may be raised.
collinb
April 12, 2013, 06:25 AM PDT
Hi collinb- Could you please reference this alleged "evolutionary theory"?
Joe
April 12, 2013, 05:34 AM PDT
In the presentation of evolutionary theory there are two characteristics to point out. And they are closely related to comments here. #1 The first is that there is but one evolutionary model. No -- there are several models, all of which take divergent positions about how the process works. And that leads to #2 - Is there such a thing as "blind chance" when directionality demands that species get better and negative traits are necessarily jettisoned (Coyne)? ID (of which I am not an adherent due to lack of time to investigate) seems to be playing off the question of directionality -- things came to be for a reason and purpose. The response to ID seems to be an appeal to blind chance (probably better understood as "contingency" to the philosopher) and thus creating an alternative evolutionary mechanism to what is held by many esteemed scientists. So which is it?
collinb
April 12, 2013, 05:25 AM PDT
And Richie cupcake Hughes chimes in:
Please. I don’t want to attack a strawman, I’d like to see what an ID proponent regards as a “rigorously defined” and “calculated handily” example, an “exhaustive explanation”, if you will.
We have been asking evos to show us what they have that is a “rigorously defined” and “calculated handily” example, an “exhaustive explanation”, so we will know what you will accept. However Patrick, aka MathGrrl, ran away without providing that as asked. The point being, if we tell you, all you will do is what patty did: say "Nope, that doesn't do it", like the little baby that you are. So ante up, so we will know what it is you accept as a “rigorously defined” and “calculated handily” example, an “exhaustive explanation”. Or shut up already.
Joe
April 12, 2013, 04:01 AM PDT
F/N: maybe this "in plain English" remark from the next section, NFL, 3.8 (on what is now termed search for a search, S4S), may help those objectors to the FSCO/I concept who are at least open to correction:
. . . natural causes are characterized by necessity, chance or a combination of the two . . . the combination of chance and necessity is conceived in terms of | nondeterministic natural laws (cf. natural selection acting on random variation) . . . . [To describe these processes] stochastic processes constitute the most general mathematical formalism (by zeroing out the stochastic element one recovers a nonstochastic function and therefore necessity; by focusing purely on the stochastic element one recovers random sampling from a probability distribution and therefore pure chance) . . . [However, it can be shown that] neither nonstochastic functions nor random sampling from a probability distribution nor stochastic processes can do better than transmit already existing complex specified information . . . . The important thing is that functions map one set of items to another set of items and in so doing map a given item to one and only one item. Thus for a natural cause to "generate" CSI would mean for a function to map some item to an item that exhibits CSI. But that means that the complexity and specification in the item that got mapped onto gets pushed back to the item that got mapped. In other words, natural causes just push the problem back from the effect to the cause, which now in turn needs to be explained. It is like explaining a pencil in terms of a pencil-making machine. Explaining the pencil-making machine is as difficult as explaining the pencil. In fact the problem typically gets worse as one back-tracks CSI.| Stephen Meyer makes this point beautifully for DNA . . . any natural cause that brings about CSI in DNA must admit at least as much freedom as is in the DNA sequencing possibilities [--> NB: We can already see this with mRNA and its templating off of DNA with the aid of molecular nanomachines in the cell . . . ] (if not, DNA sequencing possibilities would be constrained by physico-chemical laws, which we know [--> per the "standard" Sugar-Phosphate bonds on the "spine" of DNA] they are not). Consequently, any CSI in DNA tracks back via natural causes to CSI in the antecedent circumstances responsible for the sequencing of DNA. [--> Where, we know that the needle in haystack or monkeys at keyboards search space challenge to find special zones T in vast spaces of configurations W, is such that blind chance and necessity are maximally implausible as viable sources of the CSI . . . ] To claim that natural causes have "generated" CSI is therefore totally misleading -- natural causes have merely shuffled around pre-existing CSI . . . . [For instance, [w]hen deterministic natural laws are represented as functions, the domain comprises initial and boundary conditions and the range comprises subsequent physical states at times t [i.e. a trajectory in phase space is determined by the acting forces and constraints in light of initial circumstance, hence Laplace's physicist demon who on knowing initial conditions would be able to predict the Newtonian universe's future] . . . [--> thus, we see the back-chaining effect directly. Similarly, in a stochastic case, under a given initial circumstance E0, the effect of combinations of chance with necessity would be to inject an additional, stochastic distribution on top of the trajectory in phase space. The effect is to make the distribution more fuzzy and unpredictable in the long term, but in neither case have we effectively addressed the pivotal issue of finding narrow target zones in large spaces of possibilities.
If we find the target, the best explanation for that is that we were set up to do so, i.e. design as opposed to blind search. In short, once we acknowledge the significance and relevance of deeply isolated and unrepresentative islands T in large config spaces W, both "gibberish in, gibberish out" and "smarts in, smarts out" obtain. Natural forces are a conduit for CSI, not an explanation thereof. In more modern terms, on average, a blindly chosen search algorithm is as likely to lead you away from zones T as to get you there. So, the next search is the search for the search. And S4S then addresses a much wider space and a regress of such, once we are in a domain of specified complexity, especially functionally specific complexity. For, such specificity naturally sharply constrains the number of acceptable arrangements of parts, and leads to narrow and unrepresentative zones being acceptable. To reach such, infinite regress of ever growing challenges is not good enough, getting THAT lucky at any one level is not a reasonable option, and going in circles is not even on the cards. The best explanation is the obvious and empirically backed-up one, design. ] [NFL, pp. 149 - 151, with added notes.]
The point is that if we see a successful "search," we can be morally certain that at some point smarts were injected, probably inadvertently or in a way that was overlooked. That is, if a search drastically outperforms what we would expect from random searches in a given relevant case [the search step requires 500 - 1,000 or more bits of info], the best explanation is that the S4S was solved intelligently. In general, evolutionary, hill-climbing algorithms START IN TARGET ZONES. They embed algorithms that provide oracles -- did EL's multiplication and comparison algors come from blind chance and mechanical necessity? Patently, not! They use fitness functions that generally give good incremental hill-climbing properties, instead of the sort of find-the-shoreline-of-an-island-of-function challenge that the FSCO/I issue is looking at. And such well-behaved functions are themselves deeply informationally rich and are created by intelligent designers who integrate them into their equally intelligently created algorithms. In short, there is active, search-success-enhancing information all over the place in such cases. Active info that comes, in every case examined to date, from the obvious source: intelligence. In short, cases of intelligent design are being trotted out to try to overturn the very thing that they exemplify: the empirically warranted adequate cause of FSCO/I is design. Of course, this pushes back the debate to the issue that objectors will want to claim that there is a continent of incrementally linked functional states, C, spannable by a Darwinist tree, not narrow and isolated zones of function T in the overall space W. That is an interesting speculation, but it runs into the challenge: first, how do we get (on good empirically grounded warrant) to the shores of the continent to begin with -- OOL? No answer. Next, what is the empirical evidence for C? Is it in a vast and dominant pattern of incremental forms among fossils and living creatures, whether in gross anatomy or in molecules? The evidence is in after 150 years, and it confirms Darwin's challenge: there are systematic gaps from top to bottom of the record of life -- molecular, fossil and living world. Despite the occasional screaming headlines on allegedly found missing links and the significantly misleading sequences and museum displays, there simply is not any good empirically grounded reason to say that we have shown incrementalism as the dominant feature of the world of life. Which is what we should expect on the C-model; indeed, if that were true, we should expect not a tree but a tangled bush like a Peano space-filling curve, a multidimensional continuum of life, not a hierarchy. Just the opposite is so. It is therefore worth noting the following revealing 2007 remark in the journal PNAS, by W. Ford Doolittle and Eric Bapteste:
Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation . . . [[Abstract, "Pattern pluralism and the Tree of Life hypothesis," PNAS February 13, 2007 vol. 104 no. 7 2043-2049.]
While these researchers do try to suggest a new evolutionary alternative (much as the punctuated equilibria advocates did in the 1970's), the basic message is plain. Model C just ain't so. We have to address good old Model T. We are in a different world. As a first try, why not look at intelligently directed evolution, maybe using something like viri to inject new innovations to trigger new branches or some similar form of front loading? Wouldn't that be within plausible reach of a molecular nanotech lab some generations beyond Venter et al? KF
kairosfocus
April 12, 2013, 12:42 AM PDT
Joe, WJM & EA: That need to explain, first and foremost, in detail and with observational evidential warrant, the blind chance and necessity origin of the encapsulated, gated, metabolising automaton with a coded-information-using von Neumann self-replicator facility, is pivotal. It also happens to be the challenge of finding and showing the root of the Darwinist tree of life. That is why the still unanswered Darwinist essay challenge starts with that issue. (Oh, how the folks at TSZ and elsewhere would like to change the subject or twist the matter into pretzels of confusion and polarisation, anything other than to straightforwardly answer it. And, this is a free kick at goal that has gone un-taken for over six months now; the seventh is coming up.) The persistent LACK of such a well-grounded answer puts the Darwinist tree of life front and centre as the first -- it is the ONLY illustration in the Origin as Darwin wrote it, and even there it conspicuously lacks a root -- and the foremost Darwinist Icon of Evolution, and the one that is most spectacularly busted and broken. For, for 150 years, there has not been a serious and empirically well grounded answer to the question: what is the root? We are in every position to simply challenge: first, no roots, no shoots, and no branches and twigs; so the conspicuous absence of sound empirical evidence for key major connectives in the TOL in those positions is telling. There is but one known, empirically grounded and analytically plausible source for the FSCO/I involved in the vNSR and other features of the required root organism of a tree of life: design. Timelines don't matter; the issue is roots. And once design sits at the table as best current empirically and analytically warranted explanation of the root of the TOL, we are in a position to argue that it is again the best explanation for the onward features, and that whether or not the main trunk and branch points are filled in empirically across future time, the explanation for the TOL as a whole is design. The only empirically warranted forms of evolution are micro-, well within the limits of existing body plans. That is, I am pointing out that body plan origin is just as much of a challenge. Again, timelines don't matter; indeed, degree of actual common descent does not matter [where modern YEC's often accept that their "kinds" are perhaps comparable to taxonomic families]; the pivotal issues are the origin of cell based life and of major body plans. That is why Meyer is so plainly right in his reply to Falk's critical review of SiC, as is cited in the OP:
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . .
Where also, underlying this, we have excellent reason to understand that FSCO/I is real, it is observable, it is associated with tight constraints on relevant configs from the space of possibilities, that this therefore results in a pattern of deeply isolated islands of function in the space of possible configs, and that blind search based on chance + necessity, on the gamut of the solar system or the observed cosmos, is not a plausible explanation for it. And BTW, despite the many attempts at continued misrepresentation, the FSCO/I concept was put on the table by leading OOL researchers Wicken and Orgel across the 1970's, and the best explanation was put on the table by 1981-2 by Hoyle. (Easily shown fact, cf here on. And yes, that is in the intro and summary module for the despised origins science 101, IOSE. In short, those who are pushing continued misrepresentations know or should know better. I have had occasion to highlight those facts here at UD many times. Which means that continuing misrepresentation in the teeth of what one knows or should know is a sobering responsibility.) Design theory as a school in science responds to these challenges. Responds with the only known adequate causal force to create FSCO/I, design. So either answer decisively by showing an empirically warranted case that OOL and OO body plans are indeed adequately explained on blind chance and necessity, or else stand aside and allow the empirically grounded best explanation to sit at the table as of right, not sufferance. The time for definitional gerrymandering and imposition of a priori Lewontinian materialism on science is over. Long since over. Philip Johnson is right, plainly right:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
KF
kairosfocus
April 11, 2013, 11:12 PM PDT
Lizzie via Joe @55: Thanks for the great laugh! I needed some humor this evening.
Eric Anderson
April 11, 2013, 07:44 PM PDT
Uh-oh, Lizzie is in a tizzie:
I do find it extraordinary that those who think they’ve found a flaw in evolutionary theory so regularly demonstrate just how little they understand the theory they are attempting to critique.
I find it extraordinary that people speak of an "evolutionary theory" yet cannot reference said theory. :razz: And in another post Lizzie sez:
I generate exactly what Dembski specifies.
Then write to Dembski and see if he agrees. Your say-so is meaningless to those who know much more about CSI than you ever will. And finally:
On the contrary, the whole attempt demonstrated very nicely what Dembski has in any case conceded, that evolutionary algorithms are perfectly good at finding Targets.
Yes, because they are DESIGNED TO DO SO, duh. Intelligent Design Evolution at work, Lizzie. Not Darwinian. Time to pull your head out and see it for what it is.
Joe
April 11, 2013, 06:38 PM PDT
Umm, you guys are missing the obvious- she starts out with a replicator, the very thing that needs to be explained in the first place. The next quotes are from No Free Lunch, pages 148-49:
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. Darwinist Richard Dawkins cashes out biological specification in terms of the reproduction of genes. Thus, in The Blind Watchmaker Dawkins writes, “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is…the ability to propagate genes in reproduction.” (bold added)
And as I said, Lizzie wasn't having any of that.
The central problem of biology is therefore not simply the origin of information but the origin of complex specified information. Paul Davies emphasized this point in his recent book The Fifth Miracle where he summarizes the current state of origin-of-life research: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” The problem of specified complexity has dogged origin-of-life research now for decades. Leslie Orgel recognized the problem in the early 1970s: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.” Where, then, does complex specified information or CSI come from, and where is it incapable of coming from? According to Manfred Eigen, CSI comes from algorithms and natural laws. As he puts it, “Our task is to find an algorithm, a natural law that leads to the origin of [complex specified] information.” The only question for Eigen is which algorithms and natural laws explain the origin of CSI. The logically prior question of whether algorithms and natural laws are even in principle capable of explaining the origin of CSI is one he ignores. And yet it is this very question that undermines the entire project of naturalistic origins-of-life research. Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin.
Those last two paragraphs make it clear that CSI is an ORIGINs issue. And again, Lizzie wasn't having any of that. It is all my opinion. :roll:
Joe
April 11, 2013, 05:58 PM PDT
William,
She smuggled information about the search goal (high products) into the landscape (natural selection) by deliberately choosing a selective process (what she should cull) in order to aid her particular search.
I couldn't agree more. The search landscape of the natural selection goal had information smuggled into it, using CSI. Or was it the natural selection landscape that was smuggled in by CSI using the search goal. Whatever. Either way, CSI was involved, which means there was an intelligent designer, which means her demonstration only proves that you need an intelligent designer to insert CSI into a search landscape goal.
lastyearon
April 11, 2013, 02:29 PM PDT
...And they really aren't creating CSI, because the goal is always *implied* in the algorithm. Liddle's "demonstration" is no different than Dawkins' weasel program. The way it is set up, it cannot fail to reach the goal, because of the implications of the algorithm.
CentralScrutinizer
April 11, 2013, 01:49 PM PDT
...Every time these characters try to demonstrate that "natural selection" can produce CSI, they demonstrate that artificial selection (carefully set up by an intelligent designer) can create CSI. Duh.
CentralScrutinizer
April 11, 2013, 01:46 PM PDT
WJM: She smuggled information about the search goal (high products) into the landscape (natural selection) by deliberately choosing a selective process (what she should cull) in order to aid her particular search.
Exactly. She calls it "natural" selection, but it's artificial selection.
CentralScrutinizer
April 11, 2013, 01:38 PM PDT
Eric, Because she generates a pattern, she thinks she has generated CSI according to Dembski's updated version in his paper "Specification: The Pattern That Signifies Intelligence". And BTW, obviously you don't know nuthin' bout information. It has nuthin' to do with no stinkin' substance or conveying- No kidding- read all about it: Information and Intelligent Design Information and Intelligent Design… again The funny part is in another post he uses the word "information" to equal meaning: Standardized Testing He has banned me because I have exposed him as a poseur on too many occasions. But as you can see it isn't too difficult to point out his errors. He also thinks that just because ID is about the design and not the designer, that ID somehow prevents people from trying to find out who the designer is. Even though I have explained it to him many times that is not the case. But anyway, for your enjoyment...
Joe
April 11, 2013, 12:55 PM PDT
Joe @46: Re: Lizzie's generation of CSI: Not only is she not generating anything that even approaches what we are talking about in terms of complex specified information -- things like code, language, semiotics, etc. -- she is also making the same mistake Dawkins made with his Weasel nonsense. To be sure, she isn't targeting a specific phrase, but she is rewarding in a way that moves the sequences toward her "House Jackpot" target. Everyone knows that if you have a target (and, please folks, it does not matter one whit whether that target is a specific sequence or a stochastic distribution) and run iterations, selecting those that converge toward the target, then -- surprise surprise -- you start to converge on the target. It is an exercise in irrelevance. The "solution" is not found by natural selection; it is smuggled into the initial programming. To use more technical terminology, the information that bounds and confines the search space came from some prior knowledge. There has been a conservation of information backstream to the ultimate source (the programmer in this case). Furthermore, whatever she has generated has no substance. The generated sequence doesn't mean anything, it doesn't do anything, it doesn't convey any information. It is just a bunch of random numbers that, by happenstance, multiply up to some arbitrary target threshold. It bears no resemblance to what we find in biology. Thus the whole attempt is an exercise in irrelevancy. Both in approach and in substance.
Eric Anderson
April 11, 2013 at 12:03 PM PDT
This line right here, Joe:
However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.
She smuggled information about the search goal (high products) into the landscape (natural selection) by deliberately choosing a selective process (what she should cull) in order to aid her particular search.
William J Murray
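As a rough sketch of the procedure quoted above, consider the following (assumptions mine: each "series" is represented here as 50 digits from 1 to 9 and scored by the product of its entries; the thread's exact encoding may differ). The structural point is visible in the selection step: "cull the lowest products" writes the goal of high products directly into the algorithm.

```python
import random

# A rough sketch of the quoted procedure, under assumptions: 100 series
# of 50 random digits, cull the 50 with the lowest products, refill the
# population with one mutated offspring per survivor, and repeat.

random.seed(0)
SERIES_LEN = 50
POP_SIZE = 100

def product(series):
    result = 1
    for x in series:
        result *= x
    return result

def mutate(series, rate=0.02):
    # Random point mutations: replace each digit with small probability.
    return [random.randint(1, 9) if random.random() < rate else x
            for x in series]

population = [[random.randint(1, 9) for _ in range(SERIES_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(500):
    # The "selection" step: keep the 50 series with the highest products...
    population.sort(key=product, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...and produce one mutated offspring from each survivor.
    population = survivors + [mutate(s) for s in survivors]

print("Best product after 500 generations:",
      product(max(population, key=product)))
```

Run it and the best product climbs steadily toward the all-nines maximum, not because the selection is "natural," but because the culling criterion was chosen with that outcome in mind.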
April 11, 2013 at 12:00 PM PDT
OK, Lizzie is at it again. She is still claiming to have demonstrated natural selection producing CSI, and references her thread Creating CSI with NS. I have already told her that her demonstration does no such thing, but she ain't having none of that. And she is too chicken to send it in an email to Dembski for his consideration. So perhaps some other UD regular could tell her that she has failed, and why. Not that she's going to believe you...
Joe
April 11, 2013 at 11:00 AM PDT
The only difficulty I had with the piece was the step from Contingent to Complex. That seemed a bit of a leap to me. What I appreciate, and wish were clarified further (an upcoming project of my own), is that the Darwinian concept of Directionality requires a direction set by something abstract. That abstraction is Information. Those who argue against the DNA/RNA data being Information would then have to break from, or seriously redefine, directionality, at which point the Darwinian system suffers a serious breakdown.
collinb
April 11, 2013 at 9:27 AM PDT
BD: Pardon, I have only a moment. I note, first, that the view is inherently self-referential because it includes you in the set of things referred to. That is always a tricky situation, as it opens up both circularity and possible incoherence. Second, the problem of the nth vs. the (n+1)th view still obtains. Sorry to be so short just now. KF
kairosfocus
April 10, 2013 at 5:22 PM PDT
KF, re #42:
Coming back to your own view so far as I have made it out, there is a similar position. For, your view reduces the world of our experiences of external reality to in effect a Plato’s Cave delusion.
I disagree. In my view, the world we perceive, our perceptions, are given to us through the intimate connection between our minds and God's mind. There is nothing self-referential about it, as I have already explained to you. The regularities we perceive in our perceptions are simply the "rules" which govern the behavior of this world that God creates for us in which to play, work, learn, and grow. Your philosophy, dualism, however, contains a major and, in my view, insurmountable obstacle to acceptance: the mind-body problem. Please explain to me how it is that mind (intentions, thoughts, etc.) can possibly affect inert physical matter (our brains), and how our brains are capable of producing any kind of effect in our minds (sensations, emotions, etc.). Until you can give a satisfactory answer to that question, your philosophy has a serious hole in it. It is a defeater as far as I'm concerned. Answer my question, KF. You have been studiously avoiding it until now.
Bruce David
April 10, 2013 at 3:02 PM PDT
BD: In the universe of discourse we must address, the question of grounding the human mind as a reasonably effective cognitive system does arise. For, we have a persistent evolutionary materialism that seeks to pin mind down to brain and CNS in action. In that setting, the following from Leibniz's Monadology, i.e. the analogy of the mill [HT: Frosty], is quite apt:
14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . . 16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . . 17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.
We may bring this up to date by reference to more modern views of elements and atoms, through an example from chemistry. For instance, once we understand that ions may form and can pack themselves into a crystal, we can see how salts with their distinct physical and chemical properties emerge from atoms like Na and Cl, etc., per natural regularities (and, of course, how the compounds so formed may be destroyed by breaking apart their constituents!). However, the real issue evolutionary materialists face is how to get to mental properties that accurately and intelligibly address and bridge the external world and the inner world of ideas, relative to a worldview that accepts only physical components and must therefore arrive at everything else by composition of elementary material components and their interactions, per the natural regularities and chance processes of our observed cosmos. Now, obviously, if the view is true, this will be possible; but if it is false, then it may overlook other possible elementary constituents of reality and their inner properties -- which is precisely what Leibniz was getting at. Moreover, as C. S. Lewis aptly put it (cf. Reppert's discussion here), the physical relationship between cause and effect is utterly distinct from the conceptual and logical one between ground and consequent, and thus we have no good reason to trust the deliverances of the first to have anything credible to say about the second. Or, as Reppert aptly brings out:
. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
That is the naturalist's dilemma: he must use his mind to reason, and must trust his capacity to perceive accurately and to know; but in his scheme of things, the ground on which such trust must stand is undercut by the frame of the system itself. His scheme becomes self-referentially incoherent and, I daresay, absurd. The incoherence emerges in many ways and by many paths, and is often projected onto the opponents of the schemes of the likes of a Crick or a Marx or a Freud or a Skinner or a Dawkins, etc.; but on reflection it is apparent that the same self-referential knife cuts both ways.

Coming back to your own view so far as I have made it out, there is a similar position. For, your view reduces the world of our experiences of external reality to, in effect, a Plato's Cave delusion. No scheme that does that escapes self-referentiality and an explosively self-defeating spiral of challenges: why should we accept the credibility of perceptions, beliefs and arguments at level n + 1 if those of levels 1 to n have fallen to the acid of doubt and dismissal?

Instead, it seems much wiser to me to accept that the consensus of our senses, experiences and insights is capturing something real, however prone we are to err. Indeed, that fact of error itself turns into a pivot, a point of undeniably certain truth and warranted, self-evident knowledge [to deny that error exists entails that error exists]. Thus, schemes of thought that deny external reality as what is there to be in error about, or deny truth as that which accurately refers to reality, or dismiss knowledge as that which warrants beliefs concerning reality (in some cases to undeniable certainty), all fail. In particular, the notion that, because we construct a system dominated by a priori materialism (often wearing the lab coat of scientism, on which ideologically materialist science embraces and reveals knowledge whilst metaphysics, epistemology, philosophy and "theology" can be derided and dismissed across the board as outdated and dubious speculations), we thereby escape such challenges, ends in question-begging and self-referential incoherence. The artificial construct, institutional science dominated by scientism and unexamined materialism (let's not fool ourselves), stands on a fatally cracked foundation. And, given just how widespread such schemes are in our day, the analysis of the implications of the undeniable reality that error exists cuts a wide mowing swath indeed across the contemporary marketplace of ideas and values.

So, back to basics: first principles of right reason, self-evident truths, the possibility of real knowledge, and a much more serious respect for old-fashioned common good sense. It is time to notice that the chains of mental slavery have been snapped, and that we are no longer tied to the post in the cave of shifting shadow-shows and the manipulative power games that stage them. So, let us step up into the sunshine, and step out of the shade. For, this is one time that we can get a breakthrough to truth, and to liberation thereby: you shall know the Truth, and the Truth shall make ye free.

But that requires understanding why the same Worthy, in that same context, warned his interlocutors that, because he spoke the truth, they were unable to hear and understand what he had to say; indeed, they were violently inclined to object and oppose. As he said in his famous Sermon on the Mount, the eye is the lamp of the body: if our eyes are good, we are full of light; but if they are bad, so bad that what we think is light is in fact darkness, how great is our darkness. I think Jesus knew exactly what the Greek thought on enlightenment, so decisively shaped by Plato's parable of the Cave, was all about, and the spreading influence of such ideas, e.g. from Sepphoris, a major Gentile centre in Galilee. So, he spoke at several levels, some corrective to Hebrew caves, and some to Greek, Roman and wider Gentile ones.

The gospel is light. Our problem is that light has come, but too often we choose darkness instead of light, because our deeds are evil and for fear that our addiction to evil will be exposed. Indeed, we are often confused by light, and even angered by it, sometimes to the point of murderous rage. As happened to him. But, that was Friday; Sunday was a-coming. Sunday has come, with the duly prophesied resurrection power (of which we have good warrant), so let us be as the one Jesus spoke of who lives by the truth -- yes, in the teeth of a day that derides and dismisses truth itself -- and so will walk into the light, so that it may be manifest that what he does is done through the grace and redemption of God. And so, let us restore our civilisation to light, rather than surrendering it to the ever-advancing darkness. KF
kairosfocus
April 10, 2013 at 1:05 AM PDT
KF, re. 39:
...the further topic runs into the same challenge, as cognition cannot be trustworthy if constrained and controlled by material factors and forces.
I agree. However, that discussion is also properly within the domain of philosophy, not science.
On self-referential incoherence. We do not know that cognition is a physical process, whether or not it uses physical processes. Cf Leibniz’s mill and the problems of badly set up computer processors. KF
I'm not sure what you're getting at here, but Berkeley's philosophy and mine contain only mind (ours and God's), so the question of the trustworthiness of the physical brain does not arise.
Bruce David
April 9, 2013 at 8:25 AM PDT
Box, re. #38:
Bruce David, don’t you think that ID is fine with the alien embodied designer as a candidate? I’m not sure but if I remember correctly S.Meyer said something along that line.
Technically and strictly speaking, yes. However, that answer sidesteps the fundamental question, which is the source of the FSCI found in living things; for then the source would be the brain (or equivalent organ) of the embodied alien, and we would need to discover the source of that to answer the question. It merely pushes the question back in time. And if one goes back far enough, you get to the brain that started the sequence, because eventually there is not enough time since the Big Bang for a brain to have come into being. For the source of that, the only two valid answers are "We don't know" and "It was designed and built by some non-embodied intelligence or intelligences." Which one you choose depends on your answer to the non-scientific question, "Is it possible for there to be non-embodied intelligence acting in the physical universe?"
Bruce David
April 9, 2013 at 8:13 AM PDT
BD: That's another topic, but the further topic runs into the same challenge, as cognition cannot be trustworthy if constrained and controlled by material factors and forces. On self-referential incoherence. We do not know that cognition is a physical process, whether or not it uses physical processes. Cf Leibniz's mill and the problems of badly set up computer processors. KF
kairosfocus
April 9, 2013 at 2:33 AM PDT
Bruce David, don't you think that ID is fine with the alien embodied designer as a candidate? I'm not sure, but if I remember correctly S. Meyer said something along that line. I do agree that there are obvious problems connected with this candidate.
Box
April 8, 2013 at 7:55 PM PDT