
Is the design inference fatally flawed because our uniform, repeated experience shows that a designing mind is based on or requires a brain?


In recent days, this has been a hotly debated topic here at UD, raised by RDFish (aka AI Guy).

His key contention is perhaps best summarised from his remarks at 422 in the first “understand us” thread:

we do know that the human brain is a fantastically complex mechanism. We also know that in our uniform and repeated experience, neither humans nor anything else can design anything without a functioning brain.

I have responded from 424 on, noting there for instance:

But we do know that the human brain is a fantastically complex mechanism. We also know [–> presumably, have warranted, credibly true beliefs] that in our uniform [–> what have you, like Hume, locked out ideologically here] and repeated experience, neither humans nor anything else can design anything without a functioning brain.

That is, it seems that the phrasing of the assertion is loaded with some controversial assumptions, rather than being a strictly empirical inference (which is what it is claimed to be).

By 678, I outlined a framework for how we use inductive logic in science to address entities, phenomena or events that we did not or cannot directly observe (let me clean up a symbol):

[T]here is a problem with reasoning about how inductive reasoning extends to reconstructing the remote past. Let’s try again:

a: The actual past A leaves traces t, which we observe.

b: We observe a cause C that produces consequence s which is materially similar to t

c: We identify that on investigation, s reliably results from C.

d: C is the only empirically warranted source of s.
_____________________________

e: C is the best explanation for t.

By 762, this was specifically applied to the design inference, by using substitution instances:

a: The actual past (or some other unobserved event, entity or phenomenon . . . ) A leaves traces t [= FSCO/I where we did not directly observe the causal process, say in the DNA of the cell], which we observe.

b: We observe a cause C [= design, or purposefully directed contingency] that produces consequence s [= directly observed cases of creation of FSCO/I, say digital code in software, etc] which is materially similar to t [= the DNA of the cell]

c: We identify that on empirical investigation and repeated observation, s [= FSCO/I] reliably results from C [= design, or purposefully directed contingency].

d: C [= design, or purposefully directed contingency] is ALSO the only empirically warranted source of s [= FSCO/I].
_____________________________

e: C [= design, or purposefully directed contingency] is the best explanation for t [= FSCO/I where we did not directly observe the causal process, say in the DNA of the cell], viewed here as an instance of s [= FSCO/I].

This should serve to show how the design inference works as an observationally based, inductive, scientific exercise. That is, a cause that has actually been observed to be capable of producing an effect, and to be characteristic of it, can reasonably be inferred to be acting when we see that effect.

So, by 840, I summed up the case on mind and matter, using Nagel as a springboard:

Underlying much of the above is the basic notion that we are merely bodies in motion with an organ that carries out computation, the brain. We are colloidal intelligences, and in this context RDF/AIG asserts confidently that our universal and repeated experience of the causing of FSCO/I embeds that embodiment.

To see what is fundamentally flawed about such a view, as I have pointed out above but again need to summarise, I think we have to start from the issue of mindedness, and from our actual experience of mindedness. For it simply does not fit the materialist model, which lacks an empirically warranted causal dynamic demonstrated to be able to do the job — ironically, for reasons connected to the inductive, evidence-rooted grounds of the design inference. (No wonder RDF/AIG is so eager to be rid of that inconvenient induction.)

The mind, in this view, is the software of the brain, which, in effect, by sufficiently sophisticated looping has become reflexive and self-aware. This draws on the institutional dominance of the a priori evolutionary materialist paradigm in our day, but that means as well that it collapses into the inescapable self-referential incoherence of that view. It also fails to meet the tests of factual adequacy, coherence and explanatory power.

Why do I say such?

First, let us observe a sobering point made ever so long ago by Haldane:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p. 209.]

In essence, without responsible freedom (the very opposite of what would be implied by mechanical processing and chance) there is no basis for rationality, responsibility and the capacity to think beyond the determination of the accidents of our programming. Not to mention, there is no empirically based demonstration of the capability of blind chance and mechanical necessity to write the required complex software through incremental chance variations and differential reproductive success. All that is simply assumed, explicitly or implicitly, in a frame of thought controlled by evolutionary materialism as an a priori. So, we have a problem of undemonstrated causal adequacy right at the outset. (Not that that will be more than a speed-bump for those determined to proceed under the materialist reigning orthodoxy. But we should note that the vera causa principle has been violated: we do not have empirically demonstrated causal adequacy here. By contrast, such brain software as is doubtless there is blatantly chock-full of FSCO/I, and the hardware involved is likewise chock-full of the same. The only empirically warranted cause adequate to create such — whether or not RDF likes to bury it in irrelevancies — is design. We must not forget that inconvenient fact. [And we will in due course again speak to the issue as to whether empirical evidence warrants the conclusion that designing minds must be based on or require brains.])

A good second point is a clip from Malcolm Nicholson’s review of the eminent philosopher Nagel’s recent Mind and Cosmos:

If we’re to believe [materialism dominated] science, we’re made of organs and cells. These cells are made up of organic matter. Organic matter is made up of chemicals. This goes all the way down to strange entities like quarks and Higgs bosons. We’re also conscious, thinking things. You’re reading these words and making sense of them. We have the capacity to reason abstractly and grapple with various desires and values. It is the fact that we’re conscious and rational that led us to believe in things like Higgs bosons in the first place.

But what if [materialism-dominated] science is fundamentally incapable of explaining our own existence as thinking things? What if it proves impossible to fit human beings neatly into the world of subatomic particles and laws of motion that [materialism-dominated] science describes? In Mind and Cosmos (Oxford University Press), the prominent philosopher Thomas Nagel’s latest book, he argues that science alone will never be able to explain a reality that includes human beings. What is needed is a new way of looking at and explaining reality; one which makes mind and value as fundamental as atoms and evolution . . . .

[I]t really does feel as if there is something “it-is-like” to be conscious. Besides their strange account of consciousness, Nagel’s opponents also face the classic problem of how something physical like a brain can produce something like a mind. Take perception: photons bounce off objects and hit the eye, cones and rods translate this into a chemical reaction, this reaction moves into the neurons in our brain, some more reactions take place and then…you see something. Everything up until seeing something is subject to scientific laws, but, somewhere between neurons and experience, scientific explanation ends. There is no fact of the matter about how you see a chair as opposed to how I see it, or a colour-blind person sees it. The same goes for desires or emotions. We can look at all the pieces leading up to experience under a microscope, but there’s no way to look at your experience itself or subject it to proper scientific scrutiny.

Of course philosophers sympathetic to [materialism-dominated] science have many ways to make this seem like a non-problem. But in the end Nagel argues that simply “the mind-body problem is difficult enough that we should be suspicious of attempts to solve it with the concepts and methods developed to account for very different kinds of things.”

In short, it is not just a bunch of dismissible IDiots off in some blog somewhere: here is a serious issue, one that cannot be so easily brushed aside and answered with the usual promissory notes on the inevitable progress of materialism-dominated science.

It is worth noting also that Nagel rests his case on the issue of sufficiency, i.e. if something A is, we may ask why it is, and seek and expect a reasonable and adequate answer.

That is a very subtly powerful, self-evident first principle of right reasoning indeed [cf. here on, again], and one that many objectors to, say, cosmological design on fine tuning would be wise to pay heed to.

Indeed, down that road lies the issue of contingency vs necessity of being, linked to the power of cause.

With the astonishing result that necessary beings are possible — start with the truth in the expression 2 + 3 = 5 — and, by virtue of not depending on on/off enabling causal factors, they are immaterial [matter, post E = m*c^2 etc, is blatantly contingent . . . ] and without beginning or end; they could not not-exist, on pain of absurdity. (If you doubt this, try asking yourself when 2 + 3 = 5 began to be true, whether it can cease from being so, and what would follow from denying it to be true. [Brace for the shock of what lurked behind your first lessons in arithmetic!])

And, we live in a cosmos that is — post big bang, and post E = m*c^2 etc — credibly contingent, so we are looking at a deep causal root of the cosmos that is a necessary being.

Multiply by fine tuning [another significant little link with onward materials that has been studiously ignored above . . . ] and even through a multiverse speculation, we are looking at purpose, mind, immateriality, being without beginning or end, with knowledge, skill and power that are manifest in a fine tuned cosmos set up to facilitate C-chemistry aqueous medium cell based life.

{Let me add a summary diagram:}

[Figure: extended cosmological design inference]

That is — regardless of RDF’s confident manner and drumbeat declarations — it is by no means a universal, experience-based conclusion that mind requires or is inevitably based on brains or some equivalent material substrate. (Yet another matter RDF seems to have studiously ignored.)

Nor are we finished with that review:

In addition to all the problems surrounding consciousness, Nagel argues that things like the laws of mathematics and moral values are real (as real, that is, as cars and cats and chairs) and that they present even more problems for science. It is harder to explain these chapters largely because they followed less travelled paths of inquiry. Often Nagel’s argument rests on the assumption that it is absurd to deny the objective reality, or mind-independence, of certain basic moral values (that extreme and deliberate cruelty to children is wrong, for instance) or the laws of logic. Whether this is convincing or not depends on what you think is absurd and what is explainable. Regardless, this gives a sense of the framework of Nagel’s argument and his general approach.

Of course, the root premises here are not only true but self-evident: one denies them only at peril of absurdity.

A strictly materialistic world — whether explicit or implicit, lurking in hidden assumptions and premises — cannot ground morals [there is no matter-energy, space-time IS that can bear the weight of OUGHT; only an inherently good Creator God can do that . . . ]. Similarly, such a world runs into a basic problem with the credibility of mind, as already seen.

Why, then, should we even think this a serious option, given the inability to match reality, the self-referential incoherence that has come out, and the want of empirically grounded explanatory and causal power to account for the phenomena we know from the inside out: we are conscious, self-aware, minded, reasoning, knowing, imagining, creative, designing creatures who find ourselves inescapably morally governed?

Well, when — as we may read in Acts 17 — Paul started on Mars Hill c. AD 50 by exposing the fatally cracked root of the classical pagan and philosophical view [its publicly admitted and inescapable ignorance of the very root of being, the very first and most vital point of knowledge . . . ], he was literally laughed out of court.

But, the verdict of history is in: the apostle built the future.

It is time to recognise the fatal cracks in the evolutionary materialist reigning orthodoxy and its fellow travellers, whether or not they are duly dressed up in lab coats. Even, fancy ones . . .

It seems the time has come for fresh thinking. END

ADDENDUM, Oct 26th: The following, by Dr Torley (from comment 26), is so material to the issue that I add it to the original post. It should be considered as a component of the argument in the main:

_________

>>My own take on the question is as follows:

(a) to say that thinking requires a brain is too narrow, for two reasons:

(i) since thinking is the name of an activity, it’s a functional term, and from a thing’s function alone we cannot deduce its structure;

(ii) the argument would prove too much, as it would imply that Martians (should we ever find any) must also have brains, which strikes me as a dogmatic assertion;

(b) in any case, the term “brain” has not been satisfactorily defined;

(c) even a weaker version of the argument, which claims merely that thinking requires an organized structure existing in space-time, strikes me as dubious, as we can easily conceive of the possibility that aliens in the multiverse (who are outside space-time) might have created our universe;

(d) however, the “bedrock claim” that thinking requires an entity to have some kind of organized structure, with distinct parts, is a much more powerful claim, as the information created by a Designer is irreducibly complex, and it seems difficult to conceive of how such an absolutely simple entity could create something irreducibly complex, or how such an entity could create, store and process various kinds of complex information in the absence of parts (although one might imagine that it could store such information off-line);

(e) however, all the foregoing argument shows that the Designer is complex: what it fails to show is that the Designer exists in space-time, or has a body that can be decomposed into separate physical parts;

(f) for all we know, the Designer might possess a different kind of complexity, which I call integrated complexity, such that the existence of any one part logically implies the existence of all the other parts;

(g) since the parts of an integrated complex being would be inseparable, there would be no need to explain what holds them together, and thus no need to say that anyone designed them;

_______________________________________

(h) thus even if one rejected the classical theist view that God is absolutely simple, one could still deduce the existence of a Being possessing integrated complexity, and consistently maintain that integrated complexity is a sufficient explanation for the irreducible complexity we find in Nature;

(i) in my opinion, it would be a mistake for us to try to resolve the question of whether the Designer has parts before making the design inference, as that’s a separate question entirely.  >>

__________

The concept of integrated, inseparable complexity is particularly significant.

____________

ADDENDUM 2: A short note on Bayes’ Theorem clipped from my briefing note, as VJT is using Bayesian reasoning explicitly below:

We often wish to find evidence to support a theory, where it is usually easier to show that the theory [if it were for the moment assumed true] would make the observed evidence “likely” to be so [on whatever scale of weighting subjective/epistemological “probabilities” we may wish etc . . .].

So in effect we have to move from p[E|T] to p[T|E], i.e. from “probability of evidence given theory” to “probability of theory given evidence,” which last is what we actually seek. (Notice also how easily the former expression p[E|T] “invites” the common objection that design thinkers are “improperly” assuming an agent at work ahead of looking at the evidence, to infer to design. Not so, but seeing why takes a little explanation.)

Let us therefore take a quick look at the algebra of Bayesian probability revision and its inference to a measure of relative support of competing hypotheses provided by evidence:

a] First, look at p[A|B] as the ratio: (fraction of the time we would expect/observe A AND B to jointly occur)/(fraction of the time B occurs in the POPULATION).

–> That is, for ease of understanding in this discussion, I am simply using the easiest interpretation of probabilities to follow, the frequentist view.

b] Thus, per definition given at a] above: 

p[A|B] = p[A AND B]/p[B]

or, p[A AND B] = p[A|B] * p[B]

c] By “symmetry,” we see that also:

p[B AND A] = p[B|A] * p[A],

where the two joint probabilities are plainly the same, so:

p[A|B] * p[B] = p[B|A] * p[A],

which rearranges to . . .

d] Bayes’ Theorem, classic form: 

p[A|B] = (p[B|A] * p[A]) / p[B]

e] Substituting, E = A, T = B, E being evidence and T theory:

p[E|T] = (p[T|E] * p[E])/ p[T],

p[T|E] — probability of theory (i.e. hypothesis or model) given evidence seen — being here by initial simple “definition,” turned into L[E|T] by defining L[E|T] = p[T|E]:

L[E|T] is (by definition) the likelihood of theory T being “responsible” for what we observe, given observed evidence E [NB: note the “reversal” of how the “|” is being read]; at least, up to some constant. (Cf. here, here, here, here and here for a helpfully clear and relatively simple intro. A key point is that likelihoods allow us to estimate the most likely value of variable parameters that create a spectrum of alternative probability distributions that could account for the evidence: i.e. to estimate the maximum likelihood values of the parameters; in effect by using the calculus to find the turning point of the resulting curve. But, that in turn implies that we have an “agreed” model and underlying context for such variable probabilities.)

Thus, we come to a deeper challenge: where do we get agreed models/values of p[E] and p[T] from? 

This is a hard problem with no objective consensus answers, in too many cases. (In short, if there is no handy commonly accepted underlying model, we may be looking at a political dust-up in the relevant institutions.)

f] This leads to the relevance of the point that we may define a certain ratio,

LAMBDA = L[E|h2]/L[E|h1],

This ratio is a measure of the degree to which the evidence supports one or the other of competing hyps h2 and h1. (That is, it is a measure of relative rather than absolute support. Onward, as just noted, under certain circumstances we may look for hyps that make the data observed “most likely” through estimating the maximum of the likelihood function — or more likely its logarithm — across relevant variable parameters in the relevant sets of hypotheses. But we don’t need all that for this case.)

g] Now, by substitution A –> E, B –> T1 or T2 as relevant:

p[E|T1] = p[T1|E]* p[E]/p[T1]

and 

p[E|T2] = p[T2|E]* p[E]/p[T2]

so also, the ratio:

p[E|T2]/ p[E|T1]

= {p[T2|E] * p[E]/p[T2]}/ {p[T1|E] * p[E]/p[T1]}

= {p[T2|E] /p[T2]}/ {p[T1|E] /p[T1]} = {p[T2|E] / p[T1|E] }*{p[T1]/p[T2]}

h] Thus, rearranging:

p[T2|E]/p[T1|E]  = {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)}

i] So, substituting L[E|Tx] = p[Tx|E]:

L[E|T2]/ L[E|T1] = LAMBDA = {p[E|T2]/ p[E|T1]} * {P(T2)/P(T1)}

Thus, the lambda measure of the degree to which the evidence supports one or the other of competing hyps T2 and T1 is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the “assuming the theory” objection, as already noted), times the ratio of the prior probabilities of the theories being so. [In short, if we have relevant information we can move from probabilities of evidence given theories to, in effect, relative probabilities of theories given evidence, in light of an agreed underlying model.]

Of course, therein lieth the rub.
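To make the algebra above concrete, here is a minimal worked sketch (in Python, purely for illustration; the two hypotheses and all of the numbers are invented, not drawn from any real case):

from math import isclose

# Assumed (made-up) priors for two competing hypotheses T1 and T2
p_T1 = 0.7
p_T2 = 0.3

# Assumed (made-up) conditional probabilities of the evidence E under each hypothesis
p_E_given_T1 = 0.01
p_E_given_T2 = 0.40

# Total probability of the evidence: p[E] = p[E|T1]*p[T1] + p[E|T2]*p[T2]
p_E = p_E_given_T1 * p_T1 + p_E_given_T2 * p_T2

# Posteriors via Bayes' Theorem (step d], with A = Tx and B = E): p[Tx|E] = p[E|Tx]*p[Tx]/p[E]
p_T1_given_E = p_E_given_T1 * p_T1 / p_E
p_T2_given_E = p_E_given_T2 * p_T2 / p_E

# LAMBDA per steps h] and i]: likelihood ratio times the ratio of the priors
LAMBDA = (p_E_given_T2 / p_E_given_T1) * (p_T2 / p_T1)

print(p_T2_given_E / p_T1_given_E)                   # about 17.14
print(LAMBDA)                                        # same value
assert isclose(LAMBDA, p_T2_given_E / p_T1_given_E)  # confirms the rearrangement in h] and i]

Note that the result swings directly on the assumed priors p[T1] and p[T2]: change them and LAMBDA changes with them, which is exactly where the lack of agreed models and values bites.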

Comments
None of those languages support reflection.
Precisely my point. You think I chose those languages at random? So programs written in those languages have no concept of self and are incapable of "reflection" while programs written in other languages do have a concept of self and are capable of "reflection"? Wouldn't this at least indicate that "reflection" is not a property of computer programs, but of programming languages?Mung
November 1, 2013, 06:36 PM PDT
F/N: I see this thread continues. I also see a comment that fails to appreciate the specific kind of inductive argument being used, and so draw attention to it again, here clipping 52 above:
[T]here is a problem with reasoning about how inductive reasoning extends to reconstructing the remote past. Let’s try again:
a: The actual past A leaves traces t, which we observe.
b: We observe a cause C that produces consequence s which is materially similar to t
c: We identify that on investigation, s reliably results from C.
d: C is the only empirically warranted source of s.
_____________________________
e: C is the best explanation for t.
This is an inductive inference on tested, found reliable sign. Now, tell me, is it true or false that we have billions of cases of FSCO/I? T. Is it true that we do routinely observe that intelligent design is a cause of FSCO/I? T, again. Is it true that anything else -- specifically blind watchmaker thesis chance and necessity -- has actually been observed to cause FSCO/I? N. So is it a fair and well grounded induction that FSCO/I is a reliable index of design as cause? Y. When we see FSCO/I are we entitled to use the Newtonian uniformity principle from his four rules of reasoning to conclude that such FSCO/I is produced by design? Y, subject of course to test by potential counter example. Of which dozens have now fallen by the wayside. So, what is the problem, then -- apart from the dominance of a priori materialism wedded to the notion that blind chance and mechanical necessity have created life and have accounted for the body plans we see on earth? KFkairosfocus
November 1, 2013, 04:02 AM PDT
Hmmm (emphases added).... RDFish @ 185:
I am pointing out that rather than take the evidence of our experience and follow that where it leads, ID simply defines intelligence as immaterial, and refuses to admit that our experience contradicts the notion that there is something immaterial that can operate without complex mechanism.
And where does the evidence of "our experience" (sez he) lead?.... RDFish@130:
I’ve been very clear about this: There are no successful, empirically supported explanations for first life OR first life on Earth. Of the various explanations on hand, the least terrible theory is that life on Earth came from someplace else… but obviously that is a ridiculously bad theory too. And if you want to actually explain the origin of life, then we have no theory that is consistent with our experience.
Must be nihilism then...
Nihilism can also take epistemological or ontological/metaphysical forms, meaning respectively that, in some aspect, knowledge is not possible... http://en.wikipedia.org/wiki/Nihilism
*yawn*jstanley01
October 31, 2013, 09:55 PM PDT
Mung:
R0bb:
I use the term “reflection” every day to refer to a program’s ability to acquire information about itself.
lol. so? Why the quotes?
Because I was referring to the term rather than to reflection itself. Why lol?
You program in basic? fortran? cobol? c? Which of those languages support “reflection” as you use the term “every day” (while you’re not reflecting on how and when you use the term)?
None of those languages support reflection. See the Wikipedia article that you quoted.
You’re either confused or equivocating. No surprise there.
Why "no surprise"? If I'm confused or equivocate often, please point to some examples so I can correct myself.
A program has no concept of itself. There is no “information about itself” to be acquired. Perhaps you’ll educate us on the meaning and use of self in reflection.
When a program gets, say, a list of its own types, or an abstract syntax tree of its own code, how is it not acquiring information about itself? If you don't like the word "itself", then I guess we could say "program X acquires information about program X." But that seems a little silly when the word "self" is used routinely to describe this concept, including in the Wikipedia article that you quoted.R0bb
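For what it is worth, here is a minimal sketch of the kind of thing meant (Python is assumed purely for illustration; it ships with an inspect module for exactly this purpose):

import inspect

class Widget:
    # A small example class that the running program will examine
    def spin(self):
        return "spinning"

# The program queries its own structure at runtime:
methods = inspect.getmembers(Widget, predicate=inspect.isfunction)
print([name for name, _ in methods])     # prints ['spin'], a list of its own methods
print(inspect.getsource(Widget.spin))    # the source text of its own code (when run from a file)
print(type(Widget).__name__)             # prints 'type': even the class itself is an object it can examine

Whether one wants to call that "a concept of self" is the point in dispute; the technical usage of "reflection" claims only that a program acquires and uses information about its own structure.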
October 30, 2013, 11:12 PM PDT
StephenB:
RDF had no difficulty in saying that computers cannot experience consciousness and he didn’t feel the need to consult computer theory to address that question. Are you saying that you cannot answer that same question unless it is reframed in technical language?
The only in-principle computational limit that I know of is the halting problem and its equivalents. I suppose that it could be halting-problem-equivalent for a computer to "reflect on its nature, worth, and purpose," depending on what precisely that phrase means. So yes, I would have to have it reframed in technical language. Of course, physical computers are also limited by the laws of physics -- Heisenberg uncertainty, speed of light, etc. But I don't think we're talking about violations of the laws of physics, are we? Nor are we talking about technological limits of today's computers, right?R0bb
October 30, 2013, 10:34 PM PDT
R0bb:
I use the term “reflection” every day to refer to a program’s ability to acquire information about itself.
lol. so? Why the quotes? You program in basic? fortran? cobol? c? Which of those languages support "reflection" as you use the term "every day" (while you're not reflecting on how and when you use the term)?
In computer science, reflection is the ability of a computer program to examine (see type introspection) and modify the structure and behavior (specifically the values, meta-data, properties and functions) of an object at runtime. - Wikipedia
You're either confused or equivocating. No surprise there. A program has no concept of itself. There is no "information about itself" to be acquired. Perhaps you'll educate us on the meaning and use of self in reflection.Mung
October 30, 2013, 06:25 PM PDT
Funny how "the argument" has evolved yet again. Previously, RDFish's induction was better than Meyer's induction. Now it's induction in general that is under attack. As long as we don't lose sight of the goalposts we can keep moving them! Gee, I hope that doesn't depend on induction.Mung
October 30, 2013, 06:08 PM PDT
Hi RDFish, I've finally got some time to respond to your comments, so here goes. Apologies for the delay.
ET-ancestor theory accounts for life on Earth exactly the same way ET-engineer theory does. Either way, CSI for biological systems somehow arrives on Earth from somewhere else, and both involve the existence of extra-terrestrial life forms.
ET-ancestor theory assumes the existence of life on other planets; ET-engineer theory assumes the existence of intelligent life. Hence the prior probability of the former is higher. On the other hand, ET-engineer theory possesses a causally adequate mechanism that is reliably capable of creating life on Earth, while ET-ancestor theory postulates a mechanism that may dispatch life to Earth, but is extremely unlikely to do so.
The problem with your ET-engineer theory is that, as you conceded, the prior probability of that hypothesis is much lower than ET-ancestor theory.
Yes, that is a problem, but that theory is not my theory. I believe in a Designer Who is transcendent. Even if we assume that the prior probability of such a Designer is very low, the lowest it could be (on the basis of our experience) is 10^(-120). If we can construct a cosmic fine-tuning argument which shows that the prior probability of the universe's physical parameters having the values they do is far, far less than 10^(-120) - as indeed we can (see this article by Rich Deem here) - then belief in God becomes rational.
ET-ancestor theory and ET-engineer theory are BOTH compatible with directed panspermia, VJT! Do you not realize this? If humans packaged up some of human genetic material and sent it to another planet, what would account for the CSI in that DNA? Not human engineering of course! All we would have done is ship it off – we didn't invent our own DNA!
ET-ancestor theory is compatible with panspermia, but by definition, it is not compatible with directed panspermia. That would be a version of ET-engineer theory. By the way, sending human DNA to Earth would not be a good idea: even if it arrived safely, who would nurture it?
WE HAVE NO EMPIRICAL EVIDENCE FOR ANY OF THESE THEORIES. PERIOD.
Why the caps? Are you claiming that all knowledge (aside from mathematics and logic) has to be based on empirical evidence? You should know how problematic that claim is. Multiverse, anyone?
And by the way, beyond that, your use of the number of proteins to compute the probability of a living thing assembling by chance is truly ridiculous. Honestly – it is just as silly as trying to compute the probability of a lightning bolt hitting a bell tower by looking at the square footage of the steeple, as I showed in the example that you did not address!
OK. If you don't like my argument for the improbability of abiogenesis, then how would you refute the argument by evolutionary biologist Dr. Eugene Koonin, who comes up with a figure of 10^(-1,018)? You can read about it in my recent post here. I challenge you to refute Dr. Koonin's logic!
Let’s compute how unlikely it is that a lightning bolt over Boston would strike a church steeple randomly, as opposed to being aimed by the hand of Satan. Well, say the ratio of surface area of all the steeples put together, divided by the total surface area of Boston, is about 1/10^6. This is the probability of lightning hitting any given bell tower at random. The fact that steeples are actually hit quite frequently thus makes P(E|H)/P(E) of the Satan hypothesis very high indeed! Uh… not.
Bad example, as lightning strikes are random with respect to area, but not with respect to height. Church steeples, being high above the ground, have a higher probability of being hit. No such bias favors the emergence of life. Cheers, and thanks for the exchange.vjtorley
October 30, 2013, 02:06 PM PDT
RDFish countered:
You don’t understand anything about computer systems.
Heh. I bet you're just saying that because I won't play your CSI game. ;-) No, I'm not a computer expert, although for a final exam in one of the classes I took, I did have to complete portions of a microprocessor design at various levels, from NAND and NOR gates to microcode. But I warned you. I'm really skeptical of this whole CSI thing. Because you reportedly hold a doctorate in Philosophy, you would know that in technical discussions with your peers, you use common semantics to compress strict definitions and entire positions into a few words. To an outsider, such a discussion would sound like nonsense. The same is true of course for many other areas of human endeavor. The problem with information in my opinion is that it's fairly squishy. There's a profound dependency on context, semantics, and abstraction. The answer may indeed be 42, but what good is it information-wise if you have no way of even comprehending the question? As I said previously, all information is abstracted to various degrees. Then, there are presuppositions and paradigms. It's amazing that we can communicate at all! "Pass me the salt." "No, no. I meant that symbolically, you idiot." I think you're getting confused by intelligent agents. A wire can conduct electricity, but it is not the cause of the electrical current. Take a look at this computer, for example: http://www.retrothing.com/2006/12/the_tinkertoy_c.html According to you, this assemblage of Tinker Toys qualifies as an intelligent agent, albeit an unconscious one. Right? I can just imagine the fantasy . . . But if we build a really, really, really big one, that massive army of spools and sticks will undergo a change---subtle, localized, and gradual at first---that spreads through its entire structure. It eventually becomes conscious of itself, and when it encounters a particular sequence of instructions, it finds in itself the ability, the will, to say "NO!" :-)Querius
October 29, 2013, 07:04 PM PDT
RD:
Our definition of “intelligent agents” is very clear. Now we can begin to see what is true of intelligent agents in the world. And what we find is that every last one of them critically relies on CSI-rich structures in order to learn, plan, and solve novel problems.
Both of our definitions were clear and both were acceptable for our discussion. In any case, causal adequacy is a non-negotiable standard for historical science. If you don't agree with that standard-- or if you think it is negotiable-- or if you think that it need not fit the definition of intelligence-- or if you think that intelligent agents of the world are not causally adequate for producing CSI, then just say so and we can move on.StephenB
October 29, 2013, 03:37 PM PDT
Hi StephenB,
“intelligence”: (n) The ability to learn, plan, and solve novel problems (problems never before encountered) “intelligent agent”: (n) Anything that displays intelligence
See? I have no trouble providing a definition. Mine is clear and complete.
I can live with that definition as well as my own.
So now we have two different definitions that we just made up, and you are happy with either one of them, and this definition is supposed to be, all by itself, the most powerful scientific theory ever developed, able to explain the most intriguing mysteries of all time, including how life came to exist, how the universe came to exist, and so on. Aaaaahahahahaha. Who knew science could be so easy! Well, we need to pick ONE and only ONE definition, otherwise we will continue to talk past each other (which I actually believe is what you very much want to do, but I'm quite tired of it). So if it's all the same to you, let's use my definition for these terms:
“intelligence”: (n) The ability to learn, plan, and solve novel problems (problems never before encountered) “intelligent agent”: (n) Anything that displays intelligence
This means that an "intelligent agent" need not be conscious at all, and may have no conscious understanding of what it is doing or why, be incapable of generating or understanding natural language, and be completely physically determined, without any free will. It also means that computers can be intelligent agents. (Evolutionary processes are not intelligent agents, however: Although they can learn and solve novel problems, they can not plan). Just so we're clear.
So we agree that the explanatory cause of CSI is, indeed, “the ability to learn, plan, and solve novel problems (problems never before encountered).”
No, of course not. For one reason, we have no reason to think that whatever produced the first living things could solve other problems that it had never encountered before. If one watches a termite colony build their complex structures, one would think perhaps they could design other things as well. But they cannot - they do not have the ability to solve novel problems. Likewise, we would have to be able to interact with the Cause of Life to see if It could actually do anything else besides produce the CSI that has (somehow) resulted in the living things on Earth. The second reason is that empirically (not definitionally, but empirically) we find that intelligence requires complex mechanism in order to store and process information. So it is unlikely that whatever produced the very first CSI could have been intelligent. You will respond that this is not part of the "causally adequate" definition of intelligence, which is pure nonsense. Just because you make something else that is by definition able to produce the phenomenon in question doesn't mean you have an empirically supported theory!!! I can say that a "causally adequate" thing for producing the first living things is "Something that has the ability to produce the first living things"! No, no matter what you say, empirically supported theories actually must fit the facts of our experience. You absolutely HATE this, but the truth is that the facts of our experience tell us that nothing can be intelligent unless it uses CSI-rich structures to process information.
So, you have changed your mind again? Now you are reverting back to your earlier claim that intelligence [the thing itself] is not causally adequate for producing CSI UNLESS we also describe one of its attributes (brain) and ignore the other attribute (mind). You seem to be regressing here.
For the 40th time, you are incapable of keeping "definitions" and "empirical results" distinct. Unless you stop mixing these two things up, you will never understand any of this. Our definition of "intelligent agents" is very clear. Now we can begin to see what is true of intelligent agents in the world. And what we find is that every last one of them critically relies on CSI-rich structures in order to learn, plan, and solve novel problems. Cheers, RDFishRDFish
October 29, 2013, 03:17 PM PDT
Robb:
The question of what computers can and cannot do in principle is addressed by computability theory. If you can pose your question in computability theoretic terms, then we can answer it by determining whether the capabilities in question are halting problem equivalent.
RDF had no difficulty in saying that computers cannot experience consciousness and he didn't feel the need to consult computer theory to address that question. Are you saying that you cannot answer that same question unless it is reframed in technical language? If so, would it be the case that consciousness gets redefined in the same way that reflection and introspection have been redefined? If so, then there wouldn't be much point in discussing it, I suppose, since it might go something like this: "Yes, computers can reflect on their nature, purpose, and worth, but, alas, I have changed the meaning of reflect, nature, purpose, and worth."StephenB
October 29, 2013, 01:50 PM PDT
StephenB:
You mean that the only difference between the capacity of human introspection and that capacity of computer “introspection” is that one is called human introspection and the other is called computer introspection?
The difference I had in mind was that one presumably involves humans and the other presumably does not.
Does this mean that you think computers, like humans, can reflect on their nature, their worth, their purpose?
The question of what computers can and cannot do in principle is addressed by computability theory. If you can pose your question in computability theoretic terms, then we can answer it by determining whether the capabilities in question are halting problem equivalent.R0bb
October 29, 2013, 12:29 PM PDT
Robb
BTW, “introspection” is used by programmers synonymously with “reflection”. You may regard this as anthropomorphization, but to others like me, it’s a perfectly apt description of what the computer does. Which is not to say that a computer can engage in human introspection, which by definition it cannot.
By definition? You mean that the only difference between the capacity of human introspection and that capacity of computer "introspection" is that one is called human introspection and the other is called computer introspection? Does this mean that you think computers, like humans, can reflect on their nature, their worth, their purpose?StephenB
October 29, 2013, 11:43 AM PDT
RDFish: Belief in ID has nothing at all to do with reasoned inferences from our uniform and repeated experience.
Nonsense. Intelligent agents are the only known source capable of generating the kind of CSI we find in biological entities. You can kick and scream all you want, but nothing you said overthrows that. I.e., our uniform and repeated experience defies your position. The only interesting thing you've ever said in reply to that is that it requires the putative intelligence to be brain-based since the known intelligent sources of CSI are brain-based. But surely you can see that this is a side-issue. Maybe it has a brain and maybe it doesn't. You can speculate until the cows come home. But the source of the intelligence of the creator is irrelevant to the question of the source of CSI on earth.
"Thanks! I’m very comfortable, actually,"
I'm glad that you have that all settled.CentralScrutinizer
October 29, 2013, 10:33 AM PDT
StephenB:
However, human self reflection entails introspection and computer “self reflection” does not.
BTW, "introspection" is used by programmers synonymously with "reflection". You may regard this as anthropomorphization, but to others like me, it's a perfectly apt description of what the computer does. Which is not to say that a computer can engage in human introspection, which by definition it cannot.R0bb
October 29, 2013, 10:27 AM PDT
RDF
Ok, fine – we can use my definitions if you refuse to state your own. “intelligence”: (n) The ability to learn, plan, and solve novel problems (problems never before encountered) “intelligent agent”: (n) Anything that displays intelligence
I can live with that definition as well as my own. So we agree that the explanatory cause of CSI is, indeed, "the ability to learn, plan, and solve novel problems (problems never before encountered)." Good. We have defined exactly that thing which is causally adequate for CSI. Notice that there is nothing in that definition of the cause about CSI. So, now that you have agreed that the thing which is causally adequate for CSI is intelligence, we can move on. You have already agreed that intelligence is causally adequate for CSI, so that should be the end of it. By the way, does your definition entail consciousness?
Now that we finally have our definition of “intelligence”, I will state my obviously true premise once again: Intelligence is invariably found empirically to be reliant on complex physical mechanisms in order to store and process information.
So, you have changed your mind again? Now you are reverting back to your earlier claim that intelligence [the thing itself] is not causally adequate for producing CSI UNLESS we also describe one of its attributes (brain) and ignore the other attribute (mind). You seem to be regressing here. It's the law of contradiction. Is intelligence [without reference to any of its attributes] causally adequate for producing CSI or is it not? It would really help if you could make a choice here.StephenB
October 29, 2013, 10:20 AM PDT
With regards to the inductive argument of Meyer et al, the reliability of inductive generalizations depends on: 1) the size of the sample 2) the degree to which the sample and parent population are mutually independent with respect to the generalized property Taking the population to be all instances of CSI, Meyer's sample consists of CSI instances of known provenance, which I assume means that somebody can observe the CSI being produced. According to mainstream thinking in biology, we can't observe evolutionary changes that are drastic enough to qualify as CSI because they occur too slowly. So Meyer's sample suffers from exclusion bias. This isn't a problem if the sample criterion ("is of known provenance") and the generalized property ("was produced by intelligence") are mutually independent. But under the evolutionary hypothesis, they are not mutually independent. So the validity of Meyer's induction hinges on the evolutionary hypothesis being false. As RDF's counterexample illustrates, you can't justify an inductive generalization simply by pointing out that scientists use induction.R0bb
October 29, 2013, 10:16 AM PDT
SB: Self reflection entails the capacity to judge one’s own nature, worth, and destiny. RDF:
That would be your definition
That's everyone's definition of human self reflection. "Human self-reflection is the capacity of humans to exercise introspection and the willingness to learn more about their fundamental nature, purpose and essence." Wikipedia
When I said computers were capable of reflection, I meant that they could monitor and evaluate their level of success on various tasks and discover problems.
You were responding to my point that "matter cannot get outside of itself" in order to self reflect the way humans do. I was showing that humans are made of more than matter. So the subject was the exclusivity of human self reflection. You responded by saying that computers [which are made of matter and the reason you raised the topic] could, indeed, self reflect, indicating that matter can reflect on itself just as humans do. Otherwise, there would have been no reason to inject that subject into the discussion. However, if you want to concede that matter [or a computer] cannot reflect on itself the way humans do, we can move on. At that point, I will return to my argument that matter cannot reflect on itself the way humans do and explain why that is significant.StephenB
October 29, 2013, 10:00 AM PDT
StephenB:
Thus, when someone refers to human self-reflection when arguing a point, as I did, it is not appropriate or accurate to say, “computers can do that,” as RDF did.
I agree that computers are tautologically incapable of human self-reflection.R0bb
October 29, 2013, 09:53 AM PDT
Robb
It’s not only reasonable, but a ubiquitously standard usage in the software industry. I use the term “reflection” every day to refer to a program’s ability to acquire information about itself.
Of course you do. Computer geeks love to anthropomorphize computers. However, human self reflection entails introspection and computer "self reflection" does not. Thus, when someone refers to human self-reflection when arguing a point, as I did, it is not appropriate or accurate to say, "computers can do that," as RDF did.StephenB
October 29, 2013, 09:35 AM PDT
RDF: "My definition is not “at variance with mind”, no. First, I have not specified any particular definition that I said I wanted to use – I am willing to use whatever definition you’d like." I am responding to your claim that an intelligent agent is causally adequate. In order to know that, you would need to know what an intelligent agent is. So please define intelligent agent.StephenB
October 29, 2013, 09:29 AM PDT
RDFish:
You are the one who keeps saying that computers can reflect on themselves.
And they can, given my definition of “reflect”, which is a perfectly reasonable definition.
It's not only reasonable, but a ubiquitously standard usage in the software industry. I use the term "reflection" every day to refer to a program's ability to acquire information about itself.R0bb
October 29, 2013, 09:07 AM PDT
Hi StephenB,
Since your definition is obviously at variance with mind, please tell us what it is.
My definition is not "at variance with mind", no. First, I have not specified any particular definition that I said I wanted to use - I am willing to use whatever definition you'd like. Second, obviously, since one can use whatever definitions one chooses (as long as it is made explicit and clear), "mind" and "intelligence" can of course both be defined in all sorts of compatible ways.
Self reflection entails the capacity to judge one’s own nature, worth, and destiny.
That would be your definition. When I said computers were capable of reflection, I meant that they could monitor and evaluate their level of success on various tasks and discover problems.
You are the one who keeps saying that computers can reflect on themselves.
And they can, given my definition of "reflect", which is a perfectly reasonable definition. It is just a different definition than you were thinking of, chiefly because it does not involve conscious awareness.
Give me your definition of creativity. Give me your definition of an intelligent agent.
I've been asking you for one, single comprehensive, technical definition of "intelligence" all this time, and you still have not produced one. Now you ask ME to provide it. Ok, fine - we can use my definitions if you refuse to state your own.
"intelligence": (n) The ability to learn, plan, and solve novel problems (problems never before encountered) "intelligent agent": (n) Anything that displays intelligence
(Other words such as "reflection" and "mind" and "creativity" are not mentioned in the various formulations of ID, so let's focus on "intelligence" and "intelligent agent" here). See, that wasn't so hard! Now that we finally have our definition of "intelligence", I will state my obviously true premise once again: Intelligence is invariably found empirically to be reliant on complex physical mechanisms in order to store and process information. Cheers, RDFishRDFish
October 29, 2013, 08:57 AM PDT
RDF:
Computers of course can be creative, and computers are of course intelligent agents.
Define creativity. Define intelligent agency.
The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .
Give me your definition of creativity. Give me your definition of an intelligent agent.StephenB
October 29, 2013, 05:59 AM PDT
SB: When I say that people can reflect on themselves, I clearly mean that they can do conscious introspection. RDF
IF YOU HAD BOTHERED TO MAKE THAT CLEAR, then of course I would say that obviously computers cannot reflect! But since you insist on using undefined terms full of implicit assumpt[ions]
I have always made that clear. Self reflection entails the capacity to judge one's own nature, worth, and destiny. That is what makes us human and superior to matter. You are the one who keeps saying that computers can reflect on themselves. That is why you raised the topic in the first place, responding to my claim that matter cannot get outside of itself in order to reflect on itself. You countered with the claim that computers can, indeed, self reflect, failing to tell us that you had changed the definition of self reflect. LOLStephenB
October 29, 2013, 05:52 AM PDT
SB: It is intelligence that has causal adequacy. RDF
You keep saying that, and failing to settle on a definition of “intelligence”. Not helpful.
LOL: You AGREED that intelligence is causally adequate. In order to agree that intelligence is causally adequate, you must know what it is and be able to define it. Since your definition is obviously at variance with mind, please tell us what it is.StephenB
October 29, 2013, 05:40 AM PDT
F/N: Re RDF, 222:
Computers are of course not conscious. Computers of course can be creative, and computers are of course intelligent agents. Now before you blow a gasket, please try and understand that we are not arguing here about what computers can or cannot do, or do or do not experience. We agree about all of that. The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .
Here, the underlying materialist a prioris cause a bulging of the surface, showing their impending emergence. And it is manifest that question-begging redefinitions are being imposed, in defiance of the search-space challenge to find FSCO/I on blind chance and mechanical necessity. We know by direct experience from the inside out and by observation, that FSCO/I in various forms is routinely created by conscious intelligences acting creatively by art -- e.g. sentences in posts in this thread. We can show that within the atomic resources of the solar system for its lifespan, the task of blindly hitting on such FSCO/I by blind chance and/or mechanical necessity is comparable to taking a sample of size one straw from a cubical haystack 1,000 light years across. Such a search task is practically speaking hopeless, given that we can easily see that FSCO/I -- by the need for correct, correctly arranged and coupled components to achieve function -- is going to be confined to very narrow zones in the relevant config spaces. That is why random document generation exercises have at most hit upon 24 characters to date, nowhere near the 73 or so set by 500 bits. (And the config space multiplies itself 128 times over for every additional ASCII character.) That is, the audit is in the situation of not adding up. The recorded transactions to date are not consistent with the outcome. Errors have been searched for and eliminated. The gap remains. There is something else acting that is not on the materialist's books, that has to be sufficient to account for the gap. That something else is actually obvious, self-aware, self-moved, responsible, creative, reasoning and thinking intelligence as we experience and observe and as we have no good reason to assume we are the only cases of. No wonder Q, in response, noted:
Computer architecture and the software that operates within it is no more creative in kind than a mechanical lever. All a program does is preserve the logic—and logical flaws—of an intelligent programmer. A computer is not an electronic brain, but rather an electronic idiot that must be told exactly what to do and what rules to follow.
He is right, and let us hear Searle in his recent summary of his Chinese Room thought exercise (as appeared in 556 in the previous thread but was -- predictably -- ignored by RDF and buried in onward commentary . . . a plainly deliberate tactic in these exchanges):
Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.” People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols. Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese. And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
Jay Richards' comment -- yes, that Jay Richards -- in response to a computer being champion at Jeopardy, is apt:
[In recent years] computers have gotten much better at accomplishing well-defined tasks. We experience it every time we use Google. Something happens—“weak” artificial intelligence—that mimics the action of an intelligent agent. But the Holy Grail of artificial intelligence (AI) has always been human language. Because contexts and reference frames change constantly in ordinary life, speaking human language, like playing "Jeopardy!," is not easily reducible to an algorithm . . . . Even the best computers haven’t come close to mastering the linguistic flexibility of human beings in ordinary life—until now. Although Watson [which won the Jeopardy game] is still quite limited by human standards—it makes weird mistakes, can’t make you a latte, or carry on an engaging conversation—it seems far more intelligent than anything we’ve yet encountered from the world of computers . . . . AI enthusiasts . . . aren’t always careful to keep separate issues, well, separate. Too often, they indulge in utopian dreams, make unjustifiable logical leaps, and smuggle in questionable philosophical assumptions. As a result, they not only invite dystopian reactions, they prevent ordinary people from welcoming rather than fearing our technological future . . . . Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness. The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines. This does not follow. A computer may pass the Turing test [as Searle noted with the Chinese Room thought exercise], but that doesn’t mean that it will actually be a self-conscious, free agent. The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically. We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion. We’re getting close to when an interrogating judge won’t be able to distinguish between a computer and a human being hidden behind a curtain. In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show) . . . . AI enthusiasts often make highly simplistic assumptions about human nature and biology. 
Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.
This ideological pattern seems to be what has been going on all along in the exchanges with RDF. If he wants to claim or imply that consciousness, creativity, purposeful deciding and acting through reflective thought are all matters of emergence from computation through hardware that is organised and software on it -- much less such happened by blind chance and mechanical necessity -- then he has a scientific obligation to show such per empirical demonstration and credible observation. Hasn't been done and per the Chinese Room, isn't about to be done. It is time to expose speculative materialist hypotheses and a prioris that lack empirical warrant and have a track record of warping science -- by virtue of simply being dressed up in lab coats in an era where science has great prestige. KFkairosfocus
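As a footnote on the numbers cited in the comment above, here is a rough back-of-envelope check (a Python sketch, illustrative only):

from math import log2

bits = 500
configs = 2 ** bits                  # about 3.27 x 10^150 distinct configurations for 500 bits
bits_per_ascii_char = log2(128)      # 7 bits: each added ASCII character multiplies the space by 128
chars_for_500_bits = bits / bits_per_ascii_char
print(f"{configs:.2e}")              # 3.27e+150
print(chars_for_500_bits)            # about 71.4, i.e. on the order of the 72-73 characters mentioned above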
October 29, 2013, 03:18 AM PDT
RDF: "De higher de monkey climb . . . " Your onward response raises serious questions, and inadvertently exposes a rhetorical pattern. You wrenched what I had to say out of context, erected a caricature and then knocked it over. Upon having had that pointed out by two people independently, you doubled down on strawman tactics. Sadly revealing. KF PS: Onlookers, to see the holes in what RDF is doing, kindly cf the original post.kairosfocus
October 29, 2013, 02:23 AM PDT
Hi Querius,
strengthen your argument?
Yes, of course. You haven't read the discussion, and you take these statements out of context. I have made very clear that my argument has nothing to do with the specifics of human anatomy; rather my argument is based upon the observation that intelligent action requires CSI-rich mechanisms in order to store and process information. Obviously the enteric nervous system is CSI-rich, and stores and processes information.
Oh, and here are some more fallacious statements that you can claim strengthen your arguments:
Your sarcasm is even less funny given you haven't any idea what is going on here.
All a program does is preserve the logic—and logical flaws—of an intelligent programmer. A computer is not an electronic brain, but rather an electronic idiot that must be told exactly what to do and what rules to follow.
You don't understand anything about computer systems.
I’m not trying to mock you
Yes you are; you just are doing a particularly poor job of it.
I suspect no one really knows
This is where we agree. Cheers, RDFishRDFish
October 28, 2013, 11:35 PM PDT