
A challenge to strong Artificial Intelligence enthusiasts . . .

For some little while now, RDF/AIGuy has been advocating a strong AI claim here at UD. In an exchange in the ongoing "Is ID fatally flawed?" thread, he has said:

222: Computers are of course not conscious. Computers of course can be creative, and computers are of course intelligent agents. Now before you blow a gasket, please try and understand that we are not arguing here about what computers can or cannot do, or do or do not experience. We agree about all of that. The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .

This seems a little over the top, and I commented; but before we look at that, let us get a few basic definitions out of the way. First, the predictably enthusiastic Wikipedia:

Artificial intelligence (AI) is technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as “the study and design of intelligent agents”,[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as “the science and engineering of making intelligent machines”.[4]

It does offer a few cautions:

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or “strong AI“) is still among the field’s long term goals.[7] . . . . The field was founded on the claim that a central ability of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.[11]

The Stanford Encyclopedia of Philosophy, here, is predictably more cautious, in revealing ways:

Artificial Intelligence (which I’ll refer to hereafter by its nickname, “AI”) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent.[1] Most research in AI is devoted to fairly narrow applications, such as planning or speech-to-speech translation in limited, well defined task domains. But substantial interest remains in the long-range goal of building generally intelligent, autonomous agents.[2]

The IEP gives a little more backdrop, with explicit cautions on some of the philosophical problems that lurk:

[T]he scientific discipline and engineering enterprise of AI has been characterized as “the attempt to discover and implement the computational means” to make machines “behave in ways that would be called intelligent if a human were so behaving” (John McCarthy), or to make them do things that “would require intelligence if done by men” (Marvin Minsky). These standard formulations duck the question of whether deeds which indicate intelligence when done by humans truly indicate it when done by machines: that’s the philosophical question. So-called weak AI grants the fact (or prospect) of intelligent-acting machines; strong AI says these actions can be real intelligence. Strong AI says some artificial computation is thought. Computationalism says that all thought is computation. Though many strong AI advocates are computationalists, these are logically independent claims: some artificial computation being thought is consistent with some thought not being computation, contra computationalism. All thought being computation is consistent with some computation (and perhaps all artificial computation) not being thought.

{Adding . . . } While we are at it, let us remind ourselves of the Smith Model for an embodied agent with a two-tier controller, understanding that the supervisory controller imposes purposes, etc., on the lower one, and noting that there is no implicit or explicit commitment as to just what that supervisory controller can be or is in a given case:

[Figure: the Smith model of an embodied agent with a two-tier cybernetic controller]
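
For readers who want the two-tier idea in concrete form, here is a rough, hypothetical Python sketch (my own illustration; the class names, the simple proportional control law and the goal schedule are assumptions, not part of the Smith model): a lower-tier loop controller acts to track a goal, while a supervisory tier imposes that goal, i.e. the purpose, on it from above.

```python
# Hypothetical two-tier controller sketch, in the spirit of the Smith model.
# Nothing here commits to what the supervisory tier "is" in a real agent.

class LoopController:
    """Lower tier: acts to reduce the error between sensed state and the imposed goal."""
    def __init__(self, gain: float = 0.5):
        self.gain = gain
        self.goal = 0.0

    def step(self, sensed: float) -> float:
        # Simple proportional correction toward the currently imposed goal.
        return self.gain * (self.goal - sensed)


class Supervisor:
    """Upper tier: imposes purposes (goals) on the lower tier, here from a fixed schedule."""
    def __init__(self, goals):
        self.goals = list(goals)

    def impose_goal(self, controller: LoopController, t: int) -> None:
        controller.goal = self.goals[min(t, len(self.goals) - 1)]


# Toy run: the supervisor changes the purpose; the loop controller tracks it.
plant_state = 0.0
lower = LoopController()
upper = Supervisor(goals=[1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
for t in range(6):
    upper.impose_goal(lower, t)
    plant_state += lower.step(plant_state)
    print(f"t={t}  goal={lower.goal}  state={plant_state:.2f}")
```

The only point of the sketch is the layering: the lower tier is a plain cybernetic feedback loop, and the purposes it serves are handed down from the tier above, with no commitment here as to what that upper tier is in any real agent.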

 

It is worth noting as well how the so-called hard problem of consciousness is often conceived, in an implicitly materialistic frame of thought:

The term . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. It is contrasted with the “easy problems” of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they “persist even when the performance of all the relevant functions is explained.”

Duly warned, let us see what is going on behind RDF’s enthusiastic and confident announcement. For that, let us look at my response to him, as I think this issue can and should spark an onward discussion:

__________

>> RDF, 222:

Computers are of course not conscious. Computers of course can be creative, and computers are of course intelligent agents. Now before you blow a gasket, please try and understand that we are not arguing here about what computers can or cannot do, or do or do not experience. We agree about all of that. The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .

Here, the underlying materialist a prioris cause a bulging of the surface, showing their impending emergence. And it is manifest that question-begging redefinitions are being imposed, in defiance of the search-space challenge to find FSCO/I on blind chance and mechanical necessity.

We know, by direct experience from the inside out and by observation, that FSCO/I in various forms is routinely created by conscious intelligences acting creatively by art — e.g. sentences in posts in this thread. We can show that within the atomic resources of the solar system for its lifespan, the task of blindly hitting on such FSCO/I by blind chance and/or mechanical necessity is comparable to taking a sample of size one straw from a cubical haystack 1,000 light years across.

Such a search task is, practically speaking, hopeless, given that we can easily see that FSCO/I — by the need for correct, correctly arranged and coupled components to achieve function — is going to be confined to very narrow zones in the relevant config spaces. That is why random document generation exercises have at most hit upon 24 characters to date, nowhere near the 73 or so characters implied by 500 bits. (And the config space multiplies itself 128 times over for every additional ASCII character.)
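
As a minimal arithmetic sketch (my own illustration, not from the original post) of the figures just cited, assuming the usual 7 bits, i.e. 128 options, per ASCII character:

```python
import math

BITS_PER_ASCII_CHAR = math.log2(128)   # 7 bits per character, 128 options each
THRESHOLD_BITS = 500                   # the 500-bit threshold used above

# How many ASCII characters does it take to reach 500 bits of configuration space?
print(f"{THRESHOLD_BITS / BITS_PER_ASCII_CHAR:.1f} characters reach 500 bits")  # about 71.4

# Each added character multiplies the configuration space by 128:
for n in (24, 73):
    digits = 7 * n * math.log10(2)
    print(f"{n} chars -> 128**{n} = 2**{7 * n}, roughly 10**{digits:.0f} configurations")
```

On this count, 500 bits corresponds to roughly 71–73 ASCII characters, consistent with the figure cited above, and each additional character multiplies the space 128-fold.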

That is, the audit does not add up. The recorded transactions to date are not consistent with the outcome. Errors have been searched for and eliminated.

The gap remains.

There is something else acting that is not on the materialist’s books, something that has to be sufficient to account for the gap.

That something else is actually obvious: self-aware, self-moved, responsible, creative, reasoning and thinking intelligence, as we experience and observe it, and as we have no good reason to assume we are the only instances of.

No wonder Q, in response, noted:

Computer architecture and the software that operates within it is no more creative in kind than a mechanical lever. All a program does is preserve the logic—and logical flaws—of an intelligent programmer. A computer is not an electronic brain, but rather an electronic idiot that must be told exactly what to do and what rules to follow.

He is right; and let us hear Searle in his recent summary of his Chinese Room thought exercise (as appeared at 556 in the previous thread, but which was predictably ignored by RDF and buried in onward commentary . . . a plainly deliberate tactic in these exchanges):

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.

Jay Richards’ comment — yes, that Jay Richards — in response to a computer becoming champion at Jeopardy, is apt:

[In recent years] computers have gotten much better at accomplishing well-defined tasks. We experience it every time we use Google. Something happens—“weak” artificial intelligence—that mimics the action of an intelligent agent. But the Holy Grail of artificial intelligence (AI) has always been human language. Because contexts and reference frames change constantly in ordinary life, speaking human language, like playing “Jeopardy!,” is not easily reducible to an algorithm . . . .

Even the best computers haven’t come close to mastering the linguistic flexibility of human beings in ordinary life—until now. Although Watson [which won the Jeopardy game] is still quite limited by human standards—it makes weird mistakes, can’t make you a latte, or carry on an engaging conversation—it seems far more intelligent than anything we’ve yet encountered from the world of computers . . . .

AI enthusiasts . . . aren’t always careful to keep separate issues, well, separate. Too often, they indulge in utopian dreams, make unjustifiable logical leaps, and smuggle in questionable philosophical assumptions. As a result, they not only invite dystopian reactions, they prevent ordinary people from welcoming rather than fearing our technological future . . . .

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test [as Searle noted with the Chinese Room thought exercise], but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

We’re getting close to when an interrogating judge won’t be able to distinguish between a computer and a human being hidden behind a curtain.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show) . . . .

AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

This ideological pattern seems to be what has been going on all along in the exchanges with RDF.

If he wants to claim or imply that consciousness, creativity, and purposeful deciding and acting through reflective thought are all matters of emergence from computation on organised hardware and the software running on it — much less that such emergence happened by blind chance and mechanical necessity — then he has a scientific obligation to show this by empirical demonstration and credible observation.

That hasn’t been done and, per the Chinese Room, isn’t about to be done.

It is time to expose speculative materialist hypotheses and a prioris that lack empirical warrant and have a track record of warping science — by virtue of simply being dressed up in lab coats in an era where science has great prestige.>>

__________

So, let us reflect: has RDF scored a knockout, or is he being a tad over-enthusiastic about a field of research? END

Comments
Kairos, thanks for an interesting article. I saved a couple of great links from there for later reading. Computers are built of fast electronic logic gates, but I cannot see how a group of fast logic gates can become self-aware. People are naturally impressed by the speed and processing power of electronic gates arranged into what we call the computer. We should keep in mind that, in principle, logic gates are just controlled switches. Almost anything can be arranged into logic gates; I was thinking about a few simple things…

1. The material domain is not important. We can arrange matter into logic gates using:
a. electronic components (normally done, cheap and fast)
b. vacuum tubes (expensive, slow and power hungry)
c. electromechanical relays (super slow and very power demanding)
d. air and fluid valves (messy and slow)
e. gears and pulleys (inconvenient, big and extremely slow)
f. rivers and dams (for really ambitious rich people who have time to wait)
g. planets and moons (expensive, we should leave it for God)
h. chemical molecular arrangements (inside the cell)

2. Components that build gates must meet some criteria:
a. discrete (have a boundary)
b. connectable (by mechanical, chemical, magnetic, etc. means)
c. stable (remain discrete and connectable at least for a period)

3. The size of the components is not important. For example, an electric motor can be barn-house size or the size of one molecule.

4. The energy must be appropriate for the material domain. Energy is supplied by a natural force (e.g. gravity) or by a specialized system for that purpose (e.g. a voltaic cell). The components will, by virtue of their special arrangement, be able to accept and control the energy flow.

Eugen
October 31, 2013 at 07:27 AM PDT

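Eugen’s point, that logic gates are substrate-independent controlled switches and that the computing is done by the arrangement rather than the material, can be illustrated with a small toy sketch (my own illustration, in Python rather than in relays or fluid valves): once a single NAND primitive is available, every other gate, and from them any digital circuit, is just an arrangement of it.

```python
# Toy illustration: all gates, and hence any digital circuit, from one NAND primitive.
# Whatever realizes nand() physically, the logic lives in the arrangement.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool):
    """One-bit half adder built purely from NAND arrangements: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

for a in (False, True):
    for b in (False, True):
        print(a, b, half_adder(a, b))
```

Whatever physically realizes nand() (transistors, relays, gears or valves), the half adder’s behaviour is fixed by the arrangement alone, which is exactly the substrate-independence described in the comment above.
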
KF: Ah! I am overwhelmed by the many objections and arguments of our enemies... :) Seriously, you are perfectly right. Any conscious experience presupposes the existence of the subject who perceives the various modifications (so-called representations) in his consciousness. That is a fundamental intuition (I exist, because I am an I), and it is deeper than so-called "self-consciousness" (I observe my-self who perceives). Basic consciousness indeed represents a more essential and universal level (I perceive. Period.) In that sense, consciousness is "the mother of all reality", the substrate of any cognition and representation of the world. As I have often argued, many of the fundamental intuitions according to which we shape our whole map of reality (especially meaning, purpose and feeling) have no possible sense outside of conscious representations. Indeed, they are pure, fundamental modalities of consciousness, and cannot even be defined "objectively" (that is, outside of any reference to conscious experiences and to a conscious I). Therefore, what you say is perfectly true: "But even if we were say brains in vats imagining ourselves to be people in a world, we would correctly understand ourselves to be conscious."

gpuccio
October 30, 2013 at 02:01 PM PDT

GP: Always great to hear from you. I notice too that SB has weighed in. We are in agreement on basic issues; let us see whether the usual objectors are willing to stand by their guns. I do note that it seems to me that consciousness is the first fact, through which we access and reason about all other facts. I would go so far as to argue that while we may err in various ways about what we are, etc., we cannot be in error that we are conscious. A rock has no dreams and cannot be deluded that it is conscious. But even if we were, say, brains in vats imagining ourselves to be people in a world, we would correctly understand ourselves to be conscious. Perhaps we need to discuss our thoughts together, to refine them. KF

kairosfocus
October 30, 2013 at 02:49 AM PDT

KF: Wonderful post! Complete, clear and absolutely true. I wholeheartedly agree with you. Just a few personal statements:

a) The hard problem of consciousness cannot be solved in materialistic terms without accepting consciousness as an empirical reality, linked to a material interface (at least in humans) but not explained by it.

b) The "easy" problem of consciousness cannot be completely and "easily" solved either. Some functional activities of conscious beings cannot be simulated algorithmically. For example, generating new original dFSCI can be done only by conscious beings who have the experience of meaning and purpose, and never by an algorithm.

c) Computation is never thought. Thought is a conscious experience, by definition. Obviously, we can think of computations and be aware of them.

I remain available to detail and defend each of the above statements.

gpuccio
October 30, 2013 at 01:42 AM PDT

The 60:1 hits-to-comments ratio so far is interesting. KF

kairosfocus
October 29, 2013 at 11:13 AM PDT

RDF:
Computers are of course not conscious. Computers of course can be creative, and computers are of course intelligent agents. Now before you blow a gasket, please try and understand that we are not arguing here about what computers can or cannot do, or do or do not experience. We agree about all of that. The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .
Nonsense. We are, indeed, arguing about what computers can and cannot do. First, you claimed that computers can reflect on themselves, which is ridiculous unless you change the definition of self-reflection from introspection about one's nature, purpose, and worth to the running of a program of some kind, which, of course, has no such capacity. Second, you argue that a computer can be creative and be an intelligent agent, which means that you think computers can do what humans can do in that context. So we are definitely arguing about what computers can and cannot do.
The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .
What is your definition of creativity? What is your definition of an intelligent agent?

StephenB
October 29, 2013 at 06:23 AM PDT

F/N 2: NWE tosses in a few live grenades in its article on mind:
Mind is a concept developed by self-conscious humans trying to understand what is the self that is conscious and how does that self relate to its perceived world . . . Aspects of mind are also attributed to complex animals, which are commonly considered to be conscious. Studies in recent decades suggest strongly that the great apes have a level of self-consciousness as well. Philosophers have long sought to understand what is mind and its relationship to matter and the body . . .

Based on his world model that the perceived world is only a shadow of the real world of ideal Forms, Plato, a dualist, conceived of mind (or reason) as the facet of the tripartite soul that can know the Forms. The soul existed independent of the body, and its highest aspect, mind, was immortal. Aristotle, apparently both a monist and a dualist, insisted in The Soul that soul was unitary, that soul and body are aspects of one living thing, and that soul extends into all living things. Yet in other writings from another period of his life, Aristotle expressed the dualistic view that the knowing function of the human soul, the mind, is distinctively immaterial and eternal.

Saint Augustine adapted from the Neoplatonism of his time the dualist view of soul as being immaterial but acting through the body. He linked mind and soul closely in meaning. Some 900 years later, in an era of recovering the wisdom of Aristotle, Saint Thomas Aquinas identified the species, man, as being the composite substance of body and soul (or mind), with soul giving form to body, a monistic position somewhat similar to Aristotle's. Yet Aquinas also adopted a dualism regarding the rational soul, which he considered to be immortal. Christian views after Aquinas have diverged to cover a wide spectrum, but generally they tend to focus on soul instead of mind, with soul referring to an immaterial essence and core of human identity and to the seat of reason, will, conscience, and higher emotions.

Rene Descartes established the clear mind-body dualism that has dominated the thought of the modern West. He introduced two assertions: First, that mind and soul are the same and that henceforth he would use the term mind and dispense with the term soul; Second, that mind and body were two distinct substances, one immaterial and one material, and the two existed independent of each other except for one point of interaction in the human brain. In the East, quite different theories related to mind were discussed and developed by Adi Shankara, Siddhārtha Gautama, and other ancient Indian philosophers, as well as by Chinese scholars.

As psychology became a science starting in the late nineteenth century and blossomed into a major scientific discipline in the twentieth century, the prevailing view in the scientific community came to be variants of physicalism with the assumption that all the functions attributed to mind are in one way or another derivative from activities of the brain. Countering this mainstream view, a small group of neuroscientists has persisted in searching for evidence suggesting the possibility of a human mind existing and operating apart from the brain. In the late twentieth century as diverse technologies related to studying the mind and body have been steadily improved, evidence has emerged suggesting such radical concepts as: the mind should be associated not only with the brain but with the whole body; and the heart may be a center of consciousness complementing the brain. [New World Enc., article, Mind]
Food for thought -- as opposed to GIGO-controlled blindly mechanistic computation. KF

kairosfocus
October 29, 2013 at 05:42 AM PDT

F/N: Chomsky weighs in, in an Atlantic Monthly interview, e.g.:
[AM:] Well, we are bombarded with it [noisy data], it's one of Marr's examples, we are faced with noisy data all the time, from our retina to...

Chomsky: That's true. But what he says is: Let's ask ourselves how the biological system is picking out of that noise things that are significant. The retina is not trying to duplicate the noise that comes in. It's saying I'm going to look for this, that and the other thing. And it's the same with say, language acquisition. The newborn infant is confronted with massive noise, what William James called "a blooming, buzzing confusion," just a mess. If say, an ape or a kitten or a bird or whatever is presented with that noise, that's where it ends. However, the human infant, somehow, instantaneously and reflexively, picks out of the noise some scattered subpart which is language-related. That's the first step. Well, how is it doing that? It's not doing it by statistical analysis, because the ape can do roughly the same probabilistic analysis. It's looking for particular things.

So psycholinguists, neurolinguists, and others are trying to discover the particular parts of the computational system and of the neurophysiology that are somehow tuned to particular aspects of the environment. Well, it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language. So initially of course, any infant is tuned to any language. But say, a Japanese kid at nine months won't react to the R-L distinction anymore, that's kind of weeded out. So the system seems to sort out lots of possibilities and restrict it to just ones that are part of the language, and there's a narrow set of those.

You can make up a non-language in which the infant could never do it, and then you're looking for other things. For example, to get into a more abstract kind of language, there's substantial evidence by now that such a simple thing as linear order, what precedes what, doesn't enter into the syntactic and semantic computational systems, they're just not designed to look for linear order. So you find overwhelmingly that more abstract notions of distance are computed and not linear distance, and you can find some neurophysiological evidence for this, too. Like if artificial languages are invented and taught to people, which use linear order, like you negate a sentence by doing something to the third word. People can solve the puzzle, but apparently the standard language areas of the brain are not activated -- other areas are activated, so they're treating it as a puzzle not as a language problem. You need more work, but...
kairosfocus
October 29, 2013 at 04:53 AM PDT

