
AI, Materialist Dodgeball and a Place at the Table

Ari N. Schulman, “Why Minds Are Not Like Computers,” The New Atlantis, Number 23, Winter 2009, pp. 46-68.
Article Review

“The problem, therefore, is not merely that science is being used illegitimately to promote a materialistic worldview, but that this worldview is actively undermining scientific inquiry.”—UncommonDescent

Read the entire article here.

Unless otherwise noted, all quotations from the article, “Why Minds Are Not Like Computers,” are italicized.

Mr. Schulman walks the tightrope of analysis and criticism, describing how a materialistic worldview actively undermines scientific inquiry in the area of Artificial Intelligence (AI). Analysis (and self-criticism) should be part of all scientific endeavor; the strict materialist does no such thing; instead, he plays dodgeball.

Much of the article, especially the discussions of the brain, computers, Turing Machines, the Turing Test, and the Chinese Room Problem, was helpful in understanding the state of affairs in AI for the layman. My comments are those of such a layman, included so that you might see what a layman might take from such an article. Nevertheless, questions remain . . .

. . . as to whether AI can survive while immured in materialist thought. Can AI benefit from design-theoretic input (including the unappetizing job, if necessary, of informing AI folks that strong AI as conceived for digital computers is a dead end)? In the following, I chose to recap many of the early parts of the article. It is the latter part of the article, however, where the games begin.

I am not really interested in parsing every last detail of the article (“. . . no, no, no, in the Chinese Room Problem, the walls CAN think as long as the translator is in the room”); rather, I am interested in what place design theorists have at the “adult” table, based on articles like this, in fields that, from the point of view of many, are in disarray. I, for one, am tired of the “kid” table.

“When the mind is compared to a computer, just what is it being compared to? How does a computer work?”

Mr. Schulman begins with a clear discussion of what a computer is, i.e., a performer of algorithms. His definitions of start and end states, and of input and output, are helpful in understanding the nature of the determinacy of any computer program: “an algorithm’s output for a given input will be the same every time it is executed (even so-called “randomized” algorithms are deterministic in practice).”
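To make that last point concrete, here is a minimal sketch (mine, not the article’s) of why even a “randomized” routine is deterministic in practice: the pseudorandom generator is seeded, so the same input yields the same output on every run.

    import random

    def shuffled(items, seed=42):
        # A "randomized" routine: shuffle a list with a pseudorandom generator.
        # Because the generator is seeded with a fixed value, the "random"
        # output is identical on every run, i.e., deterministic in practice.
        rng = random.Random(seed)
        result = list(items)
        rng.shuffle(result)
        return result

    print(shuffled([1, 2, 3, 4, 5]))  # same input, same seed, same output
    print(shuffled([1, 2, 3, 4, 5]))  # ...every time it is executed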

Continuing, we learn that in any algorithm, what should be done is broken down into how it should be done. We also learn of abstractions of objects within algorithms that are based on the relevant properties of the objects in question. Mr. Schulman is adept in showing how such abstraction leads to the conclusion that computers expertly manipulate symbols (by following the how) but with no idea of what they are doing: “a computer is both extremely fast and exceedingly stupid.”

This leads to a detailed discussion of the manipulation of symbols, for example, “To do so, you must be able to represent the problem in terms that the computer can understand—but the computer only knows what numbers and memory slots are.” This is a standard, specific extension of the Turing Machine model.

And then this,
“. . . it is only partially correct to say that a computer performs arithmetic calculations. As a physical object, the computer does no such thing—no more than a ball performs physics calculations when you drop it. It is only when we consider the computer through the symbolic system of arithmetic, and the way we have encoded it in the computer, that we can say it performs arithmetic.” (my emphasis) Even at the level of arithmetic, Mr. Schulman recognizes that the computer is merely manipulating symbols – symbols that are given meaning by us.

Next, we encounter the black box problem, in which we learn that the what that is specified for completion may be fundamentally different from the how by which it is done. Of course, that is done “behind the curtain,” and different programmers can accomplish a task in many different ways. This leads to the idea of layers of abstraction, which rest on Boolean logic, which relies, at bottom, on transistors and other physical processors. Mr. Schulman writes that this nested hierarchy does not mean that any particular layer has more explanatory power than the others, only that each is an interpretation of what the computer does based on “a distinct set of symbolic representations and properties.”
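A minimal sketch of that point (my illustration, not the article’s): two routines share the same what, namely sorting a list, while the how hidden inside each black box differs.

    def sort_builtin(values):
        # One "how": delegate to Python's built-in sort.
        return sorted(values)

    def sort_insertion(values):
        # A different "how": a hand-rolled insertion sort.
        result = []
        for v in values:
            i = 0
            while i < len(result) and result[i] <= v:
                i += 1
            result.insert(i, v)
        return result

    # The "what" (input in, sorted output out) is identical; the casual
    # end-user never needs to know which implementation sits inside the box.
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert sort_builtin(data) == sort_insertion(data) == [1, 1, 2, 3, 4, 5, 6, 9]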

I would add that although the modular nature of programming creates black boxes from the point of view of the casual end-user who may just want to read some email or play Pong, those black boxes are not entirely closed off and mysterious; they are known by someone. The how is known by the programmer.

Mr. Schulman: “Since the inception of the AI project, the use of computer analogies to try to describe, understand, and replicate mental processes has led to their widespread abuse. Typically, an exponent of AI will not just use a computer metaphor to describe the mind, but will also assert that such a description is a sufficient understanding of the mind—indeed, that mental processes can be understood entirely in computational terms. One of the most pervasive abuses has been the purely functional description of mental processes. The embrace of input-output mimicry as a standard traces back to Alan Turing’s famous “imitation game,” in which a computer program engages in a text-based conversation with a human interrogator, attempting to fool the person into believing that it, too, is human. The game, now popularly known as the Turing Test, is above all a statement of epistemological limitation—an admission of the impossibility of knowing with certainty that any other being is thinking, and an acknowledgement that conversation is one of the most important ways to assess a person’s intelligence. Thus Turing said that a computer that passes the test would be regarded as thinking, not that it actually is thinking, or that passing the test constitutes thinking. In fact, Turing specified at the outset that he devised the test because the “question ‛Can machines think?’ I believe to be too meaningless to deserve discussion.””(my emphasis)

It was refreshing to see Turing’s comments included at this stage of the article. The Turing Test, and its “Kurzweilian” visions of progress, gets a lot more airplay these days, it seems, than the Universal Turing Machine and its precise, even stringent, view of computers as physical embodiments of theoretical rule-following machines. Does this distinction of how things may be regarded versus how things are have analogs in the evolution/design debate? I’d say the answer is obvious. In fact, I can practically sense that keyboards are warming up as we come to draw battle lines around who “regards, as if it is” and who “regards what is.”

“For those AI researchers interested in actually replicating the human mind, the two guiding questions have thus been (1) What organizational layer of the mind embodies its program? and (2) At what organizational layer of the brain will we find the basic functional unit necessary to run the mind-program? [AI researchers’] aims and methods can be understood as a progression of attempts to answer these two questions. But when closely examined, the history of their efforts is revealed to be a sort of regression, as the layer targeted for replication has moved lower and lower.”

. . . Kudos again to Mr. Schulman for his concise summary of the current state of affairs of strong AI; he goes on to criticize the functionalist position. Here, I can’t help but think Daniel Dennett would be in the crosshairs, but I haven’t read him enough to know . . . any comments, UD people? I found it interesting that Mr. Dennett is one of the chief critics of Searle’s Chinese Room Problem; it just seems so obvious that he would be the one. More on that later.

“Robots that mimic facial expressions are said to experience genuine emotions—and for more than half a century, researchers have commonly claimed that programs [robots mimicking facial expressions] that deliver “intelligent” results are actually thinking. . . Such statements reveal more than just questionable ethics—they indicate crucial errors in AI researchers’ understanding of both computers and minds. Suppose that the mind is in fact a computer program. . . So although behaviorists and functionalists have long sought to render irrelevant the truth of Descartes’ cogito, the canonization of the Turing Test has merely transformed I think therefore I am into I think you think therefore you are.”

I like that. . . “questionable ethics,” “crucial errors,” and “the canonization” of the Turing Test . . .

“Much artificial intelligence research has been based on the assumption that the mind has layers comparable to those of the computer. Under this assumption, the physical world, including the mind, is not merely understandable through sciences at increasing levels of complexity—physics, chemistry, biology, neurology, and psychology—but is actually organized into these levels. These assumptions underlie the notion that the mind is a “pattern” and the brain is its “substrate.””

“On the one hand, arguments against strong AI, both moral and technical, typically describe the highest levels of the mind—consciousness, emotion, and intelligence—in order to argue its non-mechanical nature. . . The implication is that the essence of human nature, and thus of the mind, is profound and unknowable; this belief underlies [Joseph] Weizenbaum’s extensive argument that the mind cannot be described in procedural or computational terms.”

Mr. Weizenbaum appears to have made it to the adult table. I am unacquainted with his work but would be interested in how it might be consonant with ID, if at all. On the other hand . . .

“. . . roboticist Rodney Brooks declares that “the body, this mass of biomolecules, is a machine that acts according to a set of specifiable rules,” and hence that “we, all of us, overanthropomorphize humans, who are after all mere machines.” The mind, then, must also be a machine, and thus must be describable in computational terms just as the brain supposedly is.”

It appears that Mr. Brooks also has made it to the adult table, and why wouldn’t he, seeing how his theory of AI computing is steeped in evolutionary thought? I find it interesting that those who play dodgeball are not forced to sit at the kid table.

Why do I say dodgeball? If we are merely machines and our brains only computers, then we are physical embodiments of Turing Machines, and if that is so, how is it that we are not bound by the Church-Turing Thesis? Answer: dodgeball. Mr. Brooks’s claim that we tend to overanthropomorphize humans is quite a rhetorical leap — not only a leap, but a dodge. Turing Machines are limited in ways that human minds are not, but Brooks can get away with the statement, “[humans] are . . . machines,” because it fits with the functionalist approach of strict materialism. Mr. Schulman then logically adds that we “must be describable in computational terms.” Pass the gravy, meat puppet.

“An instructive example of this confusing conceptual gap can be found in the heated debate surrounding one of the most influential articles in the history of computer science. In a 1980 paper, the philosopher John R. Searle sketched out The Chinese Room Problem. Searle’s scenario is, of course, designed to be analogous to how an operating AI program works, and is thus supposedly a disproof of the claim that a computer operating a similar program could be said to “understand” Chinese or any other language—or indeed, anything at all.”

“The most common rebuttals to the Chinese Room thought experiment invoke, in some way, the “systems reply”: although the man in the room does not understand Chinese, the whole system—the combination of the man, the instructions, and the room—indeed does understand Chinese. Searle’s response to this argument—that the “systems reply simply begs the question by insisting without argument that the system must understand Chinese”—is surely correct.”

“But Searle himself, as AI enthusiast Ray Kurzweil put it in his 2005 book, The Singularity is Near, similarly just declares “ipso facto that [the room] isn’t conscious and that his conclusion is obvious.” Kurzweil is also correct, for the truth is somewhere in between: we cannot be sure that the system does or does not understand Chinese or possess consciousness.”

This seems to me to be an example of hyper-credulity on the part of those promoting a systems-have-consciousness response. I also believe that this credulity is driven by strict materialism.

“One of the most befuddling sections of [Searle's] 1980 paper is this: ‘“OK, but could a digital computer think?” If by “digital computer” we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.’ (Searle)”

“More so even than the casual assertion that people are computer programs, this section of Searle’s paper is surprising in its contradiction of his own claim that computers cannot think. On Searle’s account, then, can computers think or not? The answer reveals just how confused is the common understanding of computer systems.”

But why is there confusion? I’d suggest it comes from the insistence of materialists who claim that the mind/brain is reducible to a computer.

“As explained above, it is correct to explain computers in terms of separable layers, since that is how they are designed. Physical systems, on the other hand, are not designed at all. They exist prior to human intent. . . We rely on hierarchies to explain physical systems, but we actually engineer hierarchies into computers.” (my emphasis)

Mr. Schulman leaves unexplained how it is that physical systems are not designed and yet exhibit design. Computers are designed, right? Brains are, if anything, more complicated than computers, right? So much so that philosophers and scientists don’t even agree on what are the qualitative, and what are the quantitative, differences. Somehow out of that argument, the strict materialist finds room to claim that brains are not designed. That just seems like kid table stuff to me.

“Every indication is that, rather than a neatly separable hierarchy like a computer, the mind is a tangled hierarchy of organization and causation. Changes in the mind cause changes in the brain, and vice versa. To successfully replicate the brain in order to simulate the mind, it will be necessary to replicate every level of the brain that affects and is affected by the mind.”

I find that this reasoning can only be supported in strictly materialist terms. Only a strict materialist would assert that replicating the brain will simulate a mind. Certain aspects of the mind, which may in fact be essential not only to an experience of consciousness but also to engendering what it means to be a “self,” are not merely coded in the brain, awaiting the necessary technology to be replicated. If that is so, then even a complete replication of the brain will not admit meaning; and without meaning, whence personhood?

Also, what could it possibly mean that “changes in the mind cause changes in the brain and vice versa” if a mind-brain unit is merely a computer? From a computer design standpoint, such mutual, innovative, meaningful, creative change is pure nonsense.

“Intriguingly, some involved in the AI project have begun to theorize about replicating the mind not on digital computers but on some yet-to-be-invented machines. As Ray Kurzweil wrote in “The Singularity is Near”: ‘Computers do not have to use only zero and one…. The nature of computing is not limited to manipulating logical symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities.’ In principle, Kurzweil is correct: we have as yet no positive proof that his vision is impossible. But it must be acknowledged that the project he describes is entirely different from the original task of strong AI to replicate the mind on a digital computer.” (my emphasis)

Is Mr. Kurzweil trying to release us from the theoretical constraints of the Turing Machine? Am I being unfair in assuming that Kurzweil is committed to the brain as merely a sum of biological (read material) processes? The new direction may be computers that are not digital in the traditional sense, but how such new computers could instantiate the mind/brain is simply a check written for some future date.

“If we achieve artificial intelligence without really understanding anything about intelligence itself—without separating it into layers, decomposing it into modules and subsystems—then we will have no idea how to control it.”

Furthermore, if intelligence has an attribute not decomposable into modules and subsystems, and we ignore that possibility, then we will not know what we have actually created; whatever it is, it won’t be AI.

Can intelligent design advocates inform the state of affairs in AI from a solid theoretical basis? An objective reading of articles like this suggests our voice needs to be heard if only to add a measure of clarity to the discussion. John Searle, in “The Rediscovery of the Mind,” writes:

“What we find in the history of materialism is a recurring tension between the urge to give an account of reality that leaves out any reference to the special features of the mental, such as consciousness and subjectivity, and at the same time account for our “intuitions” about the mind. It is, of course, impossible to do these two things. So there are a series of attempts, almost neurotic in character, to cover over the fact that some crucial element about mental states is being left out. And when it is pointed out that some obvious truth is being denied by the materialist philosophy, the upholders of this view almost invariably resort to certain rhetorical strategies designed to show that materialism must be right, and that the philosopher who objects to materialism must be endorsing some version of dualism, mysticism, mysteriousness, or general anti-scientific bias.”(my emphasis)

It is such behavior that should get you sent to the kid table. Dodgeball, anyone?


12 Responses to AI, Materialist Dodgeball and a Place at the Table

  1. Nice review. One of the things that defeats the materialist agenda is the existence of a few rare humans who can remember everything in their lives, including their states of mind. There are not enough neurons and synapses in the brain to account for this amazing capacity. Not all of those who have this capability suffer from autism, by the way. Some lead rather normal lives.

    Another aspect of the mind that, in my opinion, escapes a materialist explanation is short term memory. We have the ability to instantly record an almost infinite number of possible event sequences (consisting of about seven items at a time) for a short period. This would require that every memory neuron is connected to every other in order to establish a linked list of nodes for every possible sequence. This is not observed in the brain.

    Note that I am not saying that short term memory capacity is infinite, only that the number of possible distinct sequence combinations that we can record in STM is, for all intents and purposes, infinite.

    There is no doubt in my mind that the eventual construction of a true general purpose AI (e.g., one that understands a natural language as well as any human) will come from the non-materialist Christian world. The idea of a conscious computer is a pipe dream, however. Intelligence does not need consciousness. But then again, I know some materialists who believe that everything is already conscious to one degree or another, including rocks and thermostats. Go figure.

  2. “Why Minds Are Not Like Computers”

    Mr. Schulman begins with a clear discussion of what a computer is, i.e. a performer of algorithms.

    Just a quick comment here -

    It’s best to think of the mind as an algorithm. A computer is in fact an extremely simple device.
    There are a handful of formalisms intended to convey the essence of what a computer is. All of them are very simple. For example, an Unlimited Register Machine is a device that only understands 3 instructions, but it can execute any conceivable algorithm in existence.

    All the complexity is in the algorithm. No AI researcher thinks the mind is like a computer. They think it’s like an algorithm.
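
    To make that concrete, here is a minimal sketch of such a device (a Minsky-style counter machine rather than the URM proper, but in the same spirit: a trivially simple machine whose complexity lives entirely in the program handed to it):

        def run(program, registers):
            # Interpret a Minsky-style counter machine with a tiny instruction set:
            #   ("inc", r, nxt)       add 1 to register r, go to instruction nxt
            #   ("decjz", r, nxt, z)  if register r > 0, subtract 1 and go to nxt;
            #                         otherwise go to instruction z
            #   ("halt",)             stop
            pc = 0
            while program[pc][0] != "halt":
                op = program[pc]
                if op[0] == "inc":
                    registers[op[1]] += 1
                    pc = op[2]
                elif registers[op[1]] > 0:
                    registers[op[1]] -= 1
                    pc = op[2]
                else:
                    pc = op[3]
            return registers

        # A program that adds register 1 into register 0.
        add = [
            ("decjz", 1, 1, 2),  # 0: if r1 > 0, decrement it and go to 1; else halt
            ("inc", 0, 0),       # 1: increment r0, loop back to 0
            ("halt",),           # 2: done
        ]
        print(run(add, {0: 3, 1: 4}))  # {0: 7, 1: 0}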

  3. an algorithm’s output for a given input will be the same every time it is executed

    Someone who is unpredictable is essentially insane.

    An input for a human would be the things in the human’s environment, what the human perceives via his sensory capability. Imagine that, in the exact same environmental conditions, the human does unpredictable things. So one moment he likes chocolate cake, and the next moment he hates it.

    People have likes and dislikes. They have characteristic ways of expressing themselves.

    Animals are certainly predictable in principle.

    Unpredictability is the same as randomness.

    The question is whether the mind is a complex physical object. If it is, it can be characterized as a program.

    At the lowest level it’s all neurons and electrical impulses and chemical reactions. What does a neuron individually know? It’s the collective behavior of these things that makes a human.

    A complex physical object with a complex, changing internal state can have complex, seemingly unpredictable behavior. But it’s still predictable in principle.

  4. The author talks a lot about “black boxes” and how essential they are to computing. And he says it just means something that certain individuals don’t know how it’s working, but can still make use of it.

    That may be the correct definition.
    However, when I’ve used the term in this forum, I meant it as a synonym for nondeterministic – something that no one could even potentially know how it operates because it does not operate according to any sort of program.

  6. I haven’t read the whole article, but if he implies that if something is algorithmic or deterministic then it cannot be interactive and extremely complex and difficult to predict, he is incorrect (obviously).

  7. It’s possible that the equivalent of a human brain could not be fully implemented in metal – that the physical and chemical properties of the brain are essential to how the brain functions. But chemicals in general are not outside the scope of the computational paradigm. The computer program is the most rigorous conception of a description in existence. If a program cannot conceivably describe something, then that thing cannot be described at all.

    But on the subject of computational abstraction, medical people and biologists abstract the functions of other organs like stomachs or lungs to a series of inputs and outputs. A doctor doesn’t understand the function of some bodily organ down to the level of atoms. Maybe we should talk about strong biology and weak biology.

  8. JT, not only do doctors not understand bodily organs down to atoms, they don’t understand or accept the existence of the energy field (that resides not in any one organ of the body but is pervasive throughout the body) and how it interacts with the organs and systems of the body.

  9. So in order to come closer to mimicking human capabilities, AI researchers would have to find a way to detect and analyze the characteristics of this energy field to understand what role it plays in the human body.

    Again, they have to admit first that such an energy field exists.

  10. JT (2), thank you for restating the materialist view. You are correct in your assertion that [many] AI researchers consider the mind to be like an algorithm while the brain is more like the hardware; this was alluded to in the complete article.
    The problem arises when we note from the article, “an algorithm’s output for a given input will be the same every time it is executed.” JT moves forward logically and includes the materialist notion that we are nothing more than algorithms (minds) on hardware (brains) and ends up with the comment that, “Someone who is unpredictable is essentially insane.”

    His statement does follow logically in this sense:
    All algorithms produce ultimately predictable results.
    Our working minds are algorithms.
    Predictable results evidence working minds.
    Hence, unpredictability evidences insanity.

    G.K. Chesterton comes to nearly the exact opposite conclusion:

    If the madman could for an instant become careless, he would become sane. Every one who has had the misfortune to talk with people in the heart or on the edge of mental disorder, knows that their most sinister quality is a horrible clarity of detail; a connecting of one thing with another in a map more elaborate than a maze. If you argue with a madman, it is extremely probable that you will get the worst of it; for in many ways his mind moves all the quicker for not being delayed by the things that go with good judgment.
    He is not hampered by a sense of humour or by charity, or by the dumb certainties of experience. He is the more logical for losing certain sane affections. Indeed, the common phrase for insanity is in this respect a misleading one. The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.

    Take first the more obvious case of materialism. As an explanation of the world, materialism has a sort of insane simplicity. It has just the quality of the madman’s argument; we have at once the sense of it covering everything and the sense of it leaving everything out. Contemplate some able and sincere materialist, as, for instance, Mr. McCabe, and you will have exactly this unique sensation. He understands everything, and everything does not seem worth understanding.
    His cosmos may be complete in every rivet and cog-wheel, but still his cosmos is smaller than our world. Somehow his scheme, like the lucid scheme of the madman, seems unconscious of the alien energies and the large indifference of the earth; it is not thinking of the real things of the earth, of fighting peoples or proud mothers, or first love or fear upon the sea. The earth is so very large, and the cosmos is so very small. The cosmos is about the smallest hole that a man can hide his head in. (Orthodoxy, Ch. 2)

    For some reason, call it intuition, I find Chesterton’s statements to be more broad, resonant, compelling and true.

    JT (6), I take it that you are saying that an algorithm’s determinacy would not abrogate its ability to be interactive, complex, and unpredictable. This, however, is exactly what is currently up for grabs according to the article. Algorithms may be extremely complex, so complex that they may be regarded as unpredictable, but is that unpredictability real in any sense, or is it just a lack of sophistication on the part of the human participant? Algorithms may be extremely complex, so complex that they may be regarded as interactive, but is that really interactivity or just extremely complete and subtle reactivity?
    The answers to these questions may lead us to discover that human minds are not explainable, even in theory, by materialist and/or functionalist theory. Or, design theory may lead us to reject strict materialism and answer those questions for us.
    Orasmus (9), I think you’ll find that AI already mimics human capabilities with little help from what is going on “inside.” But is mimicry intelligence? A heliotrope “knows” to follow the sun, but is that intelligence? That is the whole point; is mimicry (or reactivity, or logic gates, or strict stimulus/response) everything, even for humans?

  11. About predictability of algorithms…

    Although I am not an advocate for the materialist response, if I were to try to imagine the best available response to the issue of predictability of algorithms, I would think it would be that the mind is not a static algorithm but rather a dynamically changing, self-modifying algorithm. So if you give it the same inputs again, it might not respond the same because the algorithm has been modified.
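
    A minimal sketch of what that response might look like (my illustration; it is state-dependent rather than literally rewriting its own code, but the effect is the one described: the same input, handled deterministically, need not produce the same output twice):

        class AdaptiveResponder:
            # A toy "self-modifying" algorithm: its internal state changes as it
            # runs, so the mapping from input to output depends on its history.
            def __init__(self):
                self.seen = {}  # state accumulated from past inputs

            def respond(self, stimulus):
                count = self.seen.get(stimulus, 0)
                self.seen[stimulus] = count + 1
                return f"{stimulus!r} (encountered {count} time(s) before)"

        agent = AdaptiveResponder()
        print(agent.respond("chocolate cake"))  # first exposure
        print(agent.respond("chocolate cake"))  # same input, different output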

    [Nevertheless, I believe the materialist's position of faith is doomed. Materialism is insufficient.]

  12. Regarding the larger question of the potential for contribution from an ID perspective…

    As a brain storming speculation, I think it would be interesting to explore whether there is any similarity between two distinct issues.

    On the one hand, there is the severe problem for materialism of the origin of symbolic processing in cells, which is a necessary step for the origin of life as we know it. On the other hand, there is the issue of the origin of symbolic processing within thinking human minds. What can we learn from the former that might be instructive for the latter?

    For the origin in cells of processing of symbolic, coded information, materialism is bankrupt. Mindless matter+energy could exist indefinitely in complete fulfillment of the laws of physics and chemistry without there ever existing symbolic, coded information. Furthermore, symbols cannot exist apart from the existence of a corresponding code. For the cell, this requires implemented decoding machinery to translate between symbols and what they represent. But that translation function is also useless without symbolic information to translate. And that symbolic information will not exist apart from something being encoded, which requires translation in the other direction (and which in turn is pointless without the other parts of the information system).

    Materialism has no identifiable means to break clear of the interdependency of symbolic information processing in a way that a mindless, purposeless material process could conceivably implement. The only known source for such a system is intelligence, which can imagine, plan, and intentionally pursue distant goals with future benefits.

    Do we have any reason to think materialism will do better with the *origin* of mental symbolic processing? Maybe it can (I make no claim), but if so that is not obvious to me.

    For example, supposing that the mind is an algorithm, and supposing we grant that it didn’t just pop into existence whole, then we must suppose that the algorithm developed progressively.

    Which parts of the algorithm for processing symbolic information developed first? For what purpose or function (given that the remainder of the algorithm that would utilize the first portion does not yet exist)? And so on.