
The freedom/mind issue surfaces again


First, a happy Thanksgiving.

Then, while digesting turkey etc., here is something to ponder.

One of the underlying issues surrounding the debates over the design inference is the question of responsible, rational freedom as a key facet of intelligent action, as opposed to blind chance and/or mechanical necessity. It has surfaced again, e.g. in the WD400 thread.

Some time back, this is part of how I posed the issue, emphasising the difference between self-aware responsible freedom and blindly mechanical causal chains used in computing:

[Figure: self_aware_or_not]

Even if deluded about circumstances, a self-aware being is just that: self-evidently, incorrigibly self-aware. And a key facet of that self-awareness is responsible, rational freedom. Without it, we cannot choose to follow and accept a rational case; we would just be mechanically grinding out our programming and/or hard wiring.

Like, say, a full adder circuit:

[Figure: 1-bit full adder circuit]

Wire it right and designate the correct voltages as 1 and 0, and the outputs will add one bit with carry in and carry out. Indeed, it will do so more consistently and correctly than we do.

Mis-wire it, and it won’t, just as it won’t if the voltage-state assignments are wrong. But the circuits neither know nor care that they are performing arithmetic; they simply respond to inputs per the mechanical behaviour of the given circuits.
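The same logic can be written out in a few lines of software. Here is a minimal sketch in Python (the function name and the truth-table loop are mine, purely for illustration), assuming the usual gate-level definition of a full adder:

```python
# Minimal sketch of a 1-bit full adder, assuming the standard gate-level
# definition: two XORs for the sum, an AND/OR network for the carry.
def full_adder(a, b, carry_in):
    partial = a ^ b                              # first XOR gate
    s = partial ^ carry_in                       # second XOR gate: the sum bit
    carry_out = (a & b) | (carry_in & partial)   # AND/OR carry network
    return s, carry_out

# Exercise the full truth table; the "circuit" blindly maps inputs to outputs.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```

Change one operator (the software equivalent of mis-wiring) and the outputs go wrong, but nothing in the function is any the wiser.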

That is the context of my comment at 79 in the thread:

Z, 73:

mohammadnursyamsu: All current programs on computers work in a forced way, there is no freedom in it, the flexibility does not increase the freedom one bit.

[Z:] All you have done is introduce yet another term, “freedom”, which is not well-defined in this context.

Actually, not.

Absent responsible, rational freedom — exactly what a priori evolutionary materialist scientism cannot account for — you could not actually compose comment 73 above.

In short, freedom is always there once the mind is brought to bear, and without it we cannot be rationally creative.

And per observation, computation is a blind, mechanical cause-effect process imposed on suitably organised substrates by mind. In fact, a fair summary of decision-node based processing is that coded algorithms, reduced to machine code, act on suitably coded inputs and stored data by means of a carefully designed and developed physical machine (troubleshooting in a multi-fault environment required), to generate desired outputs. At least, once debugging is sufficiently complete. (Which is itself an extremely complex, highly intuitive, non-algorithmic procedure critically dependent on creative, responsible, rational freedom. [Where, this crucial aspect tends to get overlooked in discussions of finished product programs and processing.])

There really is a wizard behind the curtain.

Freedom, responsible rational freedom, is not to be dismissed as a vague, unnecessary and suspect addition to the discussion; it is the basis on which we can think at all, and ground and accept conclusions on their merits, instead of being glorified full adder circuits.

Where, of course, inserting decision nodes amounts to this: set up some operation, which throws an intermediate result, a test condition. In turn, that feeds a flag bit in a flag register. On one alternative, go to chain A of onward instructions; on the other, go to chain B. And this can be set up as a loop.
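A rough sketch of that pattern in Python (the countdown task and the chain A/chain B labels are illustrative assumptions, not from the original):

```python
# Sketch of a decision node set up as a loop: an operation throws an
# intermediate result, the result sets a flag, and the flag selects
# which chain of onward instructions runs next.
n = 3
while True:
    n -= 1                # the operation, yielding an intermediate result
    zero_flag = (n == 0)  # the test condition feeds a "flag bit"
    if zero_flag:         # branch on the flag ...
        print("chain B: exit the loop")
        break
    print("chain A: keep going, n =", n)  # ... then loop back
```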

First, the classic 6800 MPU as an example:

[Figure: MC6800 processor diagram]

Let me add [Nov 28] a more elaborate diagram of a generalised microprocessor and its peripheral components, noting that an adder is a key component of an Arithmetic and Logic Unit (ALU), laying out the mechanisms and machinery that, properly organised, will execute algorithms:

[Figure: mpu_model (generalised microprocessor and peripherals)]
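To make the point concrete, here is a toy sketch of the fetch-decode-execute cycle such machinery implements. The four-opcode instruction set is invented for illustration (it is not the 6800's); note that the only "decision" in it is a branch taken mechanically on a flag bit:

```python
# Toy register machine: an accumulator, a program counter, and a zero flag.
# The opcodes (LOAD, DEC, BNZ, HALT) are invented for illustration.
program = [
    ("LOAD", 3),    # acc <- 3
    ("DEC", None),  # acc <- acc - 1; the ALU result sets the zero flag
    ("BNZ", 1),     # if the zero flag is clear, branch back to address 1
    ("HALT", None),
]

acc, pc, zero_flag = 0, 0, False
while True:
    op, arg = program[pc]   # fetch ...
    pc += 1                 # ... and advance the program counter
    if op == "LOAD":
        acc = arg
    elif op == "DEC":
        acc -= 1
        zero_flag = (acc == 0)  # flag register updated by the ALU
    elif op == "BNZ":
        if not zero_flag:
            pc = arg            # the "decision": a mechanical branch
    elif op == "HALT":
        break

print("final accumulator:", acc)  # prints 0
```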

Next, the structured programming patterns that can implement any computing task:

[Figure: the classic programming structures, which are able to carry out any algorithmic procedure]
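And a short Python sketch of those three structures, sequence, selection and iteration, which per the classic structured programming theorem suffice to express any algorithm (the summing task is an arbitrary illustrative choice):

```python
# Sequence, selection (if/else) and iteration (while) composed to sum
# the magnitudes of a list of numbers (an arbitrary illustrative task).
numbers = [4, -2, 7, 0, -5]

total = 0                   # sequence: one step after another
i = 0
while i < len(numbers):     # iteration: a loop with a test condition
    n = numbers[i]
    if n >= 0:              # selection: a two-way decision node
        total += n
    else:
        total -= n          # negate to add the magnitude
    i += 1

print("sum of magnitudes:", total)  # prints 18
```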

It should be clear that no actual decisions are being made; only pre-programmed sequences of mechanical steps are taken, based on the designer’s prior intent. (Of course, one problem is that a computer will do exactly what it is programmed to do, whether or not it makes sense.)

As a related point, trying to derive rational, contemplative, self-aware mindedness from computation is like trying to get North by heading due West.

Samuel Johnson, reportedly responding to the enthusiasm for mechanistic thinking in his day, is apt: “All theory is against the freedom of the will; all experience for it.” (Nor does this materially change if we inject chance processes, as such noise is no closer to being responsible and rational.)

If we are wise, we will go with the experience. END

Comments
Zachriel: Neural networks, on the other hand, are not frozen, but learn from their interaction with the environment.

Neural networks are intelligently designed.

Zachriel: Computers can find novel solutions to problems, for instance, in chess. It’s original, complex, and within the world of chess, functional.

Computers find only what they are programmed to find.

Virgil Cain
December 1, 2015 at 08:12 AM PDT
gpuccio: Algorithms are “intelligent” in the sense that a conscious agent has designed them intelligently. They are, in a sense, “frozen” intelligence.

Neural networks, on the other hand, are not frozen, but learn from their interaction with the environment.

gpuccio: For example, it cannot generate new original complex functional information.

Computers can find novel solutions to problems, for instance, in chess. It’s original, complex, and within the world of chess, functional.

Zachriel
December 1, 2015 at 07:22 AM PDT
Mapou: I have no problems at all with the physical and algorithmic part of conscious processes. I am well aware of its importance. That is the easy problem, as described by Chalmers. Easy, but not too easy. And important.

But I do have problems with this: “Intelligence is a physical thing.” And this: “All one needs to do in order to create machine intelligence is to emulate the neuronal circuits in the brain. The machine will not have conscious awareness of anything but it will act as if it did. Why? Because all the knowledge and the circuits are just physical stuff that can be emulated.”

Why? Because the consciousness which perceives the forms “prepared” by the brain is not a passive component. It perceives, it understands, and it reacts. If you take the consciousness away, the process is no more the same, and the results are no more the same.

Algorithms are “intelligent” in the sense that a conscious agent has designed them intelligently. They are, in a sense, “frozen” intelligence. Now, frozen intelligence can do many things, but it cannot do everything that “conscious” intelligence can do. For example, it cannot generate new original complex functional information. Why? Because the conscious recognition of meaning, and reaction to that recognition in the form of original (free) output to the brain, are IMO fundamental components of the process.

You say: “It’s coming. Wait for it.” I will wait. But I am not holding my breath.

gpuccio
December 1, 2015 at 02:28 AM PDT
Mung: We’re not talking about people, we’re talking about a computer and whether the computer makes a choice to play or not play a game of chess and if so, how that choice is made.

No. We’re talking about whether computers make choices. You pointed to a case where a computer may have no choice. The answer is that people are often limited in their choices also. Nonetheless, there’s no reason a computer may not be able to choose to play or not play. To be relevant, the choice has to be based on some sort of external constraint, as is the case with humans, perhaps as a gambling decision, or due to limitations of resources.

Zachriel
November 30, 2015 at 02:29 PM PDT
gpuccio @89:
You say: “Attention, conscious or not, moves from one cortical representation to another. I can emulate this in a computer program.” No, you can’t. For the simple reason that attention means what our I is aware of. The computer is not aware of anything, so it has no attention. Obviously, it processes different things at different times. If you call that “attention”, then your statement becomes true. But then, it’s you that are conflating two completely different meanings for “attention”:
Not at all. I agree that a computer cannot be conscious. What I am saying is that conscious awareness does not happen in a vacuum. There is a physical part in the phenomenon that you seem to ignore. To be aware of a fruit on the table requires many physical things. The fruit has to exist and its representation in the visual cortex has to exist. Your spirit is simply aware of the representation. In order for that to happen, the physical neuronal circuits in the cortex that comprise the representation must be activated. This is the physical part of the attention phenomenon. I am saying that all of these physical circuits and activations that are related to the recognition and representation of an object in the brain are computable. Intelligence is a physical thing.

Another way to put it is this. Awareness is a yin-yang phenomenon, i.e., it requires a subject and an object. The subject is the spirit that is in the brain and the object is an activated physical representation in the brain. All one needs to do in order to create machine intelligence is to emulate the neuronal circuits in the brain. The machine will not have conscious awareness of anything but it will act as if it did. Why? Because all the knowledge and the circuits are just physical stuff that can be emulated. The machines will have goals and will try to accomplish their goals intelligently. They will behave according to well-known psychological principles of classical and operant conditioning.

In conclusion, I again predict the arrival, in the not too distant future, of machines that are uncannily and even frighteningly intelligent. And I mean it in the same sense that humans are intelligent with the exception that the machines will not be conscious. It’s coming. Wait for it.

Mapou
November 30, 2015 at 09:45 AM PDT
Mung: how does the computer choose to not play at all, or to stop playing once begun?

Zachriel: People may have a limited range of choices also.

So? We’re not talking about people, we’re talking about a computer and whether the computer makes a choice to play or not play a game of chess and if so, how that choice is made. If you have nothing we’ll all understand, really.

The computer cannot choose whether or not to play a game of chess. So trying to answer how it does something that it cannot do is futile. But go ahead and try.

Mung
November 30, 2015 at 08:46 AM PDT
Mapou: We can probably stop it here. However, I will try to clarify once again my thought.

You say: “Whether or not you are conscious of it does not take away from its power as a representation. When you are no longer consciously thinking about something, it does not mean that the physical representation of the thing in your brain disappears.”

No. One thing is the representation in my brain, which can be equivalent to the representation in the computer, or in any other physical media (a photograph, etc.). Another thing is the conscious event of my becoming aware of that representation subjectively. That’s what the hard problem is about. Now, not only do I become subjectively aware, I also subjectively react to that awareness, and not only to the objective physical representation in my brain. That simply cannot happen in the computer, because there is no subjective awareness there. Only objective processes.

You say: “Attention, conscious or not, moves from one cortical representation to another. I can emulate this in a computer program.”

No, you can’t. For the simple reason that attention means what our I is aware of. The computer is not aware of anything, so it has no attention. Obviously, it processes different things at different times. If you call that “attention”, then your statement becomes true. But then, it’s you that are conflating two completely different meanings for “attention”:

1) What the subject is aware of at some moment
2) What an object (the CPU) is processing at some moment.

OK, I have no intention to try to convince you. If you understand my position, and still don’t agree, we can really stop it here. But if you want to clarify further points, for the sake of constructive discussion, I am happy with that.

gpuccio
November 29, 2015 at 10:33 PM PDT
gpuccio, In my view, you continue to conflate consciousness with intelligence. It's frustrating because the words become ambiguous or meaningless. You write:
I build a conscious representation of a cat on a table.
No. You just build a representation. Whether or not you are conscious of it does not take away from its power as a representation. When you are no longer consciously thinking about something, it does not mean that the physical representation of the thing in your brain disappears. Attention, conscious or not, moves from one cortical representation to another. I can emulate this in a computer program.
A complex computer receives the same phrase in input. Being complex, maybe it is well programmed to react to the input so that an observer can believe that the computer is understanding the meaning, in the sense that it understands that a cat is a cat, and that it is on the table. But that is simply not true, because the computer has no idea of what a cat is, and it has no idea that the cat is on the table, and it does not know what a table is, and it does not know what “on” means, and so on.
Maybe current intelligent programs do not have these abilities but I see no reason to suppose that future programs cannot know these things. They are all physical cause-effect phenomena, information that machines are exquisitely designed to process.

Mapou
November 29, 2015 at 08:58 PM PDT
Mapou: Briefly, about understanding and meaning: I read the phrase: “the cat is on the table”. A very simple statement. Being a conscious intelligent being, and as my brain has the computing power to decrypt the language, I understand what the phrase means: I build a conscious representation of a cat on a table.

A complex computer receives the same phrase in input. Being complex, maybe it is well programmed to react to the input so that an observer can believe that the computer is understanding the meaning, in the sense that it understands that a cat is a cat, and that it is on the table. But that is simply not true, because the computer has no idea of what a cat is, and it has no idea that the cat is on the table, and it does not know what a table is, and it does not know what “on” means, and so on.

This is not only philosophy: as computers are not conscious and have no understanding, we have important consequences. The most important for ID is: computers cannot generate new original complex functional information, including complex original language.

gpuccio
November 29, 2015 at 07:34 PM PDT
Dionisio, We know that intelligence is always at the service of motivation. So where will intelligent machines get their motivation? From us, that’s where. It’s all about classical and operant conditioning, stuff that we learned in psychology 101. If we condition them to behave like angels, that is what they will do. If we condition them to behave like assholes, then we will only have ourselves to blame when they kick our stupid arses to oblivion.

Will intelligent machines love or hate in the sense that humans love and hate? Of course not. But they will surely behave as if they did. It’s all in the conditioning. And, as I said earlier, many people will swear that robots are conscious. Only the most careful questioning will reveal otherwise. For example, they will have no way of determining whether a pattern that they have never seen before is beautiful or ugly. They will know something is beautiful to us only because we will tell them what is beautiful and what is not. They will have no sense of beauty of their own. Why? It is because beauty is not a property of physical matter. It is a spiritual concept. One man’s opinion, of course.

Mapou
November 29, 2015 at 06:48 PM PDT
Mapou @82 [addendum to comment @84]
Only if you mean ‘consciously wanting’. A machine can certainly have appetitive and aversive behaviors just like humans and animals. You are not hungry because your spirit is hungry. You are hungry because your body is hungry. This is related to the field of reinforcement learning. It’s all physical, cause-effect stuff. There is no reason that it cannot be emulated in a machine.
If gpuccio’s statement “5) It does not want anything.” refers mainly to ‘conscious’ events, then most of the above quoted explanation (except the first sentence) seems off topic, doesn’t it?

Perhaps most physical causes (thirst, food craving?) could be simulated, assuming that you know all the details required to have the complete set of conditions with their associated actions for the criteria decision table. Otherwise, you could only simulate it partially, inaccurately.

However, conscious wanting is a different kind of issue. Can a “strong” AI robot love an unlovable person? Why? How? Can a “strong” AI robot love someone who hates the robot? Why? How?

Dionisio
November 29, 2015 at 05:17 PM PDT
Mapou @82
Only if you mean ‘consciously wanting’. A machine can certainly have appetitive and aversive behaviors just like humans and animals. You are not hungry because your spirit is hungry. You are hungry because your body is hungry. This is related to the field of reinforcement learning. It’s all physical, cause-effect stuff. There is no reason that it cannot be emulated in a machine.
If gpuccio’s statement “5) It does not want anything.” refers mainly to ‘conscious’ events, then most of the above quoted explanation (except the first sentence) seems off topic, doesn’t it?

Dionisio
November 29, 2015 at 01:26 PM PDT
Mapou: In friendship, I agree to disagree. But I do disagree.

gpuccio
November 29, 2015 at 12:56 PM PDT
gpuccio:
2) It does not understand any meaning. That’s very important. Maybe the most important point. My meaning: “meaning” can only be defined as a cognitive subjective experience. Therefore, the computer, having no conscious experiences, cannot understand any meaning at all. Alternative meanings: it is perfectly possible to “freeze” into a complex software information which derives from a conscious understanding of some meaning, so that the software can operationally compute things as though it understood that meaning. But, in reality, there is no understanding at all.
I fully disagree with this. Meaning and understanding come from having an accurate model of one’s environment from which one can make useful predictions. This is certainly computable. Whether or not one is conscious of the model is irrelevant to its utility, IMO. Like I said, our future intelligent machines will have full understanding of their environments and of natural language and will act accordingly to accomplish the goals we give them. You will be amazed and many people will be deceived and will conflate their intelligence with consciousness. This is not unlike the way many of us already conflate the emotional behavior of animals with consciousness.
3) It does not learn.
I disagree for the reasons I gave above.
4) It does not feel anything. This is easy. Feeling is a subjective experience.
I see a difference between conscious feeling and unconscious sensing. The former is impossible without the latter, IMO. Intelligence only needs the latter. Consciousness needs both.
5) It does not want anything.
Only if you mean 'consciously wanting'. A machine can certainly have appetitive and aversive behaviors just like humans and animals. You are not hungry because your spirit is hungry. You are hungry because your body is hungry. This is related to the field of reinforcement learning. It's all physical, cause-effect stuff. There is no reason that it cannot be emulated in a machine.
6) It does not choose anything. This is more subtle, and connected to 5). I will not deal here with the problem of “free” choice. I will simply distinguish between conscious choices and non conscious algorithmic nodes. My meaning: a conscious choice is an output which proceeds from a conscious desire, in the form of what we consciously perceive as an act of “will”. Whether free or not free, we consciously feel that our choices are our choices, that they proceed from us. Otherwise, we do not call them “choices”. Alternative meanings: any algorithm, even a very simple one, can respond to a condition with some predefined process, according to some logical gate evaluation. That’s what Zachriel calls “choice”, if I understand well his thought. That’s what “choice” means in AI. It’s OK for me, but in no way is it the same thing as a conscious choice as previously defined. Moreover, while we can debate if a conscious choice can be free or not (IOWs, free will, if it exists, applies to conscious choices, or at least to some of them), the same cannot be said of algorithmic “choices”: they are certainly not “free” (even if we admit that free will exists), and at most they can incorporate some random element. OK, there is always compatibilism, but I suppose that everybody here probably knows what I think of it! :)
OK, I agree that true choice is impossible to a machine.

Mapou
November 29, 2015 at 11:38 AM PDT
Zachriel: OK, your position is clear enough. So I hope is mine.

gpuccio
November 29, 2015 at 10:17 AM PDT
Mung: ok, so no choice involved at all.

Peter: You always open King’s pawn when playing white.
Paul: Yeah.
Peter: So, no choice involved at all.
Paul: I choose to open King’s pawn when playing white.

Sally: You always have chocolate ice cream.
Sue: Yeah.
Sally: So, no choice involved at all.
Sue: I choose chocolate ice cream.

Mung: How does a computer choose willy-nilly? Does it toss a coin, for example?

That would be one way. https://www.youtube.com/watch?v=g6uFEDBbRI0

Mung: how does the computer choose to not play at all, or to stop playing once begun?

People may have a limited range of choices also.

Mung: I just asked my computer what it wanted and it chose to not answer.

You need an upgrade, obviously.

Zachriel
November 29, 2015 at 10:02 AM PDT
Zachriel: One day you may very well ask a computer what it wants and it will answer.

I just asked my computer what it wanted and it chose to not answer. Or perhaps it chose to answer but used a language I just did not understand. What do you think Zachriel? Which one is more likely?

Mung
November 29, 2015 at 09:49 AM PDT
Mung: how does the computer “choose” which opening to use?
Zachriel: Some computers always use the same opening.

LoL. ok, so no choice involved at all. Got it.

Mung: how does the computer “choose” which opening to use?
Zachriel: Some choose willy-nilly.

How does a computer choose willy-nilly? Does it toss a coin, for example? Zachriel, how does the computer choose to not play at all, or to stop playing once begun? Can it choose to not make an opening move at all?

Mung
November 29, 2015 at 09:46 AM PDT
gpuccio: Please, see my post #73.

Thought we responded.

gpuccio: 1) It is not conscious.

Agreed.

gpuccio: 2) It does not understand any meaning.

That depends on what it means to understand. It may not be conscious of its understanding.

gpuccio: 3) It does not learn.

Of course computers learn, especially artificial neural nets. They aren’t conscious of learning.

gpuccio: My meaning: if we consider “learning” as a cognitive recognition of new meaning, then it is obvious that a computer cannot learn anything.

Being able to extrapolate from experience to new situations is learning. Being conscious of learning is not a requirement of learning.

gpuccio: 4) It does not feel anything.

Computers can have sensory inputs, but are not conscious of them.

gpuccio: 6) It does not choose anything.

Being conscious of choosing is not a requirement of choosing.

gpuccio: 4) It does not feel anything. 5) It does not want anything.

One day you may very well ask a computer what it wants and it will answer.

Zachriel
November 29, 2015 at 09:30 AM PDT
A computer without a program for chess could never participate in a chess match. And everything that a computer does can be traced back to humans.

Virgil Cain
November 29, 2015 at 08:43 AM PDT
Zachriel: Please, see my post #73.

gpuccio
November 29, 2015 at 07:45 AM PDT
kairosfocus: Z, you snipped out of context. We asked for a transitive verb (if one can be provided).

We snipped out the transitive verbs in the hopes you were attempting to answer.

kairosfocus: Computers do not try out moves etc, they are down at the machine code level and register transfer level churning away.

Computers churn chess. Is that your answer? Or are you saying we can’t use transitive verbs with machines, as in steam drillers drill holes?

Mung: how does the computer “choose” which opening to use?

Some computers always use the same opening. Some choose willy-nilly. Some choose based on past results. Much like people do!

harry: When one uses an abacus to do math calculations, is the abacus doing the calculating? Of course not.

On the other hand, a calculator will calculate the square root of a number.

gpuccio: It does not learn.

Artificial neural nets learn.

gpuccio: It does not choose anything.

Computers choose, using the ordinary meaning of the term.

kairosfocus: The operative word, Z, is PLAY.

Well, most everyone uses the word “play” to refer to computers that (insert transitive verb) chess. Perhaps it is your use of the word that is in error. play: the conduct, course, or action of a game; a particular act or maneuver in a game; the moving of a piece in a board game (as chess). http://www.merriam-webster.com/dictionary/play

gpuccio: That’s what “choice” means in AI. It’s OK for me, but in no way it is the same thing as a conscious choice

Which is why we have the term “conscious choice”, a subset of all choices. This is just semantics, but it is very hard to understand someone saying “computers can’t play chess” or “computers don’t make decisions”, when they clearly do. They’re pretty good at playing chess, actually. They can recognize your mother in a crowd, too!

Zachriel
November 29, 2015 at 07:11 AM PDT
Mapou: I have deep respect for all those who seriously study AI. I am convinced that AI can shed a lot of light on what Chalmers calls the “easy” problem of consciousness. There is no doubt that the brain processes information for consciousness, and it does it algorithmically. We can certainly understand much about that from AI studies. What AI cannot solve is the “hard” problem of consciousness: why subjective experiences exist, and what they are.

Now, unfortunately the ambiguous use of words has “warranted”, in the imagination of people, a series of “analogies” which have really no objective support from facts. They make people assume that statements about the easy problem are really statements about the hard problem. But that is simply not true. I will try to clarify my previous statements about the computer in the light of this word ambiguity:

1) It is not conscious. My meaning: it has no subjective experiences and representations. IOWs, in no way is it an example of a solution to the hard problem. Alternative meanings: probably none, unless someone really thinks that the computer has subjective experiences, or that being conscious can be defined alternatively. But I understand that not even Zachriel has suggested something like that.

2) It does not understand any meaning. That’s very important. Maybe the most important point. My meaning: “meaning” can only be defined as a cognitive subjective experience. Therefore, the computer, having no conscious experiences, cannot understand any meaning at all. Alternative meanings: it is perfectly possible to “freeze” into a complex software information which derives from a conscious understanding of some meaning, so that the software can operationally compute things as though it understood that meaning. But, in reality, there is no understanding at all.

3) It does not learn. This is connected to 2). My meaning: if we consider “learning” as a cognitive recognition of new meaning, then it is obvious that a computer cannot learn anything. Alternative meanings: of course, a computer which has been programmed to accept new data from outer events, or from its interaction with them, can incorporate those new data into its computations, always according to the programming that it has received. Those new data can certainly bring new computational results, which can certainly be used in new computations, always according to pre-programmed instructions. However, there is no cognition in the process, therefore no “learning” in the cognitive sense I previously defined.

4) It does not feel anything. This is easy. Feeling is a subjective experience. My meaning: the usual meaning of feeling. Alternative meanings: I leave that to Zachriel!

5) It does not want anything. Easy again. Desire and purpose are rooted in feeling: we want what is felt as good or pleasurable. My meaning: to feel that some event or course is desirable for us, in any sense (cognitive, moral, or else). Which usually motivates action to make that event or course real. Alternative meanings: any course of action can be programmed in a software as a response to some condition. In that sense, we can say that the software “wants” to act in that way. But there is no feeling in the process.

6) It does not choose anything. This is more subtle, and connected to 5). I will not deal here with the problem of “free” choice. I will simply distinguish between conscious choices and non conscious algorithmic nodes. My meaning: a conscious choice is an output which proceeds from a conscious desire, in the form of what we consciously perceive as an act of “will”. Whether free or not free, we consciously feel that our choices are our choices, that they proceed from us. Otherwise, we do not call them “choices”. Alternative meanings: any algorithm, even a very simple one, can respond to a condition with some predefined process, according to some logical gate evaluation. That’s what Zachriel calls “choice”, if I understand well his thought. That’s what “choice” means in AI. It’s OK for me, but in no way is it the same thing as a conscious choice as previously defined. Moreover, while we can debate if a conscious choice can be free or not (IOWs, free will, if it exists, applies to conscious choices, or at least to some of them), the same cannot be said of algorithmic “choices”: they are certainly not “free” (even if we admit that free will exists), and at most they can incorporate some random element. OK, there is always compatibilism, but I suppose that everybody here probably knows what I think of it! :)

gpuccio
November 29, 2015 at 03:43 AM PDT
PPS: Yes, I have not tried to boil the above down to a short little sound bite of dismissive rhetoric. Sometimes, we need to actually read, ponder and think, if we are to go anywhere worth going intellectually.

kairosfocus
November 29, 2015 at 02:36 AM PDT
PS: It seems necessary to again call attention to the fatal self referential incoherence at the heart of evolutionary materialist scientism. Here, via Nancy Pearcey:
A major way to test a philosophy or worldview is to ask: Is it logically consistent? Internal contradictions are fatal to any worldview because contradictory statements are necessarily false. “This circle is square” is contradictory, so it has to be false. An especially damaging form of contradiction is self-referential absurdity — which means a theory sets up a definition of truth that it itself fails to meet. Therefore it refutes itself . . . .

An example of self-referential absurdity is a theory called evolutionary epistemology, a naturalistic approach that applies evolution to the process of knowing. The theory proposes that the human mind is a product of natural selection. The implication is that the ideas in our minds were selected for their survival value, not for their truth-value. But what if we apply that theory to itself? Then it, too, was selected for survival, not truth — which discredits its own claim to truth. Evolutionary epistemology commits suicide.

Astonishingly, many prominent thinkers have embraced the theory without detecting the logical contradiction. Philosopher John Gray writes, “If Darwin’s theory of natural selection is true,… the human mind serves evolutionary success, not truth.” What is the contradiction in that statement? Gray has essentially said, if Darwin’s theory is true, then it “serves evolutionary success, not truth.” In other words, if Darwin’s theory is true, then it is not true. Self-referential absurdity is akin to the well-known liar’s paradox: “This statement is a lie.” If the statement is true, then (as it says) it is not true, but a lie.

Another example comes from Francis Crick. In The Astonishing Hypothesis, he writes, “Our highly developed brains, after all, were not evolved under the pressure of discovering scientific truths but only to enable us to be clever enough to survive.” But that means Crick’s own theory is not a “scientific truth.” Applied to itself, the theory commits suicide.

Of course, the sheer pressure to survive is likely to produce some correct ideas. A zebra that thinks lions are friendly will not live long. But false ideas may be useful for survival. Evolutionists admit as much: Eric Baum says, “Sometimes you are more likely to survive and propagate if you believe a falsehood than if you believe the truth.” Steven Pinker writes, “Our brains were shaped for fitness, not for truth. Sometimes the truth is adaptive, but sometimes it is not.” The upshot is that survival is no guarantee of truth. If survival is the only standard, we can never know which ideas are true and which are adaptive but false.

To make the dilemma even more puzzling, evolutionists tell us that natural selection has produced all sorts of false concepts in the human mind. Many evolutionary materialists maintain that free will is an illusion, consciousness is an illusion, even our sense of self is an illusion — and that all these false ideas were selected for their survival value.
[--> that is, responsible, rational freedom is undermined. Cf here William Provine in his 1998 U Tenn Darwin Day keynote:
Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent . . . . The first 4 implications are so obvious to modern naturalistic evolutionists that I will spend little time defending them. Human free will, however, is another matter. Even evolutionists have trouble swallowing that implication. I will argue that humans are locally determined systems that make choices. They have, however, no free will [--> without responsible freedom, mind, reason and morality alike disintegrate into grand delusion, hence self-referential incoherence and self-refutation. But that does not make such fallacies any less effective in the hands of clever manipulators] . . . [1998 Darwin Day Keynote Address, U of Tenn -- and yes, that is significant i/l/o the Scopes Trial, 1925]
So how can we know whether the theory of evolution itself is one of those false ideas? The theory undercuts itself. A few thinkers, to their credit, recognize the problem. Literary critic Leon Wieseltier writes, “If reason is a product of natural selection, then how much confidence can we have in a rational argument for natural selection? … Evolutionary biology cannot invoke the power of reason even as it destroys it.” On a similar note, philosopher Thomas Nagel asks, “Is the [evolutionary] hypothesis really compatible with the continued confidence in reason as a source of knowledge?” His answer is no: “I have to be able to believe … that I follow the rules of logic because they are correct — not merely because I am biologically programmed to do so.” Hence, “insofar as the evolutionary hypothesis itself depends on reason, it would be self-undermining.” [ENV excerpt, Finding Truth (David C. Cook, 2015) by Nancy Pearcey.]
Onlookers, carefully observe the studious avoidance of facing this issue on the parts of advocates of evolutionary materialist scientism and their fellow travellers.

kairosfocus
November 29, 2015 at 02:34 AM PDT
Z, In passing back above, I pick up in 32:
kairosfocus: Deep Blue has no intention to engage in a sport for fun or profit [Z, 32:] No, but that’s not a requirement to play chess.
In this difference lies the whole problem. To play a game or a sport inherently involves intent, motivation (fun or profit), engagement, goal-orientation, purpose, genuine decision, and so forth, thus agency. But we live in a time where, thanks to the dominance of evolutionary materialist scientism, these, per the force of that ideological imposition, must be squeezed out, discredited as dubious or illusory or even as the delusional and demonic superstitions of Sagan and Lewontin.

And so, the indoctrinated are locked into a cramped, implicitly self-referential ideology that refuses to see its own absurdity, as responsible freedom is the premise of reason on acknowledging ground and accepting the following consequent as aligning with guidestar principles of logic and evidence. In trying to reduce an agent to a blind wetware neural network computational device with parallelism, looping and feedback, the very point at stake is squeezed out. So is the first fact of all: our self-perception of responsible rational freedom to understand, decide and act in a world that presents itself to us through our senses, awareness and understanding. Where, there is a difference between good sense and nonsense, sanity informed by wisdom and delusional, disintegrative insanity that is ever so wise in its own eyes and clever in its own self-deceiving conceits, headed for a march of folly and shipwreck.

This is not a clash of science vs superstition. Your side cannot even see the foundational Scientists’ credo that they were thinking God’s creative and sustaining providential thoughts after him, so living in a world of order and law that allowed them to examine cases, observe pattern, test consistency and with some degree of confidence summarise underlying law; albeit with some degree of provisionality and open-mindedness to correction. As in the rule of a ruler and architect of the world-system, an ordered reality, a cosmos not a chaos.

The operative word, Z, is PLAY. Something that agents do by free and intelligent choice, something that is not mechanical or passive or blindly deterministic and/or a matter of dice tossing chance, maybe with some loading. What you and your ilk are forced to do is to impose a materialistic procrustean bed, stretching or cutting everything to fit a cramped worldview that is at the outset self-referentially incoherent. And necessarily self-falsifying as a direct consequent. Never mind the August Magisterium all duly dressed in lab coats and putting on an impressive show in an oh so confident manner.

The key symptom is the constant bending, distortion and equivocation in use of words that must ever be stretched, squeezed, hammered, twisted, bent to fit what Rational Wiki so tellingly summed up, after the Coup: “Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method.” Sez who? Sez the Materialist Magisterium dressed up in their August Lab coats even as they sweep the self-referential incoherence and question begging under the carpet.

Instead, mind, self-aware mind exhibiting responsible rational freedom, is our first fact. The one through which we perceive the material world. And free choice is a characteristic act of such agency.
Where the power of mind over matter is readily seen in how mind creates functionally specific complex organisation and associated information beyond 500 - 1,000 bits, readily overwhelming the needle in haystack search challenge that confronts blind chance and mechanical necessity in a cosmos of 10^80 atoms, fast 1 - 10 eV valence shell interaction rates of about 10^12 - 14 acts/s and duration 10^17 s.

In short -- and this is exactly the point that has been a sticking point and last ditch bastion of materialism -- FSCO/I is a signature of intelligently directed configuration or design, a sign pointing to mind at work through decision, purpose, insight, creativity, knowledge and skill. (Hence the revolutionary nature of the design inference on FSCO/I, in whatever form. Horror of horrors, once triumphant materialists: designing mind is BAAAAAACK!)

Signal, not noise. Signature, not random ink spot splashed when the bottle fell. Signature that speaks to us in the digitally coded algorithms and linked clever organisation of cell based life and the fine tuned deeply isolated operating point of a cosmos set up so that it supports C-Chemistry, aqueous medium, cell based terrestrial planet life, right from the core laws, constants and parameters of the cosmos.

Unwelcome sign, signature and signal. Bring out the thumbscrews! Expel the heretic! Out and stalk him and his family down to the third and fourth degree of relationship! Slander, cruelly mock and scorn! (After all, it is only ignorant, stupid, insane or wicked fundy fanatics who want to subject us to Right Wing Theocratic Christofascist Tyranny who could dare object to Science facts, Facts FACTS! We’ll give them what they deserve! [Only, those caught up in this do not see how they are becoming what they so smugly project unto the despised other while refusing to objectively assess the foundational self-referential absurdity in evolutionary materialist scientism.])

But, e pur si muove. It still moves, undeniable, plain for those willing to see. Game over, materialists. Check . . . 3 moves to mate. KF

kairosfocus
November 29, 2015 at 01:38 AM PDT
@Zachriel: Of course gravity cannot choose in your sense of sorting present variables, but it can make an alternative future the present. That is some complicated maths, self-referential, where the law of gravity is entered as data into the law of gravity. Supposedly this mathematics shows that Newton’s gravity, treated in this way, will have an anticipatory aspect which equates to Einstein’s gravity. The same ‘aberration’ in the perihelion of Mercury is predicted with anticipatory Newtonian gravity as it is with Einstein’s gravity. So it means Newton’s theory is reinstated, and Einstein’s theory is reconfigured as an anticipatory aspect of Newtonian gravity. With the added benefit that while Einstein’s theory can only be applied with a steady frame of reference, the anticipatory Newtonian theory can be applied with an accelerating frame of reference. This above here is just sketchy, just to show broadly that theory can be made in terms of anticipation, which anticipation you still bizarrely equate to if-else logic, which it is nothing of the kind.

As far as computer randomness simulating choosing goes: in a computer game, obviously if what the monster in the game does is dependent on the random function, then this can give a credible experience of the monster choosing what to do. And no matter how sophisticated you make any if-else logic, once you know that everything the monster is going to do will be the exact same thing in the same situation, then the illusion of the monster choosing anything is lost. While with randomness, the player might have the illusion that the monster has emotions, that it is “courageous”, or “vicious” in deciding what it does. So there is the link to subjectivity again in regards to the agency of a decision, while there is no link to subjectivity at all in your if-else logic.

mohammadnursyamsu
November 28, 2015 at 07:01 PM PDT
gpuccio @ 66, Very well put! As for Zachriel, he appears to be just as gullible about the possibility of dumb, lifeless matter mindlessly and accidentally assembling itself into the metabolizing, self-replicating, digital information-based nanotechnology of life as he is gullible about the abilities of computers. I would say he doesn’t know what he doesn’t know, but I think he doesn’t want to know what he doesn’t know, because that might disturb his devout atheism.

harry
November 28, 2015 at 02:41 PM PDT
gpuccio:
It is not conscious. It does not understand any meaning. It does not learn. It does not feel anything. It does not want anything. It does not choose anything.
Although I disagree with Zachriel regarding the capabilities of current machines, I have a problem with the above. I do research in AI and I believe computers can do these things, but unconsciously. Computers can learn and can choose in the same way that animals learn and choose: according to preprogrammed instincts/motivations. Programs do exist that can rearrange themselves to reflect their environments. They are called learning machines. They do it according to precise rules, but then again, so does the human brain. Your spirit did not learn to see and recognize patterns; your brain did. The difference is that we can choose to override the recommendations of our own brain. Intelligence is always at the service of motivation.

Our future machines will act as if they do understand their environments and the words we speak to them. They will behave very intelligently. The reason is that those things are causal/physical phenomena that can be computed in a machine. Most of you will witness the arrival of these machines in your lifetimes. It will usher in the age of full unemployment. Wait for it.

Mapou
November 28, 2015 at 02:31 PM PDT
harry: Thank you for bringing the abacus into the discussion! Indeed, that has always been one of my favorite concepts. It is true that the results of an algorithmic computation are independent from the hardware which implements the computation itself. That makes the folly of strong AI theory absolutely self-evident.

Why? Let’s imagine that strong AI theory (intended as the idea that consciousness arises as a by-product of software complexity) may be true. Then, let’s say that we have a sophisticated computer which performs complex computations (parallel, loop-rich, or whatever) so that, at last, consciousness arises. Now, it must be true that if we perform the same computations, although more slowly, by a very big abacus system, that system should become conscious too! The whole idea is folly.

The truth is that any algorithmic computation is only the sum of very simple computations. The whole system can be made mechanical, but it remains the sum of simple events. So, if we really don’t think that computing 2 + 2 on an abacus generates consciousness (either if it is done by a person or automatically), why in the world should a long series of the same kind of events, in whatever order, become conscious? All the single events which take place in a computer are essentially of the kind of a 2 + 2 sum, or of simple logical gates. A computer, however big and complex, is just a big automatic abacus, nothing more.

It is not conscious. It does not understand any meaning. It does not learn. It does not feel anything. It does not want anything. It does not choose anything.

Whenever we use those words, as Zachriel usually does here, to describe what a computer does, we are only using analogies, and IMO very bad analogies. One can use words as one likes, but the underlying truth does not change.

gpuccio
November 28, 2015 at 02:12 PM PDT
