Uncommon Descent | Serving The Intelligent Design Community

Out-of-print early ID book now available as a .pdf


An early ID book (possibly the earliest), The Mystery of Life’s Origin by Charles Thaxton, Walter Bradley, and Roger Olsen (1984), with a foreword by Dean Kenyon, has been out of print for a while, I am told. But a .pdf can be downloaded here for now.

Information theory is a special branch of mathematics that has developed a way to measure information. In brief, the information content of a structure is the minimum number of instructions required to describe or specify it, whether that structure is a rock or a rocket ship, a pile of leaves or a living organism. The more complex a structure is, the more instructions are needed to describe it. —Charles Thaxton, biochemist
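Thaxton’s “minimum number of instructions” is essentially what algorithmic information theory calls Kolmogorov complexity. That quantity is uncomputable in general, but compressed size gives a rough, computable upper bound. Here is a minimal Python sketch of that proxy; the example strings are invented purely for illustration:

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # Compressed size is an upper bound on the "minimum number of
    # instructions" needed to reproduce the data; exact Kolmogorov
    # complexity is uncomputable.
    return len(zlib.compress(data, 9))

ordered = b"AB" * 500       # 1000 bytes, but one short rule: repeat "AB" 500 times
random_ = os.urandom(1000)  # 1000 bytes with no shorter description

print(description_length(ordered))  # small: a short rule suffices
print(description_length(random_))  # near 1000 or more: incompressible
```

On Thaxton’s definition the first structure carries little information and the second a great deal, which is why “complex” in this literature means “hard to describe briefly”, not “intricate-looking”.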

Meanwhile ….

Study: Sun not special, therefore alien life should be common?

Does time’s one-way street prove that other universes exist?

The day time went backwards

Flogos: Coming soon to a clear blue sky near you …

Science and ethics: When the devil offered a no strings research post.

Nature’s IQ: Intelligent design from a Hindu perspective

Science journalist warns against the “institutionalised idolatry of science”

Expelled film pre-trashed by United Kludgies of Canada (Trashing a film you haven’t seen is way less work.)

Is everything determined by forces over which we have no control?

Chuck Colson on neural Buddhism: Do neurons get reincarnated?

Hopeful signs: Disaster causes outpouring of charity in China

On Jane Goodall, apes, human uniqueness, and God

Comments
There are classics of Design available on the web. They should be more widely known and read: The Hand, by Sir Charles Bell; Organic Evolution, by the Duke of Argyll; Typical Forms and Ends in Creation, by McCosh.
Vladimir Krondan, June 2, 2008 at 2:21 AM PDT
kairos: No problems with a technical KO: I can live with that! And I do hope that Darwinian evolution will continue to exist and be taught, even in schools, once it is no longer dominant. First of all, it will be funny, and moreover, people should not forget what a whole culture was able to believe for decades...
gpuccio, June 1, 2008 at 2:13 PM PDT
"... when Darwin theory was dominant and teached [taught] all over the world ..." Oops. Sorry for the mistake ...
kairos, June 1, 2008 at 1:41 PM PDT
GPuccio: "And obviously I respect your cautious position, but believe me, we will win by KO! :-)" Hopefully, but there is also a third possibility: a less spectacular but definitive technical KO; i.e., a situation in which, although design is not proved absolutely, most scientists will argue for design and only a limited, isolated subset will ask to go back to "the good old times when Darwin theory was dominant and teached all over the world ..." :-)
kairos, June 1, 2008 at 1:39 PM PDT
kairos: You are right, Penrose does not pursue a strictly non-materialist viewpoint. He remains, so to speak, on middle ground. But his argument is all the same the best basis for a non-materialist interpretation of consciousness and knowledge. In a sense, it's the same as with ID. ID is not necessarily linked to a spiritual perspective, but it is the basis for one. The same can be said of Penrose's argument. And obviously I respect your cautious position, but believe me, we will win by KO! :-)
gpuccio, May 31, 2008 at 8:16 AM PDT
GPuccio: "The fact is that, even in Penrose's views, the reason why human thought is not strictly algorithmic remains largely a mystery. Penrose has his model, which is based, if I am not wrong, on some application of quantum theories at the subcellular level. That's interesting, but still highly speculative." But, although it's quite possible that this will never be realized, I was arguing about machines that could possibly, in the future, implement this kind of activity. After all, in the last years there has been much excitement about quantum computing. It's quite possible that in the end no meaningful results will be obtained in this field, but at the moment it is perhaps too early for a negative answer. "If the two aspects which seem to characterize human knowledge, consciousness and non-algorithmic cognition, are linked, and there are many reasons to think that it could be so, then non-algorithmic cognition would be possible only in the presence of consciousness." Unfortunately I haven't read Penrose's books, so I cannot give my own view on this link between non-algorithmic cognition and consciousness. But I know that in the ID field, too, this link is heavily questioned. For example, Dembski argued that Penrose's position is not much different from a purely materialist one. He argued so in his book "Intelligent Design: A Bridge Between Science and Theology", pp. 220-222. I agree with you when you say: "On the other hand, strongly arguing about this impossibility without having a final proof of it is IMHO potentially dangerous for the ID side." In a certain sense, that is the main argument that has been used for attacking Dembski concerning his claims about the strict conservation of CSI. I don't agree with Dembski on this point, as I don't see any strict necessity to argue that CSI cannot increase through indirect generation. In my opinion, arguing so is like trying to win the match by KO; in doing so, one puts oneself in the position of being easily hit by the opponent.
kairos, May 31, 2008 at 7:20 AM PDT
Kairos: Just a note. You say: "I think this is true. But it is conceivable to produce machines that aren't strictly algorithmic." That remains to be seen. The fact is that, even in Penrose's views, the reason why human thought is not strictly algorithmic remains largely a mystery. Penrose has his model, which is based, if I am not wrong, on some application of quantum theories at the subcellular level. That's interesting, but still highly speculative. If the two aspects which seem to characterize human knowledge, consciousness and non-algorithmic cognition, are linked, and there are many reasons to think that it could be so, then non-algorithmic cognition would be possible only in the presence of consciousness. I agree with you when you say: "On the other hand, strongly arguing about this impossibility without having a final proof of it is IMHO potentially dangerous for the ID side." That's right. Indeed, I am presenting my reflections on this subject only as a personal opinion, and don't want in any way to involve ID in that argument. But I do think that, in time, the ID controversy will have to face these problems.
gpuccio, May 30, 2008 at 3:29 PM PDT
kairosfocus: thank you for your intervention in this interesting discussion. Unfortunately, I think that the passage you cite about Penrose is really misleading. Reading it, one gets the impression that Penrose is just commenting on some philosophical aspects of the AI question. That's not correct. The article cited in your post says: "However, Penrose certainly hasn't disproved that our brains aren't Turing machines. He points to some issues, but no proof." While there are certainly philosophical parts in Penrose's discourse, the central core of his argument is a rigorous mathematical demonstration. That argument is deductive in nature, and depends on a special application of Gödel's theorem. It is explained in "The Emperor's New Mind", and explained again, and partially corrected, in "Shadows of the Mind". In that second book, Penrose also analyzes in detail a lot of objections made to his demonstration. I am aware that Penrose's argument is controversial, and that many do not accept it, and think that it is in some way flawed. But the fact is, the argument is a mathematical one: it can be true or false, but it is wrong to say that it "points to some issues, but no proof." It's exactly the opposite. The argument is a proof, only not everybody agrees that it is correct. So, we could say that it is a controversial proof. Personally, I am convinced that Penrose's argument is perfectly valid. Of course I am not a mathematician, and I could be wrong. I have, anyway, tried to understand the various aspects which are controversial, as far as I can do that, and I have always agreed with Penrose's views. In a sense, Penrose's argument is only a way of drawing the right conclusions from Gödel's theorem. The status of Gödel's theorem is indeed not completely defined in our scientific culture. Everybody accepts it as a fundamental piece of knowledge (at least, everybody who knows that it exists), but nobody seems to really agree on its real meaning. Something similar happens with quantum mechanics: it is correct, it is powerful, but do we really understand what it means? I think that the new scientific paradigm will have to go beyond the "Copenhagen interpretation" attitudes, and delve deeply into the problems of meaning. Penrose's thought is a good example of how a strictly technical approach can lead to unsuspected answers of huge philosophical relevance. Other examples will certainly come, if scientists accept the idea that they can think creatively (non-algorithmically), and that they do not have to merely imitate computers.
gpuccio, May 30, 2008 at 3:18 PM PDT
kairosfocus: many thanks for your useful hints. As I've just said in my previous message, I would be happy if Penrose were right, but I am not convinced that there is strong evidence that machines couldn't in principle perform non-algorithmic computation, such as the kind of inference required for Archimedes' eureka or for Newton's theory of gravitation. So, when you say: "Thus Penrose says 'you know, I don't think that our brain really looks like a Turing machine at all!'" I completely agree on this. The problem is to know whether machines are or aren't constrained to this. "Some would say that Penrose then goes off the deep end by hypothesizing that our brains are actually new types of computers: non-algorithmic. They are not state machines! In Penrose's world, since all Turing machines (and all computer architectures) are algorithmic because they are all Turing equivalent, our brains cannot be replicated by a Turing machine." I think this is true. But it is conceivable to produce machines that aren't strictly algorithmic. "Now some people have heard of neural nets, but it is important to recognize this is NOT what Penrose is talking about. Neural nets, fuzzy logic, etc. are all algorithmic in nature. (For instance, neural nets speak to the non-commutative nature of the order of synaptic connections. This is simply a different type of algorithm.)" That's true at this time, but for example a possible machine could better mimic the functioning of brain neurons at the analog level rather than at the digital level. Moreover, you could also add some form of indeterminism to the computation by using randomized ifs. Only hints, obviously. "If Penrose is eventually declared correct, he will be revered for his insight. If wrong, then a computer can think." But without any consciousness, obviously :-)
kairos, May 30, 2008 at 3:03 PM PDT
GPuccio: "Obviously, the meta level and the symbolic apparatus can be algorithmically simulated, but that's not the same thing. Anyway, I suggest we leave it at that for now, and we can take up this discussion again any time we, or others, have new arguments." I agree. "Finally, I perfectly agree with your final note on necessity. That's why I affirmed from the beginning that, even if your views about the possible indirect generation of CSI were true, that would not be in any way detrimental to ID theory. On the other hand, if my views on the subject were true, that would certainly make ID much stronger than it already is. But I think we agree on those points." Yes. As I said, the proven impossibility of indirect CSI generation would be the KO punch for NDE and materialism, and nobody would be happier than me about this eventuality. On the other hand, strongly arguing about this impossibility without having a final proof of it is IMHO potentially dangerous for the ID side; in fact NDEers could show every new (although modest) achievement in the AI field as a new proof that machines have no constraints. Obviously it's quite possible that this is not true, but the ID explanation is the strongest hypothesis even if one argues for indirect generation of CSI.
kairos, May 30, 2008 at 2:35 PM PDT
Kairos: A teensy little note on a point:
machines that could be able to recognize analogies and differences in information stored in high-level data structures
Recognition of deep analogies is -- per our experience -- an imaginative, intuitive, insightful, creative, non-algorithmic process. (It is not mere pattern matching. Think of Archimedes and his bath [EUREKA!], or Newton and his falling apple as the moon swings by. How many have sat in a bath that overflowed? How many have seen fruit fall? Many. How many have seen the potential? Just one each. We see such results as "obvious" only after the fact of genius-level insight and success. Indeed, we now in part see the world through the eyes of such great minds. We call that education and culture, or in some cases even language -- hence the point on absorbing CSI from the culture.) Oddly, that is embedded in our understanding of how science itself works -- abductive inference to hypotheses is a creative, non-routine step. So is the point of Gödel's work on incompleteness: we are able to see truths that we cannot prove relative to any coherent set of axioms, once we deal with a realistically complex mathematical system. [This also probably implies that not all problems of interest may be reduced to algorithms that start from known initial points, take in inputs, process based on stored routines and possibly intermediate inputs, shift internal states and steadily move stepwise to generate desired outputs.] I suspect this is a bit of what Penrose was driving at (all the huffing and puffing to dismiss what he had to say notwithstanding). Here is a perhaps helpful brief layman's-level summary of P's point:
Strong AI really points to the idea that if we could only learn to load the right algorithm into a computer, we could replicate the programming that we have in our own brain. So, people with strong AI would suggest that it is simply getting the human software into a computer. If you watched Ghost in the Shell, you will see this idea echoed over and over. People loading their programming into machines. So where does Penrose come into the picture? Penrose starts digging into the nature of algorithms. He points out that through things like Gödel and Church's lambda calculus, we cannot figure out if all Turing machines really get to a completed state. On top of this, we dive into general "incompleteness" in that not only can we not figure a completed state for all Turing machines, but we also find out that various things in this universe (like the quantum model and relativistic models) really never get to completion always. However, our brain deals with them just fine. Even though math is not a complete system (Gödel says that all math will fundamentally experience paradoxes), we can use it with all of its problems. Thus Penrose says "you know, I don't think that our brain really looks like a Turing machine at all!" Some would say that Penrose then goes off the deep end by hypothesizing that our brains are actually new types of computers: non-algorithmic. They are not state machines! In Penrose's world, since all Turing machines (and all computer architectures) are algorithmic because they are all Turing equivalent, our brains cannot be replicated by a Turing machine. Now, if you watch any science fiction, or if you are an anime fan like myself, you will recognize that this is destructive to a common motif. Ghost in the Shell is one anime that uses the standard convention of Strong AI to a great extent. In this anime, computer programs can become self-aware. In the same way, humans can load their algorithms (or their conscious minds) into computers. The same thing happens in the Matrix movies. If Penrose is correct, the Ghost in the Shell idea could never happen. The idea of the Matrix is just that: an idea. The Turing machine architecture cannot hold non-algorithmic QUANTUM computer programming. This last bit is highly controversial, and Penrose, while appreciated for his tour of all things strange, simply lacks any real evidence. Stuart Hameroff is trying to get around some of this. However, if Stuart is correct in any of his ideas, you cannot create a quantum computer out of a Turing machine. You need the ability to go backwards in time, and the brain is all about collapsing probability functions and NOT moving through a state machine. Now some people have heard of neural nets, but it is important to recognize this is NOT what Penrose is talking about. Neural nets, fuzzy logic, etc. are all algorithmic in nature. (For instance, neural nets speak to the non-commutative nature of the order of synaptic connections. This is simply a different type of algorithm.) What Penrose is suggesting is only found one place in nature: our brain. There are paradoxes in our human self-awareness, and Penrose does a brilliant job of going after a new way of thinking about them (did you know that our brain is on a 1/2-second delay, and your brain lies to you to make you think that it isn't?). However, Penrose certainly hasn't disproved that our brains aren't Turing machines. He points to some issues, but no proof. If Penrose is eventually declared correct, he will be revered for his insight.
If wrong, then a computer can think.
At least, food for thought. GEM of TKI
kairosfocus, May 29, 2008 at 3:50 AM PDT
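A note on the "completed state" point in the summary kairosfocus quotes: the claim that no general procedure can decide whether an arbitrary program halts is Turing's halting theorem, and the diagonal argument behind it can be sketched in code. The sketch below is an illustration of the proof, not a working program: `halts` is an assumed oracle which, by this very argument, cannot actually be implemented.

```python
def halts(program, argument) -> bool:
    # Assumed perfect oracle: True iff program(argument) eventually halts.
    # The construction below shows no such oracle can exist.
    raise NotImplementedError("no general halting oracle is possible")

def troublemaker(f):
    # Do the opposite of whatever the oracle predicts for f run on itself.
    if halts(f, f):
        while True:   # oracle said "halts", so loop forever
            pass
    return "halted"   # oracle said "loops", so halt immediately

# Consider halts(troublemaker, troublemaker):
# - if it returned True, troublemaker(troublemaker) would loop forever;
# - if it returned False, troublemaker(troublemaker) would halt.
# Either way the oracle is wrong, so it cannot exist.
```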
Kairos: OK, I think we agree on most things. I will keep my ideas about the main point, where you say: "Certainly the task of recognizing analogies and differences would require sophisticated recognition algorithms, but this could be embedded at the lower levels of the machine. I mean that at a higher abstraction level it's quite conceivable that a machine can work by simulating what humans actually do in a non-algorithmic way. Certainly this would not be at all a consciousness of any form, but I'm not convinced that a machine wouldn't be able in principle to implement, although in a purely behavioral way (i.e. without any real subjective consciousness), the meta-relation you have referred to." The important thing is that we agree on the absence of consciousness. I go beyond that, and continue to think that absence of consciousness implies absence of understanding of meaning, and therefore absence of a true symbolic apparatus. Obviously, the meta level and the symbolic apparatus can be algorithmically simulated, but that's not the same thing. Anyway, I suggest we leave it at that for now, and we can take up this discussion again any time we, or others, have new arguments. I perfectly agree that chess programs can increase their baggage of information the way you describe. That seems completely algorithmic to me, and I have no objections to that. I just don't think that's the way (or at least the only way) that human players improve their playing. Finally, I perfectly agree with your final note on necessity. That's why I affirmed from the beginning that, even if your views about the possible indirect generation of CSI were true, that would not be in any way detrimental to ID theory. On the other hand, if my views on the subject were true, that would certainly make ID much stronger than it already is. But I think we agree on those points.
gpuccio, May 29, 2008 at 3:36 AM PDT
#56 gpuccio: Thanks for the comments. Certainly this is a very complex and interesting argument. "One of the strongest arguments for the non-algorithmic (or at least, not purely algorithmic) nature of human knowledge comes exactly from mathematics. It is based on the famous argument by Penrose, based on Gödel's theorem. It is too long and complex for me to sum it up here, but anybody can find it in detail in Penrose's two books, 'The Emperor's New Mind' and 'Shadows of the Mind'. In brief, the argument aims to demonstrate mathematically that there are forms of mathematical knowledge that are not algorithmic, which can be easily grasped by a conscious human being, and never by a computer, however complex. That's interesting, because it is easy to imagine that a computer may have difficulties in emulating specific human things, like art, feelings, religion and so on, but it is much more important to be able to demonstrate that it cannot emulate the human understanding of mathematics itself. [...] That ability to observe an object as a subject leads to the observer being in a 'meta' relationship with the observed object: he puts himself 'above' the things he is representing in his consciousness, and can better 'look' at them." I completely agree on the fact that a mere algorithmic capability is largely inadequate to solve many and many problems humans typically solve easily. However, when I argued about the Gauss problem I didn't consider machines that strictly operate in an algorithmic way. I meant computing machines acting as inference engines at a more abstract symbolic level, i.e. machines that could be able to recognize analogies and differences in information stored in high-level data structures. Certainly the task of recognizing analogies and differences would require sophisticated recognition algorithms, but this could be embedded at the lower levels of the machine. I mean that at a higher abstraction level it's quite conceivable that a machine can work by simulating what humans actually do in a non-algorithmic way. Certainly this would not be at all a consciousness of any form, but I'm not convinced that a machine wouldn't be able in principle to implement, although in a purely behavioral way (i.e. without any real subjective consciousness), the meta-relation you have referred to. "The ability of the observer to put himself in a 'meta' relationship with the observed things can always work on successive levels, creating a 'mise en abime' which can be endless. That is the real cause of Gödel's theorem, and of Penrose's argument. A computer cannot do that. It is confined to its code, forever. It cannot put itself in 'meta' respect to its own code. It can neither represent it nor observe it. A computer can never observe." But, as I've argued above, if we don't restrict a computer to act in a purely (native) algorithmic way, a computing machine can work on successive abstraction levels too; and the sequence (or better, the tree) of abstraction levels can be potentially endless too. Obviously I don't mean at all that machines can really be made conscious of what they are doing; this is only fictional stuff, and the fact that even scientists working in the field think that this could be possible is a sign of irrationality. I simply mean that, restricting ourselves to mere functionality, I don't see an a priori restriction on a future machine performing the same mathematical inference that was performed by the young Gauss. "I don't know. I would appreciate Gil Dodgen's contribution here.
My feeling is that a chess program has extraordinary computing abilities, and can store a lot of pre-formed decisions (made by human players), so that it can quickly compute the consequences of a move, and select the pre-formed solution which applies. Does that make the program intelligent? Is a Trivial Pursuit game intelligent, when it gives you the right answers?" The contribution of Gil would be welcomed here. Actually I don't know if Deep Blue, or other programs for that matter, is now able to find new and better "strategies" to win chess matches. However, I can put on the table some 0.01-cent ideas. Certainly a chess-playing program can be implemented with the capability to continuously store all the data concerning all the matches it has played in the past, with complete statistics about moves and reactions by the opponents. So, it is conceivable that it could find analogies between different situations that all had in common a tactical advantage or, at best, victory in the match. "No, that's not completely correct. Necessity has to be excluded separately. Chance is excluded on a probabilistic basis. You can apply statistical analysis only to random events, never to necessity. Even in the Fisherian scenario, usually applied in biological sciences, statistics is used only to compute the probability that a random mechanism is the cause of observed facts (the null hypothesis), and not to directly prove the test hypothesis, which usually implies a causal mechanism (necessity)." You are right; but I didn't explain my idea about it well. I think (obviously IMHO) that the EF shouldn't exclude the action of necessity "in toto", i.e. the action of ANY deterministic agent. Instead it should be limited to what natural laws are able to produce from scratch.
kairos, May 29, 2008 at 2:34 AM PDT
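kairos's "learn by playing" suggestion, stripped to its bones, is bookkeeping plus selection, which rather supports his own point that such learning stays algorithmic. A toy sketch under invented assumptions (this is emphatically not how Deep Blue worked; it relied mainly on brute-force search with a hand-tuned evaluation function):

```python
from collections import defaultdict

experience = defaultdict(lambda: [0, 0])  # (position, move) -> [wins, games]

def record_game(moves_played, won):
    # After each game, update the statistics for every move made.
    for position, move in moves_played:
        stats = experience[(position, move)]
        stats[0] += int(won)
        stats[1] += 1

def choose_move(position, legal_moves):
    # Prefer the move with the best observed win rate; Laplace smoothing
    # gives unseen moves a neutral prior so the program still explores.
    def win_rate(move):
        wins, games = experience[(position, move)]
        return (wins + 1) / (games + 2)
    return max(legal_moves, key=win_rate)

record_game([("start", "e4")], won=True)
record_game([("start", "d4")], won=False)
print(choose_move("start", ["e4", "d4"]))  # -> e4
```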
Kairos: Thank you for the generous discussion. I agree with you that the problem is open. Still, maybe I can add some thoughts, stimulated by your comments. "the intuition that allows a mathematician to 'see' a new, simpler solution to a problem that up to then had been considered much more complex to solve." Now, that's interesting, because one of the strongest arguments for the non-algorithmic (or at least, not purely algorithmic) nature of human knowledge comes exactly from mathematics. It is based on the famous argument by Penrose, based on Gödel's theorem. It is too long and complex for me to sum it up here, but anybody can find it in detail in Penrose's two books, "The Emperor's New Mind" and "Shadows of the Mind". In brief, the argument aims to demonstrate mathematically that there are forms of mathematical knowledge that are not algorithmic, which can be easily grasped by a conscious human being, and never by a computer, however complex. That's interesting, because it is easy to imagine that a computer may have difficulties in emulating specific human things, like art, feelings, religion and so on, but it is much more important to be able to demonstrate that it cannot emulate the human understanding of mathematics itself. Penrose's argument is certainly controversial, but I do believe that he is right. Moreover, I do believe (but that's my personal thought, not necessarily Penrose's) that the real meaning of that argument is that consciousness is necessary to many fundamental forms of understanding, even in mathematics. Consciousness allows one single procedure which cannot happen without it: detachment. The conscious observer can detach himself from what he is representing, and observe it as an object of his representing perception. That ability to observe an object as a subject leads to the observer being in a "meta" relationship with the observed object: he puts himself "above" the things he is representing in his consciousness, and can better "look" at them. The ability of the observer to put himself in a "meta" relationship with the observed things can always work on successive levels, creating a "mise en abime" which can be endless. That is the real cause of Gödel's theorem, and of Penrose's argument. A computer cannot do that. It is confined to its code, forever. It cannot put itself in "meta" respect to its own code. It can neither represent it nor observe it. A computer can never observe. My point (and, I think, Penrose's) is that the presence of consciousness does not engender only a subjective difference, but also an objective one: the conscious experiencer can do things which a non-conscious computing machine can never do. I am sure of that. I am almost sure that producing new CSI is one of those things, but I agree that that has to be formally proved. "Please think about the Deep Blue example I mentioned; certainly that machinery wasn't at all conscious of its operation, but its result was pretty impressive and more 'intelligent' (in its etymological sense: the ability to collect and select) than Kasparov, and certainly more, more and more intelligent than any other human playing chess." I don't know. I would appreciate Gil Dodgen's contribution here. My feeling is that a chess program has extraordinary computing abilities, and can store a lot of pre-formed decisions (made by human players), so that it can quickly compute the consequences of a move, and select the pre-formed solution which applies. Does that make the program intelligent?
Is a Trivial Pursuit game intelligent, when it gives you the right answers? The fact is, a computer, however complex, is not different from an abacus. Software is stored knowledge. Years of Strong AI theory and of (often good) science fiction novels have conditioned us to think that the more complex the software, the more "human" it will be. I think there is nothing true in that. Strong AI is one of the most stupid theories I have ever known (with Darwinian evolution, it's really a fascinating competition). They have told us that if we compute in parallel instead of serially, miracles will happen. They are telling us that if we use enough loops in our programs, those programs will become conscious, as though a loop in the code could be the same thing as a consciousness observing its contents in "meta". All of that is nonsense. They have succeeded in convincing millions of conscious (and sometimes intelligent) people that the primary empirical thing they experience, their personal consciousness, does not really exist. That's collective hypnosis, at best. And so on. "I could be wrong, but if I remember well, the definition of CSI does not require the a priori exclusion of the action of necessity and chance. Instead, this exclusion is done a posteriori on a probabilistic basis." No, that's not completely correct. Necessity has to be excluded separately. Chance is excluded on a probabilistic basis. You can apply statistical analysis only to random events, never to necessity. Even in the Fisherian scenario, usually applied in biological sciences, statistics is used only to compute the probability that a random mechanism is the cause of observed facts (the null hypothesis), and not to directly prove the test hypothesis, which usually implies a causal mechanism (necessity).
gpuccio, May 28, 2008 at 4:31 PM PDT
Sorry, in my previous post there are many typos, but I hope that the overall content is clear.
kairos, May 28, 2008 at 1:07 PM PDT
#54 gpuccio: You have explained the issues involved with CSI very clearly. In fact, what is really worth exploring is the following question, asking whether CSI is something that can grow without the DIRECT action of an intelligent agent: "Question: Even if CSI cannot be generated by natural laws + chance, and requires a designer, must CSI be generated directly by the designer, or can it be the product of a designed machine? In other words, can CSI be indirectly generated by a designer? That's a very important point, although, as you said, it would not be a problem for ID in either case. And yet, the answer to that question has certainly important consequences." This is certainly true; in fact, if you (and Dembski of course) are right on this point, materialism, and its best-known weapon (NDE) for that matter, are completely defeated on a mere theoretical basis. In other words, if CSI cannot grow at all through laws + chance, this would be the final KO punch for both materialism and NDE. It's true that this would be the best of news for us IDers, but are we sure that this fact can really be proven? Perhaps we have to accept that ID will win in the long term but without any KO, simply because more and more of the people who are watching the match will switch their votes from NDE to ID. "Let's put it this way. We certainly know that machines (including computers, software code, etc., in other words any product of human agency which exhibits CSI and can give an output of some kind) can output CSI in various forms. A computer outputs results which have CSI. A printer can print Shakespeare's works. And so on. But the question is, do those machines really generate new CSI? Or do they just reutilize, in different form, the CSI they have in themselves, or the CSI which they receive as input? First of all, I have to admit that I cannot give a formally explicit discussion of that. I can just give my intuitive idea. Therefore, any input from you and others will be greatly appreciated. My personal idea is that the answer is no. Machines, however 'intelligent', cannot create new CSI. They are destined to reshuffle what they receive, sometimes very brilliantly (but that brilliancy, in some way, is itself a merit of their programmers)." I think that a key point here is to ask whether most of the intelligent higher-level tasks humans perform could themselves be classified as very sophisticated forms of reshuffling of information previously received, modified and stored. For example, let us consider a very high form of intelligence: the intuition that allows a mathematician to "see" a new, simpler solution to a problem that up to then had been considered much more complex to solve. I think in particular of the well-known anecdote concerning Gauss who, as a young student, found an ingenious and quick solution to the problem of adding n numbers, each one differing from the previous one by the same amount. It is well known that the human brain is particularly well suited to finding analogies and differences between different fields of expertise. So it's conceivable (though obviously not certain) that what Gauss actually did was: a) "see" some sort of analogy between the monotonic, linearly growing sequence of numbers and a sequence of piles of different heights; b) "simulate" the folding of the sequence of piles, observing that the result is n/2 piles of the same height; c) apply the result to the sequence of numbers.
Indeed, this would have been a very clever and high-level form of reshuffling, but its result would have been a new, clever and simpler technique for solving a given problem. Moreover (it is possible that I'm wrong), according to Dembski's definition of CSI the result would have a higher CSI, because the Chaitin-Kolmogorov representation of the algorithm is much shorter. Obviously we don't know what the real mental process was that allowed Gauss to find the new algorithm, but from a conceptual point of view there is no a priori constraint that denies a very complex computing machine the ability to do the same. "Why do I say that? Because I believe that the real source of CSI is intelligent consciousness. Machines are not conscious, and they never will be. Therefore, in a strict sense, they cannot even be 'actively' intelligent (they can, obviously, be passively intelligent)." I agree on this point, but the notion of consciousness is not involved in the definition of CSI, and hence in the design inference process. "Why do I think that the source of CSI is only intelligent consciousness? Because CSI cannot be generated algorithmically. Let's start from the beginning. A piece of CSI (let's say Hamlet) cannot be produced by natural laws + chance. OK. But Shakespeare was using his brain and mind, and brain and mind are complex, specified structures. Somebody has put CSI in them. Let's say it was God. Moreover, Shakespeare has used as input a lot of previous CSI created by other human beings (his experiences, his culture, and so on). So, we could ask, was Hamlet just the result of algorithmic computations (that is, necessity) performed by a complex structure with a lot of CSI in itself (that is, Shakespeare's brain and mind, and everything it contained)? I don't think so. I think Hamlet came from Shakespeare's consciousness, where all these things certainly contributed to his representations, to his feelings, to his intuitions, to his choices. From his consciousness, with the contribution of all these data, came out Hamlet. All the data, including brain and mind, could not have created it in themselves. Shakespeare's individual consciousness was necessary. Now, please take notice that I am not using the example of Shakespeare here because he is a great, creative artist. My example should stay valid for every form of new CSI." It is quite possible that this is what really happened. But IMHO we cannot discard the possibility that the same could be done by a machine with huge computational power and storage, whose inference rules are mainly based on searching for analogies and differences within the current storage state. Please think about the Deep Blue example I mentioned; certainly that machinery wasn't at all conscious of its operation, but its result was pretty impressive and more "intelligent" (in its etymological sense: the ability to collect and select) than Kasparov, and certainly more, more and more intelligent than any other human playing chess. "Indeed, if CSI cannot be generated algorithmically by chance and necessity, when we see an algorithm outputting CSI, only two cases are possible: a) The CSI was already in the machine/algorithm, and is simply being copied, sometimes in a reshuffled form: the easiest case is that of the printer which prints Hamlet, or of the phrases used by the computer in its dialogue windows, or of the reshuffled audio commentary in soccer games. b) The CSI is really being computed (necessity), but starting from a different CSI.
In that case, although I am not able to manage the formalism, it seems to me that the total CSI should not increase (which should be in some way the law of conservation of information). It is, in a sense, copied in a different form, through algorithmic (necessary) functions and transformations." I think that the real point is what the real theoretical bounds are on what can be obtained by reshuffling information. It is possible that I'm wrong about this, but, according to my previous example, it seems conceivable that many intelligent tasks could produce some new CSI. "It is important to remember that there is no way that an algorithm can create CSI out of nothing, because otherwise the result would be the product of necessity, or of chance and necessity, which contradicts the definition of CSI." I could be wrong, but if I remember well, the definition of CSI does not require the a priori exclusion of the action of necessity and chance. Instead, this exclusion is done a posteriori on a probabilistic basis. "That's it, for now. I would be very interested to know how these concepts can apply to neural networks, and to their apparent 'learning'. Or to know, for instance from GilDodgen, who is certainly an authority on that, how they apply to chess playing and to the related software." I agree; his opinion and experience would be very useful.
kairos, May 28, 2008 at 1:01 PM PDT
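kairos's reconstruction of the Gauss anecdote above can be made concrete. The "folding" insight replaces a step-by-step loop with a closed form, and, as he notes, the resulting description is much shorter in the Chaitin-Kolmogorov sense. A minimal sketch:

```python
def naive_sum(n):
    # The step-by-step algorithm the schoolmaster expected: n additions.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def gauss_sum(n):
    # The "folded" insight: pair 1 with n, 2 with n - 1, and so on,
    # giving n/2 pairs that each sum to n + 1.
    return n * (n + 1) // 2

assert naive_sum(100) == gauss_sum(100) == 5050
```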
Kairos (and kairosfocus): I think we agree completely on the point that CSI can never, empirically (that is, IN PRACTICE), be generated by natural laws + chance. The theoretical possibility should not be a problem for anyone. So, if there is CSI, there is a designer. There is no question about that (at least for us). But the question posed by RRE is really interesting, and I am intrigued by it. I'll try to sum it up again in a very simple form: Question: Even if CSI cannot be generated by natural laws + chance, and requires a designer, must CSI be generated directly by the designer, or can it be the product of a designed machine? In other words, can CSI be indirectly generated by a designer? That's a very important point, although, as you said, it would not be a problem for ID in either case. And yet, the answer to that question has certainly important consequences. Let's put it this way. We certainly know that machines (including computers, software code, etc., in other words any product of human agency which exhibits CSI and can give an output of some kind) can output CSI in various forms. A computer outputs results which have CSI. A printer can print Shakespeare's works. And so on. But the question is, do those machines really generate new CSI? Or do they just reutilize, in different form, the CSI they have in themselves, or the CSI which they receive as input? First of all, I have to admit that I cannot give a formally explicit discussion of that. I can just give my intuitive idea. Therefore, any input from you and others will be greatly appreciated. My personal idea is that the answer is no. Machines, however "intelligent", cannot create new CSI. They are destined to reshuffle what they receive, sometimes very brilliantly (but that brilliancy, in some way, is itself a merit of their programmers). Why do I say that? Because I believe that the real source of CSI is intelligent consciousness. Machines are not conscious, and they never will be. Therefore, in a strict sense, they cannot even be "actively" intelligent (they can, obviously, be passively intelligent). Why do I think that the source of CSI is only intelligent consciousness? Because CSI cannot be generated algorithmically. Let's start from the beginning. A piece of CSI (let's say Hamlet) cannot be produced by natural laws + chance. OK. But Shakespeare was using his brain and mind, and brain and mind are complex, specified structures. Somebody has put CSI in them. Let's say it was God. Moreover, Shakespeare has used as input a lot of previous CSI created by other human beings (his experiences, his culture, and so on). So, we could ask, was Hamlet just the result of algorithmic computations (that is, necessity) performed by a complex structure with a lot of CSI in itself (that is, Shakespeare's brain and mind, and everything it contained)? I don't think so. I think Hamlet came from Shakespeare's consciousness, where all these things certainly contributed to his representations, to his feelings, to his intuitions, to his choices. From his consciousness, with the contribution of all these data, came out Hamlet. All the data, including brain and mind, could not have created it in themselves. Shakespeare's individual consciousness was necessary. Now, please take notice that I am not using the example of Shakespeare here because he is a great, creative artist. My example should stay valid for every form of new CSI.
Indeed, if CSI cannot be generated algorithmically by chance and necessity, when we see an algorithm outputting CSI, only two cases are possible: a) The CSI was already in the machine/algorithm, and is simply being copied, sometimes in a reshuffled form: the easiest case is that of the printer which prints Hamlet, or of the phrases used by the computer in its dialogue windows, or of the reshuffled audio commentary in soccer games. b) The CSI is really being computed (necessity), but starting from a different CSI. In that case, although I am not able to manage the formalism, it seems to me that the total CSI should not increase (which should be in some way the law of conservation of information). It is, in a sense, copied in a different form, through algorithmic (necessary) functions and transformations. It is important to remember that there is no way that an algorithm can create CSI out of nothing, because otherwise the result would be the product of necessity, or of chance and necessity, which contradicts the definition of CSI. We should notice that copying CSI does not create new CSI (two copies of Hamlet do not contain more CSI than one copy). In the same way, reshuffling CSI does not augment it (unless the reshuffling itself is CSI), and the same is valid for recomputing it (applying necessary transformations to it). Again, if chance and necessity cannot produce CSI, another causal principle is necessary. We call it agency, but in the end agency can be defined only as the product of intelligent consciousness. That's it, for now. I would be very interested to know how these concepts can apply to neural networks, and to their apparent "learning". Or to know, for instance from GilDodgen, who is certainly an authority on that, how they apply to chess playing and to the related software.
gpuccio, May 28, 2008 at 6:02 AM PDT
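gpuccio's remark that two copies of Hamlet contain no more CSI than one has a rough counterpart in compression terms (CSI and Kolmogorov-style information are related but distinct notions, so this is only an analogy): a duplicated text compresses to barely more than the single text. A quick sketch with a stand-in passage:

```python
import zlib

passage = (b"To be, or not to be, that is the question: "
           b"Whether 'tis nobler in the mind to suffer "
           b"The slings and arrows of outrageous fortune. ") * 20

single = len(zlib.compress(passage))
double = len(zlib.compress(passage * 2))
print(single, double)  # the doubled text compresses to barely more:
                       # copying adds no new information
```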
Footnote: Technical side-point. Foxit Reader got the TMLO download to my desktop, and Acrobat 8 opens it there, successfully [. . . 5 was also swamped by new formats]. [There is a reported hiccup on images with Foxit, reportedly due to an update that won't get through just now.] GEM of TKI
kairosfocus, May 28, 2008 at 4:51 AM PDT
GP and Kairos: Fascinating. Keep on going! GEM of TKI
kairosfocus, May 28, 2008 at 4:15 AM PDT
#45, 46 gpuccio: "I have just read your post, after having posted mine. This seems to be one of the rare cases where we don't agree... I would appreciate further input from you. The question is really a good one!" Probably we don't agree on this point, but perhaps not as much as it could seem. I have re-read my post and I've seen that in my point 2 there are some typos that have probably changed the meaning of the text. I rewrite it, putting the correct text within []: ---- 2. If point 1. is valid, and I don't see how it could be disputed, the problem is simply a probabilistic one. Is it possible that BOTH the computer machinery AND its symbolic code could be arised by mere naturalistic forces. [? **here I put a period instead of a question mark; I didn't mean that this is possible; I only asked in order to answer NO afterwards**] It [IF] chance is against any reasonable and phisical [physical] possibility in the known universe, then design inference is a mere matter of reason and common sense. ---- In other words, my opinion is that: - Although there isn't a strict theoretical constraint that makes it ABSOLUTELY impossible (i.e. with Prob = 0), CSI cannot IN PRACTICE be produced from scratch by the mere application of natural laws + chance. In other words, it's quite impossible that even a simple replicating structure could have been produced in that way, not to speak of the very sophisticated computing machinery needed to implement even the simplest computing step required to act intelligently. HOWEVER: - Provided that a complex computing machine had previously been assembled (and this is possible only through an evolution guided by an intelligent agent), and provided that its storage had been loaded with all the information that allows it to mimic human expertise in a specific field, then this machine should be able to produce new CSI to the same extent that the human behavior it mimics is able to produce new CSI. This doesn't imply at all that an intelligent agent is not necessary, but simply that some amount of new CSI can be automatically produced by a pre-existing machine that already embeds a much higher CSI. I don't think that this should be a problem for ID. After all, the definition of CSI is in itself based on strict probabilistic issues, i.e. the fact that the probability of arising by natural laws + chance is well below the UPB. On these grounds I think it's not possible to deny a priori that some new CSI could be added. "Who says that a computing system can actually pass the Turing test? Penrose's argument about the non-algorithmic nature of human knowledge, based on Gödel's theorem, would be against it." I didn't mean the global and absolute Turing test, but simply a test limited to a very specific field of knowledge where human expertise can be easily formalized and added as expert rules to the computing machinery (for example chess playing, where Deep Blue has been able to beat the world champion). Certainly this is a case where the (restricted) Turing test has been passed. Moreover, as these kinds of programs have in themselves the capability to "learn by playing", it seems that they are actually able to add some new CSI on top of a much higher pre-existing one.
kairos, May 28, 2008 at 2:46 AM PDT
RRE: A good example which comes to my mind is the audio commentary in soccer games. There you have the appearance that a new commentary is generated by the game during a contingent play, which would mean new CSI, but in reality we know well what is happening: single recorded phrases are shuffled according to necessary algorithms programmed into the game, triggered by the more or less designed contingency (depending on how well you play) generated by the player. Nobody is really commenting on anything. There is no perception of the play by a commenter, no representation, no original comment. In other words, no new CSI.
gpuccio, May 27, 2008 at 3:39 PM PDT
RRE: Yes, that would be my opinion, but, as kairos has said, that's a good question, and open to discussion. We have a good example in biology, which is the immune system. The immune response creates a specific response to antigens from a pre-existing repertoire, and then potentiates it (antibody maturation) through guided random mutation and selection. But the selection is possible only because the original configuration of the foreign agent is retained in the immune system. So, even here it seems that the new information is modeled on information acquired from the outer world, and by an algorithm which is already pre-programmed in the system.
gpuccio, May 27, 2008 at 3:30 PM PDT
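gpuccio's description of antibody maturation, guided random mutation plus selection against a retained target, is easy to caricature in code. In the toy below (target string, alphabet and mutation rate are all invented for illustration) the loop converges precisely because the "antigen" is stored and consulted at every selection step, which is his point:

```python
import random
import string

ANTIGEN = "FOREIGNAGENT"          # the retained configuration
ALPHABET = string.ascii_uppercase

def affinity(antibody):
    # How many positions match the stored antigen.
    return sum(a == b for a, b in zip(antibody, ANTIGEN))

def mutate(antibody, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in antibody)

antibody = "".join(random.choice(ALPHABET) for _ in ANTIGEN)
while affinity(antibody) < len(ANTIGEN):
    pool = [mutate(antibody) for _ in range(100)] + [antibody]
    antibody = max(pool, key=affinity)  # selection needs the stored target
print(antibody)  # eventually "FOREIGNAGENT"
```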
gpuccio, When I stated 'generating by themselves', I was referring to the computer and the game together. So a computer and program cannot generate or produce new CSI, only express pre-existing CSI which came into existence by programming as well as through creative input from an active agent, right?
RRE, May 27, 2008 at 3:05 PM PDT
gpuccio
Perhaps other kinds of fractals, which do not imply computations with complex numbers, may be found as the result of natural processes (I am thinking of snowflakes and similar, but I could be wrong).
Have you never eaten broccoli? Examine the images on this page: http://www.fourmilab.ch/images/Romanesco/ Nature is full of fractals, and Benoit B. Mandelbrot knew it, hence his book The Fractal Geometry of Nature. I know the coast of Norway looks designed, but it's really just a fractal. There are also natural fractal seed-packing strategies that industry is attempting to emulate.
Mavis Riley, May 27, 2008 at 2:36 PM PDT
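For readers who want to see the complex-number computation gpuccio alludes to in the quoted line, here is a minimal Mandelbrot sketch; the grid resolution and bounds are chosen arbitrarily:

```python
def in_mandelbrot(c, max_iter=100):
    # c is in the set if z -> z*z + c stays bounded (|z| <= 2).
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Crude ASCII rendering of the set
for row in range(21):
    im = -1.05 + row * 0.105
    line = ""
    for col in range(61):
        re = -2.1 + col * 0.05
        line += "#" if in_mandelbrot(complex(re, im)) else "."
    print(line)
```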
PS: Who says that a computing system can actually pass the Turing test? Penrose's argument about the non-algorithmic nature of human knowledge, based on Gödel's theorem, would be against it.
gpuccio, May 27, 2008 at 2:07 PM PDT
Hi kairos, I have just read your post, after having posted mine. This seems to be one of the rare cases where we don't agree... I would appreciate further input from you. The question is really a good one!
gpuccio, May 27, 2008 at 2:05 PM PDT
RRE: In your example, I would say that there is a concurrence of two different components. The player is an intelligent agent, and his intelligent input certainly contributes to increasing the information in a specific instance of the game. At the same time, the game has been designed so that an apparent increase in CSI takes place during play. But in reality, the new CSI was already coded in the game, and it is gradually applied to specific parts of it. So, the only new CSI which is added to the game comes from the conscious, intelligent player. I am not sure I have understood your other observation well. If I have understood correctly, I agree with you that conscious intelligent agents (at least, human ones) have to connect with external reality through a mind and a body to output intelligent information (CSI). They have, indeed, to do the same thing to input information and consciously represent it. Mind and body, in my view, are instruments of consciousness. They do not generate it, but are necessary for its expression (again, in humans). They are the video game. Consciousness is the player. Again, the beauty of the model is that it is perfectly consistent. Here, as in the videogame example, new CSI can come only from consciousness. Body and mind are necessary to express it, but they cannot generate it by themselves.
gpuccio, May 27, 2008 at 2:02 PM PDT
#42 RRE: "The game will increase in complexity as I kill more bosses and finish more quests. My question is, does the code itself act as an intelligent agent and can it produce its own CSI when added to its compatible machine set (like the computer with power source) as a direct cause?" Very good question, and I have thought a lot about it. IMHO the answer should be a clear YES. There is no a priori limit to the extent to which a trained program (such as your man-trained game) could actually act intelligently. In a certain sense this is strictly connected to the reason why a carefully designed (and TRAINED) computing system can actually pass the Turing test (i.e., its behavior could be made indistinguishable from a human one). However, I don't see why this should be a problem for strongly arguing for ID in the physical world. In fact, let us consider that: 1. In any case, any intelligent agent who had decided to set up a world with some form of natural laws would be constrained to act according to those laws. 2. If point 1. is valid, and I don't see how it could be disputed, the problem is simply a probabilistic one. Is it possible that BOTH the computer machinery AND its symbolic code could be arised by mere naturalistic forces. It chance is against any reasonable and phisical possibility in the known universe, then design inference is a mere matter of reason and common sense.
kairos, May 27, 2008 at 1:57 PM PDT
gpuccio, Say I have a piece of software that codes for a role-playing adventure game. In the game, the specificity of my character will increase as I go on adventures and take on quests, gain party members and new items. The game will increase in complexity as I kill more bosses and finish more quests. My question is, does the code itself act as an intelligent agent and can it produce its own CSI when added to its compatible machine set (like the computer with power source) as a direct cause? I am under the impression that all intelligent agents must possess a functional symbolic code with a compatible machine set (whether that would be a software program and computer, a robotic arm with PLC controller and compatible programming code, or a mind with a body). I do know from observation, however, that a mind has always been observed as the originator behind any code or program as the prime cause, whether direct or indirect, where such an origin is known.
RRE, May 27, 2008 at 11:36 AM PDT
Quick footnotes: 1] Denyse -- got the download to go [took very long], but the file vanished. Just as for you. Somebody needs to fix it. 2] Food to fuel. --> Corn production in the US has reportedly just about DOUBLED from '95 to '07, but Australia continues in drought. (And there is a dramatic increase still in US production, after ethanol production is taken out.) --> Those old enough to remember will recall how, when oil went on a quadrupling across the 70's, EVERYTHING else shot up. (Beware post hoc reasoning and the rhetoric of those with an agenda . . .) 3] PBMRs --> These may return nukes to the front burner . . . 4] Intelligence is . . . --> Cf discussion here. --> We know intelligence from our experience of it in ourselves and observation of others who behave in intelligent ways like ourselves. So, we do not need to belabour ourselves over getting to statements of necessary and sufficient conditions when we can point to examples and say: if it is like that, it is the same basic thing. 5] But agency is magic . . . --> JT, am I to take your posts as so much magic? Or lucky noise? [Or should I take them as the acts of an intelligent, communicating agent similar to my own experience of myself?] --> I repeat, fact no. 1 -- hard as it may be to fit in with certain reductionistic, materialistic worldviews -- is that we are agents, and this is deeply embedded in our reasoning, communicating and deciding, much less acting. --> Indeed, we experience the external world through our conscious minds, which are thus more certain than that experience . . . thus worldviews that make heavy weather of living with what has to come before experience and analysis of experience are in deep trouble with factual adequacy. --> So cf discussion APP 6 in the always linked . . . starting from what you do when you see an apparent message by the side of a railroad made up from stones . . . Okay for now, GEM of TKI
kairosfocus, May 27, 2008 at 6:06 AM PDT
