Uncommon Descent Serving The Intelligent Design Community

Artificial Intelligence and the Game of Checkers

I was going to post this as a comment in Salvador’s thread (https://uncommondescent.com/index.php/archives/1481) but it became too long, so I’m posting it as a new topic.

On the subject of computer checkers I have some observations. I hate to brag (okay, I lied!) but I am one of the world’s leading authorities on this subject.

I know David Fogel. He is a very bright researcher and a really cool, nice guy. (Plus, he is a very talented pianist and musician. We have shared our piano CDs. Is there some correlation between music, mathematics and computer science?)

There are three essential elements in an artificial-intelligence (AI) games-playing computer program:

1) A move generator. The board and pieces must be represented in the computer’s memory, either as arrays or bit-maps. An algorithm must be devised to generate moves based on the rules of the game.

2) A tree-search algorithm. This is even more complex and requires sophisticated software engineering. The game tree explodes exponentially, so an alpha-beta minimax algorithm is employed. Other sophisticated algorithms must be used to curtail the exponential explosion: quickly accessible hash tables that contain previously-seen positions, iterative deepening with best-move prediction accessed from the hash tables, use of killer moves and move sorting based on a history heuristic, and much more.

3) A terminal-node evaluation function which attempts to ascertain the value of a position statically, since the tree search must terminate at some point. These values are backed up through the search tree with the alpha-beta minimax algorithm. (A minimal sketch of such a depth-limited search appears just after this list.)
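To make elements 2 and 3 concrete, here is a minimal C++ sketch of a fixed-depth alpha-beta (negamax) search with a static evaluation at the leaf nodes. The Position, Move, generateMoves, makeMove, and staticEval names are placeholders standing in for element 1, not any particular program's representation; a real engine would add the hash tables, iterative deepening, and move ordering described above.

```cpp
#include <algorithm>
#include <vector>

// Placeholder types standing in for element 1 (board representation and
// move generator); a real engine would use arrays or bit-maps as described
// above.  These declarations are sketches, not any particular program's API.
struct Move {};
struct Position {
    std::vector<Move> generateMoves() const;   // legal moves for the side to move
    Position makeMove(const Move& m) const;    // position after making a move
    int staticEval() const;                    // element 3: leaf evaluation,
                                               // scored for the side to move
};

// Element 2 (simplified): depth-limited negamax with alpha-beta pruning.
// Typical root call: alphaBeta(root, maxDepth, -1000000, +1000000).
int alphaBeta(const Position& pos, int depth, int alpha, int beta) {
    std::vector<Move> moves = pos.generateMoves();
    if (moves.empty())
        return -1000000;            // no moves left for the side to move: a loss
    if (depth == 0)
        return pos.staticEval();    // terminal node: fall back on the evaluator

    for (const Move& m : moves) {
        // Negamax: the opponent's best score, negated, with the window flipped.
        int score = -alphaBeta(pos.makeMove(m), depth - 1, -beta, -alpha);
        if (score >= beta)
            return beta;            // cutoff: the opponent will avoid this line
        alpha = std::max(alpha, score);
    }
    return alpha;
}
```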

Most of the playing strength of these programs comes from the tree search (which is not possible without the move generator), since humans have a hard time keeping track of this much detail.

Fogel and his team of software engineers programmed elements 1 and 2, without which element 3 (the product of the genetic algorithm) would have been completely useless. The fitness function was the definition of a win at the end of the tree search: no moves left for the opponent. This was a goal defined by the programmers. The leaf-node evaluation function is not essentially strategic; “strategy” is mostly a byproduct of the tree search.

This is not to say that Fogel’s research is meaningless. It is intriguing and worthwhile. What all this means is that a cleverly designed trial-and-error process, combined with a lot of even more clever software engineering and human-devised algorithms, can produce interesting and productive results.

It should be noted that human-crafted (i.e., intelligently-designed terminal-node evaluation functions) totally trounce Blondie’s eval function with ease.

The really cool stuff in games-playing programs is the development of endgame databases. The massive brute force and memory of modern computers, and highly efficient retrograde analysis algorithms, have produced mind-boggling results.

My colleague and I computed both win-loss-draw databases and perfect-play lookup databases for the game of checkers. (We are the only ones to have computed PPL databases for checkers.) The PPL databases required nearly two months of 24-hour-per-day computation on my dual-processor machine equipped with four gigs of RAM. I kept both CPUs and all four gigs busy for that entire time.
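For the curious, the sketch below illustrates the basic retrograde idea behind a win-loss-draw database; it is a highly simplified illustration, not the actual algorithm used for these databases. Positions in which the side to move has no moves are seeded as losses, values are propagated backward until nothing more can be resolved, and whatever remains is a draw. The enumeration and move-generation helpers are placeholders, and a real computation works backward through predecessor positions (and across capture boundaries into smaller slices) far more efficiently than the naive fixed-point pass shown here.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: assumes some enumeration of every position in an
// endgame slice into indices 0..N-1.  All three helpers are placeholders
// to be supplied by the engine's board representation and move generator.
enum Value : uint8_t { UNKNOWN, WIN, LOSS, DRAW };

extern std::size_t numPositions();                        // positions in the slice
extern std::vector<std::size_t> successors(std::size_t);  // positions reachable in one move
extern bool sideToMoveHasNoMoves(std::size_t);            // terminal: side to move loses

std::vector<Value> buildWLD() {
    const std::size_t n = numPositions();
    std::vector<Value> db(n, UNKNOWN);

    // Seed: a position where the side to move cannot move is a loss.
    for (std::size_t p = 0; p < n; ++p)
        if (sideToMoveHasNoMoves(p)) db[p] = LOSS;

    // Propagate values until a full pass resolves nothing new.
    bool changed = true;
    while (changed) {
        changed = false;
        for (std::size_t p = 0; p < n; ++p) {
            if (db[p] != UNKNOWN) continue;
            bool allSuccessorsWon = true;     // every reply leaves the opponent winning
            bool canReachLostPosition = false;
            for (std::size_t s : successors(p)) {
                if (db[s] == LOSS) canReachLostPosition = true;
                if (db[s] != WIN)  allSuccessorsWon = false;
            }
            if (canReachLostPosition)   { db[p] = WIN;  changed = true; }
            else if (allSuccessorsWon)  { db[p] = LOSS; changed = true; }
        }
    }
    // Whatever neither side can force is a draw.
    for (std::size_t p = 0; p < n; ++p)
        if (db[p] == UNKNOWN) db[p] = DRAW;
    return db;
}
```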

You can download a copy of our paper on checkers endgame databases here. This paper was published in the premier international peer-reviewed publication for games-playing AI programs, the ICGA Journal, and was reproduced in the hardcover book, Advances in Computer Games, Proceedings of the 10th Advances in Computer Games Conference.

Alex Moiseyev, the current human checkers world champion, after playing the PPL databases, exclaimed in his thick Russian accent: “This play is INHUMAN!”

You can download a free small version of my checkers program at http://WorldChampionshipCheckers.com. You can get a copy of the massive and really cool PPL and WLD databases for the cost of disk duplication and shipping ($25) from bobnewell@bobnewell.net. The highly compressed data require 12 gigabytes of disk space.

There is an interesting aspect to this massive computing capability. I call it “the elegance of brute force.” The PPL databases have corrected classic human play that has been around for centuries. Was any new information produced?

I suggest that the answer to this question is both yes and no. Solutions were found that no human could ever have found without the power of modern computation. But what human or team of humans could compute PI to a billion decimal places?
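On the subject of computing pi, a tiny program really does carry all the information in the digits it can produce. The sketch below is a cleaned-up version of the well-known Rabinowitz-Wagon spigot; it streams a few hundred digits of pi using only integer arithmetic. It is purely illustrative: computing a billion digits requires entirely different methods and machinery.

```cpp
#include <cstdio>

// Integer-only "spigot" for pi (after Rabinowitz and Wagon): the information
// content of every digit it prints is carried by this tiny algorithm.
// SIZE = 2800 yields roughly 800 digits, emitted four at a time.
static const int SIZE = 2800;
static int f[SIZE + 1];

int main() {
    int a = 10000, b, c = SIZE, d, e = 0, g;

    for (b = 0; b < c; ++b)
        f[b] = a / 5;                   // initialize the mixed-radix "digits"

    while (c > 0) {
        d = 0;
        g = c * 2;
        for (b = c; ; ) {
            d += f[b] * a;
            f[b] = d % --g;             // carry step in mixed radix
            d /= g--;
            if (--b == 0) break;
            d *= b;
        }
        std::printf("%.4d", e + d / a); // emit the next four digits: 3141 5926 ...
        e = d % a;
        c -= 14;
    }
    std::printf("\n");
    return 0;
}
```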

Comments
Ok, suppose you have a billion digits of Pi. 2000 digits per page, 1000 pages per book, that's 2 million digits per book. So a billion digits would take 500 books. The fifty millionth digit should be at the end of book 25. You pick book 25 from the shelf, open it up, look at the last digit and there's the fifty millionth digit of pi. Total time - maybe a minute and it's all access time. You don't have to calculate the digits because they're already written down. You already have the information in the first billion digits of pi. Now start with just the algorithm and 500 blank books and start calculating the first fifty million digits of pi. The cows will all be dead of old age before you get there. The reason for the difference in speed is that in the first example you have the information in the digits of Pi. In the second example you have the information in the algorithm that calculates the digits of pi. They are not the same. Similarly, designers who possess knowledge of the strategy of "an iterative process of making small variations at random then testing those variations and retaining those that improve our strategy" don't know what the result of the process will be until they actually do it and see what pops out. And, since it uses random variations, they'll get a different answer on their second run and subsequent runs.Houdin
September 6, 2006, 1:54 AM PDT
Houdin If I have an algorithm that calculates PI to any desired digit (and I do which is simply to divide 22 by 7) I do have the value of PI to any desired decimal digit of accuracy. The algorithm can be considered a compact method of storing the information, like shorthand. It simply takes longer to access it than you might expect. Just because it takes longer doesn't change the essential nature of it. You know PI to any desired decimal place. Consider that you might have PI written down to a billion decimal places. I ask you for the value of the 50 millionth digit. Manually you'll be counting out digits until the cows have gone and come home several times over. You have the information, you just have to wade through an iterative algorithm (in this case counting digits) to get a specific bit of it. This doesn't compare to a random process. There is nothing random about an algorithm that calculates PI. You get the same result every time. A random process can be expected to give you a different result every time (or at best there's no guarantee of the same result). You're comparing apples to oranges.DaveScot
September 5, 2006, 1:59 AM PDT
Gil and Tom, thanks for responding. It looks like evolution does forward pruning. For example, if a fertilized egg is created with a damaged gene that will prevent a heart from developing, that egg does not grow up to have offspring carrying that bad gene and those offspring will never produce eggs that test the combination of a bad heart with a modified liver or a bad heart and a modified ear or a bad heart and a modified lung, etc. Millions and millions of bad combinations will never be tried, making evolution much more efficient. Gil and Dave, if you have an algorithm for calculating PI to a million decimal places, you do not have knowledge of the digits of PI, you only have knowledge of the algorithm. For example, you can stare at the algorithm till the cows come home, but it won't tell you what the millionth digit of PI is. You'll have to follow the instructions in the algorithm over and over again until you finally calculate the millionth digit of PI and then and only then will you know what that digit is. Similarly, you won't know what the iterative process of making small random changes and retaining those that are useful will produce until you run the process over and over and see what it produces.Houdin
September 5, 2006, 1:32 AM PDT
Dave, Good point in #41: "All this proves is that intelligent agents can use tools of their own design to speed up searches for answers." That was essentially my point in the comment about computing PI to a billion decimal places. Sure, we now know PI to a billion decimal places and didn't before, so we have new "information" in a sense, but this information was inherent in the algorithm designed by the programmers. It's basically the same thing with endgame databases or using evo algorithms with evaluation functions. None of this has any bearing on biological evolution.GilDodgen
August 27, 2006, 3:43 PM PDT
"it had been able to derive, by random mutation and selection, more information about how to play the game than the designers themselves possessed" Not really. The designers themselves possessed the information. The information was "our strategy can be improved by an interative process of making small variations at random then testing those variations and retaining those that improve our strategy". The computer only did faster what the designers were always able to do on their own. That's the story of computers in a nutshell. They do exactly what they're instructed to do (which is often not what we want them to do). All this proves is that intelligent agents can use tools of their own design to speed up searches for answers.DaveScot
August 27, 2006, 2:13 PM PDT
Gil, Sorry to interrupt your thread. I'm banned on all Esley Welsberry's websites and wanted to send a message to him (he and/or his minions clutch at and read every word we write here). We started out as igames.com but an early partner who didn't pan out owned the URL, and when we parted ways he kept it. I changed the name to gamelobby.com and owned that URL for many years. My son didn't care for it and some time ago got cardandboardgames.com instead. But Game Lobby stuck on the website and in the software itself. Anyhow, the cribbage program is in the download package. All the games I wrote for it (cribbage, hearts, backgammon, spades, gin rummy, pinochle, and euchre) have the ability to stand alone and connect with other players by typing in an IP address. Once the package installs just go to the installation directory and launch the cribbage exe file. It's all written in MS Visual C++.DaveScot
August 27, 2006, 1:16 PM PDT
Houdin: "I have a question about alpha-beta minimax algorithms. Are those where you see that a certain move will lead to disaster and don't check out any more moves after that?" Not exactly. What you describe is called forward pruning, but this can be risky business in an AI games-playing program because, for example, it might be that you can make repeated, initially seemingly pointless sacrifices, but engineer a checkmate in doing so. Alpha-beta allows risk-free tree pruning within the horizon of a depth-first search and the knowledge implemented in the evaluation function. On the other hand, my checkers program does include a highly-sophisticated forward-pruning algorithm that is only employed in clearly defined, special circumstances. Its heuristics are 99.9999999% accurate, so I don't worry about its statistical deficiencies because they will realistically never come up in actual play. The concept of the statistical improbability of erroneous collisions in the hash tables is also used to improve search efficiency and memory usage. None of this has any relevance to biological evolution.GilDodgen
August 26, 2006, 7:45 PM PDT
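For readers wondering about the hash tables mentioned here and in the original post, the sketch below shows the standard Zobrist-hashing idea (it is illustrative only, not WCC's actual code): each square/piece combination gets a random 64-bit key, a position's signature is the XOR of the keys for its occupied squares, and previously searched positions are cached in a table indexed by that signature. Distinct positions can in principle collide, but with 64-bit signatures the probability is astronomically small, which is the "statistical improbability" Gil refers to above.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Minimal Zobrist-hashing / transposition-table sketch (illustrative only).
// Checkers has 32 playable squares and 4 piece kinds:
// black man, black king, white man, white king.
static const int SQUARES = 32;
static const int PIECE_KINDS = 4;

struct Zobrist {
    uint64_t key[SQUARES][PIECE_KINDS];
    uint64_t sideToMove;

    Zobrist() {
        std::mt19937_64 rng(12345);   // any fixed seed; the keys just need to look random
        for (int s = 0; s < SQUARES; ++s)
            for (int p = 0; p < PIECE_KINDS; ++p)
                key[s][p] = rng();
        sideToMove = rng();
    }
};

// Signature of a position: XOR of the keys of its occupied squares.
// board[s] is -1 for an empty square, otherwise a piece kind 0..3 (placeholder encoding).
uint64_t signatureOf(const Zobrist& z, const int board[SQUARES], bool blackToMove) {
    uint64_t h = blackToMove ? z.sideToMove : 0;
    for (int s = 0; s < SQUARES; ++s)
        if (board[s] >= 0) h ^= z.key[s][board[s]];
    return h;
}

struct TTEntry {
    uint64_t signature = 0;   // full signature, used to detect index collisions
    int      depth     = -1;  // search depth at which the score was computed
    int      score     = 0;
};

class TranspositionTable {
public:
    explicit TranspositionTable(std::size_t sizePowerOfTwo)
        : table_(sizePowerOfTwo), mask_(sizePowerOfTwo - 1) {}

    void store(uint64_t sig, int depth, int score) {
        TTEntry& e = table_[sig & mask_];
        if (depth >= e.depth) {           // prefer results from deeper searches
            e.signature = sig;
            e.depth = depth;
            e.score = score;
        }
    }

    bool probe(uint64_t sig, int depth, int& scoreOut) const {
        const TTEntry& e = table_[sig & mask_];
        if (e.signature == sig && e.depth >= depth) {
            scoreOut = e.score;           // same position seen before, searched deep enough
            return true;
        }
        return false;
    }

private:
    std::vector<TTEntry> table_;
    uint64_t mask_;
};
```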
"Are those where you see that a certain move will lead to disaster and don’t check out any more moves after that?" When we find that a game state (situation) leads to an inferior score for the player whose move produced the state, we don't waste time evaluating just how bad it is. An "inferior" score is one that is worse than the best score the player is known to be able to obtain through some other line of play. The inferior score may be quite good on an absolute scale. "Is this similar to the alpha-beta minimax algorithm?" Sorry, but I can't discern adversaries in your examples, and I can't see how evolutionary processes recursively searches to a certain depth in the phylogenetic tree and then backs fitness information up the tree. Am I missing something you're trying to say? I do find it interesting, though, that most "hopeless monsters" are culled early in ontogenetic development.Tom English
August 26, 2006, 1:44 PM PDT
GilDodgen: "an alpha-beta minimax algorithm is employed" Gil and Tom, I have a question about alpha-beta minimax algorithms. Are those where you see that a certain move will lead to disaster and don't check out any more moves after that? For instance, if you move a pawn and your opponant's next move checkmates you, you don't bother trying to figure out what move to make next because the game is over. This, of course, saves you from having to look at a huge number of moves that will never be made and you can use your computer cycles to figure out moves and counter moves that don't lead to checkmate. I ask because Darwinian evolution has a very similar situation. For instance, suppose you have a fertilized egg with a mutation to part of the DNA that codes for a protein that is essential to heart function. Without that intact protein, the heart never develops, so the embryo dies. That means that evolution never has to check out the countless situations where you have that mutation to the heart protein AND mutations to, say, a protein that is important to liver function or the clarity of the lens of the eye or toenail hardness or countless other mutations. Is this similar to the alpha-beta minimax algorithm?Houdin
August 26, 2006, 9:55 AM PDT
j: "Just because you’re not satisfied with one of Dembski’s replies so far doesn’t mean it shouldn’t be considered in the discussion." I discussed the paper. You were notably absent from the discussion. Even now, you offer no counter to my criticism of the "proof bluff." You merely assert that what others have already done should be done. So far? Does your faith that he will do better eventually make the present paper more valuable? j: "Actually, it seems that chance-and-necessity processes generate CSI only if the necessity is as defined by an intelligent agent, rather than being defined by chance. In which case its not really necessity." Say what?Tom English
August 25, 2006, 7:23 AM PDT
bFast: "A strong correlation that I have seen, however, is between an interest in and aptitude for games of intellectual skill, such as chess and checkers, and skill as a software developer." Bingo! Give the man a CEE-gar. The best programmers can think very quickly and logically. They can form models of complex logic/decision chains in their minds, test those models in the abstract, and do it much faster than the average bear. These are the same skills needed to excel at games of intellectual skill as well as the math portion of the SAT and IQ tests in general, which are just another kind of intellectual game.DaveScot
August 25, 2006, 6:50 AM PDT
I'm curious; is it the case that a weak checkers player (e.g., one that just makes pseudo-randomly generated moves) has less CSI than a strong player? Does a program which checks if a correct move has been played have the same CSI as a weak or strong player? How do you work this out?Patrick Caldon
August 25, 2006, 6:36 AM PDT
Tom English: "It was j who indicated it was relevant." Actually, I said that the paper "probably should be considered in the discussion." Just because you're not satisfied with one of Dembski's replies so far doesn't mean it shouldn't be considered in the discussion. Tom English: "Evolutionary computations are chance-and-necessity processes that seem to generate CSI." Actually, it seems that chance-and-necessity processes generate CSI only if the necessity is as defined by an intelligent agent, rather than being defined by chance. In which case its not really necessity.j
August 25, 2006, 4:31 AM PDT
Salvador, Regarding post 28, the preface to the paperback edition of NFL merely points out that "Coevolutionary Free Lunches" does not apply to biological evolution. This does not change the fact that Bill continues to disseminate a claim that the main conclusion of the paper is wrong. And it is irrelevant to my observation that it is inappropriate for Bill to make a counter-assertion rather than provide a counterargument. It was not I who brought up Bill's "rebuttal" of "Coevolutionary Free Lunches." It was j who indicated it was relevant. I responded by arguing strongly that it was not. But let me say this: The claims of ID speak to many things other than living systems. Bill Dembski's law of conservation of information says that no chance-and-necessity process generates complex specified information. Evolutionary computations are chance-and-necessity processes that seem to generate CSI. The fact that they are not biological makes them no less a potential contradiction to the law of conservation of information.Tom English
August 24, 2006, 9:14 PM PDT
Tom, I rank myself as a novice based on the standards of the American Checker Federation. In ACF tournaments there are three divisions: Minors, Majors, and Masters. I wouldn't think of embarrassing myself by entering the Minors at the ACF Nationals because I would come in at the bottom of the pack, if not dead last. The Majors division represents the "expert" category (although there is no such formal ACF designation as "expert"). The ACF has a rating system similar to that of the chess world, but you must enter tournaments with other rated players in order to get a rating. By my definition, an expert would be one who could place respectably in the Majors division. The four opening- and mid-game factors I mentioned in post 24 (single-corner development, center control, double-corner and king-row defense) are not esoteric concepts known only to the checkers elite who have spent years studying the game. These are the ABCs of checkers strategy that every student learns immediately after learning the rules of the game. This is equivalent to learning about the benefits of castling in chess. I wish I had time to write about Marion Tinsley, who was the greatest checkerist of all time, in my humble opinion. I fortunately had the opportunity to meet him and have my computer program compete against him at the 1990 ACF Nationals. He scored three draws and then defeated my program in the last round. I once discovered that my computer program, after running overnight and analyzing about a billion positions (in a mail-play tournament in which I was competing with the use of my computer program -- and my competitor knew that I was using my program), had refuted a classic checkers line. I called up Marion. He was watching Nova on PBS and I asked him if I could take a few minutes of his time. He said that he would rather talk to me than watch Nova. I asked him if he needed a checkers board to run up the position. He said no. I verbally rattled off the moves in standard checkers notation but didn't reveal the refutation that my program had found after analyzing a billion positions. Marion thought for less than a minute and replied: "Of course, you have 11-16!" which was the move my computer program found. I talked with Marion just a few weeks before he died in 1995. He was a professor of mathematics and a devout born-again Christian.GilDodgen
August 24, 2006, 6:26 PM PDT
I personally find that the growing evidence that insects have evolved very little since the dawn of flowers is significant. The relationship between flowers and insects is significant. One would surely expect that these two kingdoms coevolved, but it seems that they didn't, or not very much at all anyway. In light of this contradictory (to NDE's prediction) evidence, I personally write off the majority of coevolutionary theorizing. With a few possible exceptions here and there, it doesn't seem to happen.bFast
August 24, 2006, 5:23 PM PDT
Wolpert is a rabid Darwinian. As such he has no business serving as an Editor of the Journal of Theoretical Biology. Ye Gods, evolution isn't even going on any more. "A past evolution is undeniable, a present evolution undemonstrable." John A. DavisonJohn A. Davison
August 24, 2006, 5:04 PM PDT
Tom wrote: Bill has instead disseminated a counterclaim and told the authors of the original proof to prove it. He gives a tiny “hint” to suggest that he knows how to generate the proof, but really can’t be bothered with the details. To reiterate what I said above, even if someone obtained his proof for him, we would have only contention of two parties, not resolution of the matter. I have given the Wolpert and Macready paper close scrutiny, and I believe the proof is correct. The fact that Bill, as a brilliant mathematician who certainly knows that a counterargument is called for, offers a diversion rather than a counterargument leads me to believe that he can’t find anything wrong with the proof either.
In contrast, Bill wrote in the Preface to the Paperback Edition of NO FREE LUNCH:
… Ironically, the very sketchiness of mathematical details that Wolpert claims prevents one from properly assessing the book does not prevent him from offering just such an assessment. In his review, he writes: “Neo-Darwinian evolution of ecosystems does not involve a set of genomes all searching the same, fixed fitness function, the situation considered by the NFL theorems. Rather it is a co-evolutionary process. Roughly speaking, as each genome changes from one generation to the next, it modifies the surfaces that the other genomes are searching. And recent results indicate that NFL results do not hold in co-evolution.” Since one of my main claims in this book is that NFL results do apply to co-evolution, it would seem that, coming from the inventor of the NFL theorems, this criticism should be devastating. But it is not. In Wolpert’s 2005 paper with William Macready titled “Coevolutionary Free Lunches” (IEEE Transactions on Evolutionary Computation), the authors acknowledge, in both the abstract and the conclusion, that “in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.” I highlight this contradiction not to gloat but simply to point up that the issues raised in this book remain very much alive and under discussion, and that the key players are still quite far from reaching a consensus….
scordova
August 24, 2006, 3:03 PM PDT
Gil, Regarding post 24, tread carefully. You say you're a novice, but can beat Blondie. David says Blondie has achieved an expert rating in play against humans. I have been at conferences where he set up his laptop in the hallway and took on all comers. Each time, he announced the outcome of the competition at the meeting. I vaguely recall something like 25 wins and 1 loss in one case. I think your post itself calls the notion that you are a novice into question. If you can spot and name configurations as you do, you understand a lot more than most people do. I have played Blondie, and she trounced me. By the way, have you checked p. xi in David's book? ;) Cheers, TomTom English
August 24, 2006, 2:51 PM PDT
Gil, trrll: “In other words, the only thing they programmed explicitly was the ‘laws of nature’ for checkers…” Gil: "Wrongo. The alpha-beta minimax algorithm is a human-devised search technique designed to maximize a score at the root node of the search tree. This is not part of the laws of nature for checkers." Trrll is only slightly off the mark. See the section on "The Checkers Engine and the Darwin Engine," starting on page 178 of Fogel's book on the system. Excerpt: "The design for our program decomposed naturally into two separate modules. [...] The evolutionary process was an iterative exchange of information between the Darwin engine and the checkers engine. Having worked on evolving neural networks in many different applications, we already had most of the Darwin engine complete right from the start. Kumar generated the first prototype for the checkers engine in about a week." Unfortunately, the Darwin engine sounds somewhat like a neural-net engine here. Actually, it is trivial to make the Darwin engine generic, so that it evolves individuals without knowing any details about them. (For you programmers out there, the code uses methods of an abstract Individual class. Details of individuals are hidden in subclasses of Individual, e.g. NeuralNet.) The details of the evolutionary process are separate from the details of what is evolved. The checkers engine is essentially the environment. The "laws of nature" in this environment include the rules of checkers and various aspects of how two neural networks are pitted against one another. In essence, there are two identical instances of a checkers program sans static evaluator. Two neural nets are plugged into the programs as static evaluators. The checkers engine gets play underway, and mediates the competition between the two programs. The partial checkers program, with its minimax search and other techniques, does not have a goal of making static evaluators. It is merely part of the environment in which static evaluators are evaluated. The Darwin engine does not have a goal of making static evaluators. Yet the Darwin engine extracts information (in an informal sense) about static evaluation from the environment (checkers engine).Tom English
August 24, 2006, 2:40 PM PDT
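To illustrate the separation Tom describes above (a generic evolutionary loop that knows nothing about what it evolves), here is a bare-bones C++ sketch. The names are invented for this sketch, not Fogel's, and the loop is a single-parent simplification of the population-based scheme Blondie actually used; the "arena" callback stands in for the checkers engine, which mediates games between evaluators without having any goal of producing them.

```cpp
#include <memory>

// Generic "Darwin engine" sketch: it touches individuals only through this
// abstract interface, so the details of what is evolved (e.g., a neural-net
// static evaluator) live entirely in subclasses defined elsewhere.
class Individual {
public:
    virtual ~Individual() = default;
    virtual std::unique_ptr<Individual> mutatedCopy() const = 0;  // small random variation
};

// The "environment": pits two individuals against each other and reports
// whether the challenger is at least as good as the incumbent.  In a
// Blondie-style setup this callback is the checkers engine, playing games
// with each individual plugged in as the static evaluator.
using Arena = bool (*)(const Individual& incumbent, const Individual& challenger);

std::unique_ptr<Individual> evolve(std::unique_ptr<Individual> seed,
                                   Arena compete,
                                   int generations) {
    std::unique_ptr<Individual> best = std::move(seed);
    for (int g = 0; g < generations; ++g) {
        std::unique_ptr<Individual> challenger = best->mutatedCopy();
        if (compete(*best, *challenger))
            best = std::move(challenger);   // retain variations that improve play
    }
    return best;
}
```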
Since I sensed a degree of interest in the topic of music and ID, and to avoid further burdening Gil's thread, I opened a new thread: Wistar Convention, Salem Hypothesis, and Music Salvadorscordova
August 24, 2006, 1:54 PM PDT
Tom, That would be an interesting project if I could find the time. By the way, I’ve played Blondie and was able to beat her (with search depths in the range of seven or eight plies, as I recall), despite the distraction of the scantily clad and very voluptuous lady displayed on the monitor. As I mentioned, I’m a novice player. Most players lose because they get caught in “shots,” which are short-range tactics (sacrifice one, take back two, sacrifice two, take back three, etc.) that are detected immediately by the tree search but that can be tricky for a human to spot. I was able to beat Blondie pretty easily by taking advantage of her very basic positional weaknesses. For example, she failed to develop out of the single corner, made moves to the side of the board, allowed me to weaken her double corner, and did not properly defend her king row. At the same search depths I don’t do this well at all against my program (usually lose, sometimes draw, win on a very rare occasion), because all of this knowledge and much, much more has been handcrafted into the evaluation function. Over the course of several years, two of the nation’s top Checkers Masters (Ed Markusic and Leo Levitt) helped me implement and refine WCC’s evaluation function knowledge.GilDodgen
August 24, 2006, 1:32 PM PDT
Gil: “It should be noted that human-crafted (i.e., intelligently-designed terminal-node evaluation functions) totally trounce Blondie’s eval function with ease.” CHALLENGE (revised): If your checkers system is written in C / C++, why not get Blondie's static evaluator from David and link it to your own code? The best evolved neural net is hard-coded into the function, so it's easy to use. You would have to translate your board representation into Blondie's before calling the static evaluator, and you might have to map Blondie's return values into the range used by your program. HYPOTHESIS: If you restrict the search depth to 8-ply (of course, selective extensions can go deeper), Blondie's static evaluator will beat the static evaluator in your released system. Assuming my hypothesis is correct, it would be very interesting to see how much the search depth must be increased to compensate for your static evaluator's deficiency in "intelligence." This is a worthwhile study, and the results should be publishable.Tom English
August 24, 2006, 12:59 PM PDT
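Purely to illustrate the plumbing such an experiment would need (every name, signature, and score range below is an assumption invented for this sketch; no real interface is being quoted): the host program would translate its own board representation into the borrowed evaluator's encoding and rescale the returned value into the units its own search expects.

```cpp
// Hypothetical glue code for plugging one engine's static evaluator into
// another engine's search.  Every name, signature, and range below is an
// assumption invented for this sketch; no real interface is being quoted.

extern double blondie_evaluate(const int board[32]);   // assumed external evaluator

struct HostPosition {
    int squares[32];   // the host program's own representation (placeholder)
};

// Translate the host representation into the borrowed evaluator's encoding.
static void translateBoard(const HostPosition& pos, int out[32]) {
    for (int i = 0; i < 32; ++i)
        out[i] = pos.squares[i];   // real code would remap piece codes and square order
}

// Rescale the evaluator's output into the integer units the host search expects.
int hostStaticEval(const HostPosition& pos) {
    int translated[32];
    translateBoard(pos, translated);
    double raw = blondie_evaluate(translated);   // assumed to lie roughly in -1.0 .. +1.0
    return static_cast<int>(raw * 1000.0);
}
```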
Trrll: “In other words, the only thing they programmed explicitly was the ‘laws of nature’ for checkers...” Wrongo. The alpha-beta minimax algorithm is a human-devised search technique designed to maximize a score at the root node of the search tree. This is not part of the laws of nature for checkers. Tom English: “When the specification of programs goes from ‘sub-novice checkers player’ in the initial generation to ‘expert checkers player’ in the final generation, and the two humans who worked on the system are novice players, the notion that the CSI came from the humans is not all that satisfying.” My program went from a very weak endgame player (both chess and checkers programs have the most trouble in the endgame) to an absolutely perfect endgame player through the generation of the perfect-play databases. (These DBs really are amazing to watch in action.) I gave the computer an algorithm and let it run for two months. When it woke up at the end, it had gone from a novice endgame player to the best player in the history of the game (within the domain of the DBs). I’m not sure that this is essentially much different than the phenomenon you mention in the Blondie project (which I admit is way cool and very interesting). I didn’t give the perfect-play information to the algorithm, it discovered it on its own. I’m not convinced that either of these processes actually represents the generation of CSI. I’d be interested in Dr. Dembski’s take on this. (Bill, care to comment?) By the way, I am a novice player myself. The game is much too difficult for me.GilDodgen
August 24, 2006, 12:51 PM PDT
Gil, "These computer programs — and the use of evolutionary algorithms to design evaluation functions, as cool as all this stuff is — shed no light whatsoever on how natural processes could have produced the complex information and functionally integrated machinery found in living systems. What these programs demonstrate is that intelligent designers can do neat stuff with computers." Certainly you are right about "functionally integrated machinery," because the structure of the static evaluator is fixed, as is its function within the overall system. But this does not mean that there cannot be a gain in complex specified information. When the specification of programs goes from "sub-novice checkers player" in the initial generation to "expert checkers player" in the final generation, and the two humans who worked on the system are novice players, the notion that the CSI came from the humans is not all that satisfying. Some careful accounting for CSI is called for, not recitation of doctrine. Nobody has given the accounting.Tom English
August 24, 2006, 12:04 PM PDT
j: "'Fitness among Competitive Agents: A Brief Note,' by William A. Dembski, should probably be considered in the discussion" Dembski: "I leave it as an exercise to show that averaged over all possible [values of certain variables], competitive searches based on the game matrix do not outperform blind search (for simplicity assume that the [the values of certain variables] are real numbers between 0 and 1). Accordingly, NFL holds even for competitive agents." Lickety-split, there goes the "Coevolutionary Free Lunches" proof of Wolpert and Macready, just like that. An exercise, indeed! If you see that someone makes an error in a proof and claims that a false proposition is a theorem, how do you deal with the situation? Do you prove that the negation of the false proposition is a theorem? No, because that resolves absolutely nothing. It leads only to two sides saying, "My proof is right. No, my proof is right. NO-O, my proof is right. NO-O-O, MY proof is right." And what is especially depressing about this is that with neither side scrutinizing the proof of the other, both proofs could be wrong. A norm in the cultures of math, science, and engineering is that when you claim a mathematical proof is invalid, you show what is wrong with it. Without a formal counterargument, you are blowing smoke, no matter how lofty an authority you are. Wolpert and Macready give what is known as a constructive existence proof of a coevolutionary free lunch. This means that they contruct a case in which a particular coevolutionary algorithm is generally superior to others. The obvious counterarguments are along the lines that the construction is flawed or the analysis of the construction is wrong. But it does not suffice merely to claim that there must be an error somewhere. You have to find the error. Bill has instead disseminated a counterclaim and told the authors of the original proof to prove it. He gives a tiny "hint" to suggest that he knows how to generate the proof, but really can't be bothered with the details. To reiterate what I said above, even if someone obtained his proof for him, we would have only contention of two parties, not resolution of the matter. I have given the Wolpert and Macready paper close scrutiny, and I believe the proof is correct. The fact that Bill, as a brilliant mathematician who certainly knows that a counterargument is called for, offers a diversion rather than a counterargument leads me to believe that he can't find anything wrong with the proof either.Tom English
August 24, 2006, 11:19 AM PDT
Now for the main topic. The challenge I see for AI simulations validating NDE is that AI simulations invariably have the end point in mind. Chess programs have the end of playing better chess. What would be really fun would be to write an AI chess program, then start playing checkers against it. It would intrigue me if the chess program could discover that the primary rules have changed, and if it could adapt to the changed rules and master them. This, even, however, would not show what is needed to demonstrate NDE via AI.

I consider the bacterial flagellum. I propose the following simple definition of the flagellum: The flagellum is a motive device. Its job is to cause the bacteria to move. Its method is to rotate a filament. If NDE is true, then a pre-flagellum must at some point have existed. By pre-flagellum, I mean a device that did something useful, but that did not cause the bacteria to move. Now, without the bacteria ever "wanting" to move, a single mutational event had to happen which crossed the line between the pre-flagellum and the flagellum. After that mutational event, moving the bacteria must have occurred. If NDE, then in all likelihood this first flagellum also did some other task. However, the fact that the bacteria moved, though it likely had no sense of direction, must have given it some sort of inherent advantage over its siblings. In the context of this discussion, the bacteria stumbled onto "checkers", rather than having any guide pushing it in the direction of checkers. When it stumbled onto "checkers" it had to find advantage in doing so. For this reason, AI has a long way to go in truly simulating an NDE process.

As far as Behe's flagellum claim goes, I really believe that Behe's character will be tested when it can be shown that a bacteria can exist with a pre-flagellum. Not just any manufactured pre-flagellum, but a pre-flagellum that is fully sensible, where all of its parts need to be there. Then with a single mutational event the line between pre-flagellum and flagellum needs to be crossed. Not only must the line be crossed, where we now have motion for the bacteria, but the bacteria must experience some sort of advantage in having this uncontrolled motion. (For surely the control of the flagellum must have developed once there was something to control.) The day that the above can be demonstrated, if Behe is of good character he will admit "flagella are reducible." That day certainly has not come.bFast
August 24, 2006, 11:09 AM PDT
I'd like to put my two cents into the discussion about talents that correlate to software development, as I have made it a hobby to observe this equation. I personally have not seen a high correlation between musical talent, musical appreciation and aptitude as a software developer. Scott, what can I say, I have also not seen a high correlation between mathematical aptitude and software developing skill. As a developer who has lightly dabbled in AI, who has developed significant patented software, but who has not developed software specifically intended to "do math" or to do complex engineering calculations, I have found that my math education has been virtually unused beyond algebra. A strong correlation that I have seen, however, is between an interest in and aptitude for games of intellectual skill, such as chess and checkers, and skill as a software developer. I find that in designing code, I use fairly similar thinking processes to those of playing chess. Software requires envisioning what the "opponent" (user) may do, it requires strategising a few moves ahead, and it requires compartmentalizing your thought processes. At least when I play chess, I think in terms of my current offensive -- if I move here, then here, etc., then I will accomplish this. I also think in terms of the presumed strategy of the other -- I think this guy is trying to achieve this, so he's likely to be moving here or here, so I need to fortify my defenses accordingly. In general, good software naturally leads the user to where I want him to go just as good chess play leads the opponent to the same end. Fortunately, hopefully the user is a willing follower where the chess opponent is likely not. Gentlemen, do you also find a correlation between interest and aptitude for games of intellectual skill and aptitude for software development?bFast
August 24, 2006, 10:56 AM PDT
Fogel and his team of software engineers programmed elements 1 and 2, without which element 3 (the product of the genetic algorithm) would have been completely useless. The fitness function was the definition of a win at the end of the tree search: no moves left for the opponent. This was a goal defined by the programmers. The leaf-node evaluation function is not essentially strategic; “strategy” is mostly a byproduct of the tree search.
In other words, the only thing they programmed explicitly was the "laws of nature" for checkers: The definition of what is a possible move in checkers (corresponding to the laws of nature, which set the limits of the possible), and the definition of success (winning the game, corresponding in nature to an animal carrying out the activities required for reproduction more successfully than its fellows). The result, after generations of evolution, was a program that was capable of playing a better game of checkers than its creators--i.e. it had been able to derive, by random mutation and selection, more information about how to play the game than the designers themselves possessed.trrll
August 24, 2006, 10:30 AM PDT
John is right that the fundamentals of music (harmony, the subdivision of the octave into 12 tones, etc.) are based on the physics and mathematics of sound (the nature of vibrating strings and columns of air). Re Scott, Marcos, and bass playing: I guess this just demonstrates that pianists are more highly evolved than bass players. I’m sure Ernst Haeckel would agree. :-) But back to the main topic of this thread. These computer programs -- and the use of evolutionary algorithms to design evaluation functions, as cool as all this stuff is -- shed no light whatsoever on how natural processes could have produced the complex information and functionally integrated machinery found in living systems. What these programs demonstrate is that intelligent designers can do neat stuff with computers.GilDodgen
August 24, 2006, 8:49 AM PDT