Home » Intelligent Design » Gil Has Never Grasped the Nature of a Simulation Model

Gil Has Never Grasped the Nature of a Simulation Model

Tom English challenged me with this:

I say categorically, as someone who has worked in evolutionary computation for 15 years, that Gil does not understand what he is talking about. This is not to say that he is trying to mislead anyone. It is simply clear that he has never grasped the nature of a simulation model. His comments reflect the sort of concrete thinking I have tried to help many students grow beyond, often without success.

The reason for Tom’s lack of success is that he, and Darwinists in general, try to explain everything with an overly — indeed catastrophically — simplistic model. Here’s what’s involved in a real-world computer simulation:

My mathematical, computational, and engineering specialty is guided-airdrop technology. The results of my computer simulations, and their integration into the mechanics of smart parachutes, are now being used to resupply U.S. forces in Afghanistan. C-130 and C-17 aircraft can now drop payloads from up to 25,000 feet MSL, out of range of enemy small-arms, shoulder-launched missile, and RPG fire, and the payloads autonomously guide themselves to their targets within a CEP (circular error probable) of approximately 26 meters. Did I do all of this highly sophisticated mathematical and software simulation without ever having “grasped the nature of a simulation model”?

One small part of developing this technology involves mathematically and computationally simulating the descent rate of a parachute and its payload at various altitudes. This includes the following: the drag coefficient of the parachute, the chute reference area, the density of the air at various altitudes (not only determined by altitude but lapse rate — the rate at which air temperature changes with altitude), and other subtle considerations, such as the flow-field effects of the payload which changes the drag characteristics of the parachute.

If any mathematical, computational, or real-world assumptions about any of these factors are wrong, or if any unforeseen factors are left out (and what I described above represents a small percentage of what’s involved), the simulation breaks down. We do our best, but we never know for sure until we throw the thing out of an airplane, see where it lands, and tediously analyze the telemetry data recorded by the in-flight computer.

Based on these observations and computer simulations that can be tested in the real world, what confidence can anyone have that biological evolutionary computer simulations have anything to do with reality?

The answer is: none. It’s all fantasy and speculation, masquerading as science.

  • Delicious
  • Facebook
  • Reddit
  • StumbleUpon
  • Twitter
  • RSS Feed

200 Responses to Gil Has Never Grasped the Nature of a Simulation Model

  1. While those credentials are all well and good, they don’t do much to address the substance of the criticisms levelled in the comments of the previous topic.

  2. Gil,

    You haven’t addressed the criticisms that Tom, Bill and I raised in response to your last post.

    You said

    All computational evolutionary algorithms artificially isolate the effects of random mutation on the underlying machinery: the CPU instruction set, operating system, and algorithmic processes responsible for the replication process.

    If the blind-watchmaker thesis is correct for biological evolution, all of these artificial constraints must be eliminated. Every aspect of the simulation, both hardware and software, must be subject to random errors.

    We pointed out that your statement revealed a misunderstanding of the distinction between the simulator and the thing being simulated.

    You are saying that an accurate evolutionary simulation must mutate the underlying hardware and software . But this makes no more sense than arguing that your airdrop simulations are invalid unless you airdrop the computer doing the simulation, or subject the software to temperature variations.

    In your simulations, a virtual payload is descending through a virtual atmosphere beneath a virtual parachute, subject to virtual temperature and density variations. The computer itself need not be airdropped nor subjected to real temperatures and density variations.

    In exactly the same way, an evolutionary simulation subjects virtual organisms to virtual mutations and winnows them through virtual selection pressures. There is no need to subject the hardware and software to real mutations or errors. In fact, doing so defeats the purpose of the simulation by rendering its results inaccurate.

    Your airdrop simulations wouldn’t give accurate results if the hardware and software were mutating underneath them. Why expect an evolutionary simulation to deliver accurate results under those conditions?

    Virtual mutations belong in an evolutionary simulation, but real mutations to the underlying hardware and software do not.

  3. I for one would love to see a real discussion about this. I’m a software professional myself but have no special expertise in evolutionary algorithms. I understand that it’s easy to subtly inject front-loaded CSI into an algorithm and that the more sublte the front-loading is the more illusory the end result is in terms of the appearance of chance and selection having created CSI .

    I think Gil’s original point was that it is necessary for a simulation to be true to the full range of the possible sources and effects of random mutation and that in the natural world these act on the hardware (physical cells) at a primary level and affect the software (abstract information encoded in DNA) secondarilly . Tom English balked at the suggestion that a serious simulation would need to subject itself to meta-level mutations, insisting that the modelling is an abstraction that brings the entire realm of the system being modelled into the software. I think this is right but that Gil’s somewhat rhetorical point still needs to be considered. That the models themselves (albeit completely manifested in the software) must bring the full range of potential deleterious effects into the picture, in order for meaningful inferences about natural evolutionary systems to be drawn from them.

    My question is have there been any case studies in evolutionary computation that have reached towards the level of sophistication in modelling (in software) the full range of effects of random mutation? If so, what are they and what can be learned from them?

  4. josephus63 wrote:

    I think Gil’s original point was that it is necessary for a simulation to be true to the full range of the possible sources and effects of random mutation and that in the natural world these act on the hardware (physical cells) at a primary level and affect the software (abstract information encoded in DNA) secondarilly.

    If that is Gil’s intent, he should be insisting on virtual errors in the virtual organisms (i.e. virtual hardware) and virtual mutations in their virtual genetic information (i.e. virtual software). The fact that he is asking for real mutations and errors in the hardware and software running the simulation shows that he is confusing the simulator with the simulated.

  5. Here’s my idea for a simulation that might mimic the way that RM & NS are supposed to produce new information:

    Our enviroment is a series of, say, ten mazes, with a creature that tries to reach the center. These mazes will consist of simply-connected mazes, and multiply-connected mazes. ( google search if you aren’t sure what these kinds of mazes are )

    Our ‘genes’ that gets mutated are the code that controls the movement of the simulated creature. Since we’re simulating how information is supposed to arise in pre-existing organisms, we’ll give it a starting code that guides it through the maze by ‘following the left hand wall’. This will only enable it to solve the simply connected mazes, and not very efficiently (it will go down many dead ends).

    The fitness function will consist of counting how many moves it takes to solve each maze, and adding up the total (larger numbers mean a less fit ‘organism’). Unsolved mazes are simply given a very high value (A maze is considered unsolved if it fails to complete it within that number of moves).

    For me, to convincingly demonstrate evolution, the resulting code would have to be able to solve the mazes the original could not, and be able to solve other new mazes.

    What do you guys think? Would it be a useful model of RM & NS? Would it fulfill my criteria? I predict that it won’t. The most it might do is produce code that solves the simply-connected mazes more efficiently.

  6. That’s you? The OPFOR guys featured your work several weeks ago in a picture essay of an actual drop run from a C-130(if memory serves well) in Afghanistan.

    http://op-for.com/aboutus.php

    Of course, they also link gratuitous Miss World competitions too, so don’t get to big an ego… ;-)

    Congrats on a job well done Gil! Please pass along to all your workers and crew in armed services a big thanks too!

    well, well, allow me to intercede with a politically incorrect moment of prayer.

    May G_d continue to bless your work with extraordinary insight, knowledge, energy and wisdom to overcome our enemy. And may he bless all others accordingly who support our troops in this global war against oppression and ideology of hatred, that which is all things opposed to the love of Christ. For Christ overcomes the world. In Yeshua’s precious name I ask, God raise up whom You will, Amen and Amen.

  7. In general I agree that Gil has overstepped when he suggests that the computer running a simulation must in itself be subject to random mutation.

    StephanA: “Our enviroment is a series of, say, ten mazes, with a creature that tries to reach the center. ”

    I suspect that you have already front-loaded.

    The only destination that RM+NS permits is survival. I think that any artificial “destiny” beyond survival becomes a front-loading.

    I think it hard, but possibly not impossible to write a truly not front-loaded simulation. I think that the landscape that must be offered to such as simulation must be the internet itself. If a program was written which reproduced itself, and which had some way of randomly being noticed by human operators, and if the human operators had the privelage of destroying any such program that “got in their way”, the results would be a pretty good simulation. If such a program produced variants which became fun, and extended to start doing real-world tasks, like maybe good quality spam filtering, then I would become convinced that RM+NS can actually account for genuine complexity as seen in nature. Remember, however, that the only thing the program can do is reproduce itself with a certain small error rate. The landscape it is in must contain at least the possibility of affecting the display of the computer it is running on.

    Michaels7, mind if I worship the Prince of Peace, rather than claim that my side of a war is somehow divinely right?

  8. The problem here is Gil is talking about models of the real world and programs such as Avida are just Conway’s Game of Life v2.0 Now with lots of new rules and random mutation. But still a fantasy world of cellular automata not a model of reality.

    P.S. Somebody needs to get with Wiki and get The Sim Games on this list of digital organism simulators. The judges all agree the Sims are much more popular than Avida.

  9. Chance never played any role in either ontogeny or phylogeny. The entire evolutionary scenario was planned from beginning to end. Even extinction was part of the scenario. The vast majority of past extinctions were internally produced and had nothing to do even with the accumulation of defective genes. So it was neither bad luck nor bad genes that produced extinctions, to comment on the title of David Raup’s book. Extinction was an orthogenetic, goal directed property of the evolutionary sequence.

    I don’t expect anyone to take my views seriously but I will predict with great confidence that no one will be able to demonstrate that I am wrong. It will be, as every other of my several challenges have been, unanswered and probably even unacknowledged. Ideologues are like that don’t you know.

    I am convinced that every aspect of phylogeny was predetermined or “prescribed” and that chance played no role in any of it. If catastrophic events had not killed the dinosaurs they would have died anyway. It was part of the program if you know what I mean. All the bizarre ones died and only the conservative crocodilia survived. The same can be said for all the other large animals of the past.

    Leo Berg saw this with great clarity:

    “The extinction of organisms is due to inner (autonomic) and external (choronomic) causes.”
    Nomogenesis, page 407

    I wish only that he had used the past tense.

    Let me add that the current extinction, the one that Leakey called the sixth extinction, is due entirely to the mindless insistence on the part of civilized man to deliberately destroy his own environment.

    I just presented a similar summary over at “brainstorms.” I’ll bet they will both die on the vine.

    How do you chance-worshipping, mutation-happy Darwimps like them green apples? I hope they give you a belly ache.

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  10. 10

    Several points of which to take note:

    - Gil’s assertion that simulations are just fantasy, even were it true, has no bearing on the fatal error committed in his previous post.

    - Stephen A.’ simulation would not entail operators pushing computers through mazes on rolling tables as they ran the simulation.

    - Whatever Gil’s intended rhetorical point, he instead displayed ignorance – plain and simple – of the nature of computation and simulation that this post does nothing to correct.

    Scientific American magazine many years ago ran an amusing story (I think it was in an April issue) in which the discovery of an isolated Polynesian island culture was reported. Explorers were mystified by the the ritual behaviors they observed: Island inhabitants pulled ropes that traversed the entire island, ropes that shifted logs from position to position in a huge, island-wide network of ropes and logs. The explorers were astonished to learn that the system performed arithmetic by very slowly executing well known computational algorithms (it has been many years since I read this; some of these details may be misremembered).

    This was a spoof with a serious point: although the island was imaginary, real computation could in fact be done using ropes and logs properly arranged. With a large enough rope-log system an algorithm performing, say, long division could be (slowly) executed.

    More importantly, the same algorithm for division yields the same result by means of the same steps when performed by a person using pencil and paper, when coded in Basic and run on TI 99/4a, when executed by a quantum computer, handled by Searle within his Chinese room, and indeed when performed “mentally” within the working memory of an individual person. The logic of the algorithm is utterly independent of the particulars of the physical substrate on which it is executed (even if that substrate is itself computationally simulated.)

    The point is that the logic of algorithmic computation (including simulation) is independent of the substrate on which it is instantiated. Damage to hardware occurs at a logical level removed from that at which the algorithm is expressed. Similarly, a simulation operates at a level utterly removed from the hardware that hosts the simulation.

    If you don’t get this, well, you don’t get it.

  11. Bfast,

    “Michaels7, mind if I worship the Prince of Peace, rather than claim that my side of a war is somehow divinely right?”

    Not at all, please read my prayer again. I did not ask for my will to be done, but the Fathers – “raise up whom You will.” That includes intelligent troops that can negotiate peacefully in these regions, because wisdom is required for peace, not war. I do not link prayer only to destruction of our enemies.

    And my prayer is one of protection too – for all.

    Those drop offs guided from the C-130 go to Afghanis as well as our troops. Medicine is delivered to sick children, vaccines, food and so many other items needed to help not only protect our troops but to help the people themselves they are trying to free from oppression.

    If you think the Taliban do not oppress, torture, or murder and are equal to our cause and that of our troops, then we do disagree. I believe these efforts and the sacrifices our troops are making are a worthy cause and I certainly believe that most of them have a higher cause in their hearts than someone who burns down schools, indoctrinates children with hatred, oppresses and murders women for any number of fallacious reasons, and blows up civilians on purpose.

    Does that make our cause divine? Only the Lord can make such judgements, but my prayer is that it is in His will.

    When I pray the enemy does not come against the love of Christ and that He overcomes the world, I am in fact asking for His Word, His love to conquer their hearts, and not by mens weapons. When I pray against an ideology of hatred and oppression, I am praying against principalities and powers, not people, who are sheep that God calls. I pray for all people to be saved, not just Americans. This is what is in my heart.

    I pray for Iraqis and Afghanis to have peace for their children’s future. And I hardly think myself divine after my life of debauchery, my side or this nations.

    But it is important to note in reading the Psalms, King David did not pray for defeat and I find it quite hard to do so myself. And from my understanding, Christ liked his music and so did His disciples as they quoted them often. One can pray for victory in many ways, without wishing harm on innocents.

    In truth, the love of Christ will never happen if those trapped in nations controlled by dictators or false teachers who do not allow them to hear the Living Word. One cannot come to Christ without hearing.

    We worship the Prince of Peace together, but He is also the Son of the Lord of Host – Captain of Armies, King of Kings, and Lord of Lords, and Healer, Counseler. He has many names. And one period of His rule will be with a Rod of Iron.

    Repentence is another thing altogether… and one of which I often need to do. As well as this nation needs to as well. But that is politically incorrect also :)

    And I have taken this post off course, so my apologies to Gil.

  12. Michaels7, I personally have been very troubled by the American Christian communty’s belief that America is somehow “in the right” with the authority of Christ himself, and its enemies are “in the wrong” with the same authority. The fact that the USA has abandoned long held values of justice as exemplified by Guantonimo bay and the “secret” CIA prisons seems to be somehow justified. The fact that the stated reason for invading Iraq, namely because of Sadam’s aledged secret WMD programs, has not been validated does not seem to faze the Christian community one bit. “We support the decision of our leader. Bush is a man of God who is making righteous war on our enemies” is the cry of the religious right. I find it disgusting, horrifying, that the religious right would label this war and the actions at Guantonimo bay as “approved” by my creator.

  13. 13

    Several points of which to take note:

    - Neither Gil’s technical accomplishments, nor his assertion that simulations are just fantasy, have no bearing on the fatal error committed in his previous post.

    - Stephen A.’ simulation would not entail operators pushing computers through mazes on rolling tables as they ran the simulation.

    - The thread has seriously wandered off topic and onto the domain of prayer, while on-topic posts are (apparently) blocked by the Nixplanatory filter that governs the exchange here.

  14. 14

    Scientific American magazine many years ago ran an amusing story (I think it was in an April issue) in which the discovery of an isolated Polynesian island culture was reported. Explorers were mystified by the the ritual behaviors they observed: Island inhabitants pulled ropes that traversed the entire island, ropes that shifted logs from position to position in a huge, island-wide network of ropes and logs. The explorers were astonished to learn that the system performed arithmetic by very slowly executing well known computational algorithms (it has been many years since I read this; some of these details may be misremembered).

    This was a spoof with a serious point: although the island was imaginary, real computation could in fact be done using ropes and logs properly arranged. With a large enough rope-log system an algorithm performing, say, long division could be (slowly) executed.

    More importantly, the same algorithm for division yields the same result by means of the same steps when performed by a person using pencil and paper, when coded in Basic and run on TI 99/4a, when executed by a quantum computer, handled by Searle within his Chinese room, and indeed when performed “mentally” within the working memory of an individual person. The logic of the algorithm is utterly independent of the particulars of the physical substrate on which it is executed (even if that substrate is itself computationally simulated.)

    The point is that the logic of algorithmic computation (including simulation) is independent of the substrate on which it is instantiated. Damage to hardware occurs at a logical level removed from that at which the algorithm is expressed. Similarly, a simulation operates at a level utterly removed from the hardware that hosts the simulation.

    This is what some here seem not to grasp.

  15. 15

    But Bush IS a man of God who is trying to establish a democracy in the middle east and nothing Bruce Fast or anyone else can detract from that very worthwhile venture. I wish him well.

    How do like them apples Bruce? I hope it gives you gas.

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  16. Nice to see this thread hasn’t gone off topic.

  17. John A. Davison, “I hope it gives you gas.” It does.

  18. recip

    re ropes and logs

    It’s all well and good to model it but you still have to build the real thing to see if it works. That’s how models work. You don’t seem to understand that. Once you build it and it works as the model predicted then you have a working model, until then you have an untested model. Capisce?

  19. 19

    DS:

    “It’s all well and good to model it but you still have to build the real thing to see if it works.”

    Your reply doesn’t go to the point of my post, which is that the logic of algorithmic computation (including simulation) is independent of the substrate on which it is instantiated. This is true for all computation – not just modeling and simulation (e.g. for arithmetic computations). Hence the long division algorithm of my example remains logically unchanged across the many media (pencil and paper, 99/4a, quantum computer, banks of vacuum tubes, frontal lobes, etc) on which it can be instantiated. This independence is what makes Gil’s original suggestion vis mutating the hardware of a computer running an evolutionary simulation rather silly. This has been stated and illustrated many ways in this (and the previous) thread.

    I didn’t intend the rope-log computer as an analogy to something simulated – rather, the spoof illustrates this independence of computation and the physical substrate hosting omputation. We already know that computation can be hosted by virtually any medium, and that the rope-log computer could be made to work. (Whether it would be practical is an entirely different question.)

  20. If it flies, the simulation was good. If it crashes, the simulation was bad. Everything else is irrelevant.

  21. One other obvious point: A simulation must accurately depict the system being modeled. The computational machinery and information content of biological systems is inherent in, and quintessentially critical to, the function of the system being modeled, and therefore cannot be excluded from the effects of mutations, without the simulation being rendered completely meaningless.

    There is nothing analogous in guided-airdrop simulations. My sims have been proven to work in the real world. Bio-sims have not.

  22. 22

    StephenA – If you Google “genetic programming maze” you’ll find several sites devoted to this subject. At least one seems to let you design your own maze.

    bFast – what’s important to NS is differential survival. That is usually modelled as scoring well at some task. If only two states are allowed, you’ll still get progress, but slower than with a finer grained scale.

    It is also possible to claim that there is front loading in the choice of instruction set (this is in referenece to GP systems, such as StephenA was speculating on). A way to avoid this is to give a GP system a random instruction set which may or may not include the operators that the researcher knows a priori are useful. All this does is slow down the system, it doesn’t stop it from eventually finding better and better solutions.

    DS – Avida isn’t a CA, AFAIK.

  23. On the other thread, Tom English wrote:

    Teaching computer science students from the undergraduate to the doctoral level, I encountered quite a few who were excellent programmers, but who could not begin to comprehend the notion of a model. The concept is simply too abstract for some people. They never catch on to it.

    Tom,
    You’re absolutely right. As Gil demonstrates, some people never get it, even after having it repeatedly explained to them.

    Gil wrote:

    One other obvious point: A simulation must accurately depict the system being modeled. The computational machinery and information content of biological systems is inherent in, and quintessentially critical to, the function of the system being modeled, and therefore cannot be excluded from the effects of mutations, without the simulation being rendered completely meaningless.

    Gil,
    Let me make this as clear as I can:

    1. Yes, the reproductive apparatus of life is (in part) an information processing system.
    2. Yes, the computer and software running the simulation are also an information processing system.
    3. No, they are not the same system. In an evolutionary simulation, one of them (#2) is simulating the other (#1).
    4. Yes, a completely realistic simulation of evolution should include mutations of the reproductive apparatus being modeled(#1).
    5. No, this does not mean there should be mutations of the computer and software running the simulation (#2).

    Do you see the difference?

    Let me try again using your example of autonomously guided airdrop payloads.

    1. The guidance computer and software on one of your airdropped payloads form an information processing system.
    2. The computer and software you use to do an airdrop simulation also form an information processing system.
    3. They are not the same system. In an airdrop simulation, one of them (#2) is simulating the other (#1).
    4. If you wanted to simulate the effects of hardware or software errors on an airdrop, you would introduce the errors into the model of #1.
    5. You would not introduce errors into the computer and software running the simulation.

    Errors in the hardware and software of the simulator only serve to produce nonsensical simulation results. If you introduce an error into the simulator’s operating system, you won’t be able to trust the results of your simulation. This is just as true of an evolutionary simulation as it is of an airdrop simulation.

    Finally, just to really hammer the point home: the simulator and the model are separate. I can simulate a broken microprocessor chip on a perfectly functioning computer. You can simulate a failed airdrop on a perfectly functioning computer. We can simulate DNA copying errors using a perfectly functioning computer. The simulator and model exist at different levels. Errors in the model do not require errors in the simulator.

    Sorry to go on so long, but it really seems that the message won’t get through if it isn’t spelled out ultra-explicitly.

  24. recip

    The substrate doesn’t matter if you’re not modeling reality. If you’re modeling reality then the substrate of reality is a benchmark by which all others are measured. Say we’re modeling an aircraft. Do you get an FAA certification based upon it flying in a computer simulation? Of course not. The model may be flawed.

    An actual simulation of evolution would need to begin by modeling biochemistry just as Gil’s simulation begins by modeling the atmosphere. By modeling reality there then becomes a benchmark against which the simulation can be tested.

    An example of a real honest to God biological simulation is protein folding. It’s something of a holy grail. We don’t have a model that produces the folds reality produces. You have to walk before you can run. A real testable model of evolution is a long way off. We’re still working on the biochemistry part. When we get protein folding licked then we can start plowing through sequenced genomes getting accurate 3D models of the proteins they produce and how those proteins behave. When we get there things will be getting really interesting in biological simulations.

    Avida and other digital organism programs are silly in comparison to these which actually model (or attempt to model) something real where the model can be tested.

    And yes, disabling the computer the simulation is running on is a silly suggestion for how to make a better model of life. The simulations he’s talking about are silly to begin with since they’re nothing but fantasy worlds. That’s why I made a joke out of it saying another way to make it more real would be an asteroid that smashes the hardware to smithereens periodically. Are you forgetting the computer maxim – silly in, silly out?

  25. StephenA-

    Here’s my idea for a simulation that might mimic the way that RM & NS are supposed to produce new information

    In some sense, this suggestion is not very useful as long as Gil and other IDists believe that software and hardware must undergo mutation. If we treat their suggestions seriously, no simulation at all can occur. Sure, we know they’re wrong, but it’s also important to show them that they are wrong. Failure to do that means that they will continue to erroneously play the “simulations don’t mirror actual reality” card, which may be convincing to people who don’t know better.

    I’ve thought about creating evolutionary simulations which involve entire ecosystems (letting natural selection rather than an explicit goal act as the selector). Part of me thinks that it’s a waste of time because I know that ID advocates will always come up with some reason to dismiss the software. To put it another way: there is no conceivable simuation which could satisfy ID advocates. IDists may disagree with this, but the suggestions that involve destruction of the computer which runs the simulation or suggestions that are far outside of our compuational power (protein folding) end up preventing any “accurate” simulation from happening at all.

    DaveScot -

    An example of a real honest to God biological simulation is protein folding. It’s something of a holy grail.

    But, that’s really overkill. Sure, you can say that we need to mirror the real world all the way down to the protein-folding level. (Nevermind my earlier point that we have nowhere near the computational power to do so – the “Folding at Home” project has massive computing power being put into this, and they’re nowhere near the needed computational power. An evolutionary simulation that does this multiplied by billions of organisms over billions of years is clearly out of our reach. Further, even if it were done, I suspect it would be attacked with claims that information was subtlely smuggled in, that the simulation was “intelligently designed” and therefore useless, or (as Gil claims) that the software and hardware must be subject to mutations as well.) But, it’s overkill to do this if you simply want to show that RM+NS is capable of producing information. That point can be shown through simple genetic algorithms; no need for simulation down to the protein-folding level if you want to bust the argument that RM+NS can’t produce information. If your argument is that something else confounds the evolutionary process somewhere else, then maybe a system capable of doing protein-folding would be useful. But, to break through the claim that “RM+NS can’t produce information/CSI”, it’s overkill.

    As far as genetic algorithms are concerned, I think it’s already clear that they do produce information – the main question is how do we show that to people who resist this conclusion or have intellectual hangups or misunderstandings somewhere?

  26. 26

    I am beginning to think that evolution WAS entirely the loss of potentiality. There is no evidence that any contemporary organism is capable of ever becoming anything different from what it is right now. Furthermore, there is no evidence in the past for any such event even though I know it must have taken place. I conclude that the “evolvers ” are all gone by the wayside and we are left with only the products, all of which are doomed to extinction.

    How does that grab you?

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  27. bfast — In general I agree that Gil has overstepped when he suggests that the computer running a simulation must in itself be subject to random mutation.

    Wouldn’t that depend on what degree of evolution that you are trying to model? If one is trying to demonstrate how life occurred without intelligent design, it should be recognized as axiomatic that you can’t you do so using intelligent design which obviously includes hardware and software. OTOH, if one is trying to demonstrate evolution with intelligent design, then simulation programs will make sense. Further, the more the ID the more rational the evolution.

    “We support the decision of our leader. Bush is a man of God who is making righteous war on our enemies” is the cry of the religious right.

    Where the heck are you getting this from??? Anyway, isn’t the war was supposed to be some kind of neocon (Jewish) conspiracy

  28. And my last sentence was sarcasm, in case anyone misses that.

  29. The point is that the logic of algorithmic computation (including simulation) is independent of the substrate on which it is instantiated.

    As an undergrad at MIT, Danny Hillis and teammates implemented a tic-tac-toe player in Tinker Toys. It was guaranteed never to lose a game. I am sure there are some here who would say, “How can tic-tac-toe be played with Tinker Toys? You have to write on a piece of paper to play the game.” This reflects a common deficit in abstraction. One can as represent the game state in Tinker Toys as well as one can with marks on a piece of paper.

  30. bFast,

    You’re a great example of why I can’t make ID-NDE into a litmus test of the decency of a person. Thanks for your posts.

  31. It’s all well and good to model it but you still have to build the real thing to see if it works. That’s how models work. You don’t seem to understand that. Once you build it and it works as the model predicted then you have a working model, until then you have an untested model. Capisce?

    Proof of principle does not require a fit to data. We often see here claims of what random mutation and natural selection cannot do, and evolutionary computation puts the lie to those claims. It does not matter one whit whether the simulations fit biological observations. If you read the writings of Bill Dembski, you will see that he understands as well as I do that the essential questions regard informational physics, not biosystems per se.

  32. If it flies, the simulation was good. If it crashes, the simulation was bad. Everything else is irrelevant.

    An extremely limited notion of what a simulation can teach us. Read post 31.

  33. The computational machinery and information content of biological systems is inherent in, and quintessentially critical to, the function of the system being modeled, and therefore cannot be excluded from the effects of mutations, without the simulation being rendered completely meaningless.

    Ah, Gil, but what you seem not to comprehend — this, I think, is a genuine misunderstanding — is that there is no absolute distinction between an analytical model and a simulation model. And if simulation models suffer the defects you say they do, then Bill Dembski’s abstract models of evolution in Searching Large Spaces and The Conservation of Information are in even bigger trouble. By the way, consider that I published on conservation of information in search in 1996, ten years in advance of Bill’s dissemination of “The Conservation of Information.” I can tell you that the analytic models are not as detailed as the simulation models. The very reason we implement some models as computer programs is that they are not amenable to mathematical analysis.

    You cannot dismiss simulation models of evolution as simplistic without doing the same for Bill Dembski’s and my own analytic models. Sure you want to do that? It doesn’t seem wise to me.

  34. Tom English

    I understand your frustration and but I’m not going to allow your ad hominem attacks to stand. Two were deleted. Knock it off.

    I’m growing very frustrated by you and others’ inability to grasp the fact that models of reality need to be testable. They need to make predictions that can be tested against reality. I’ve given you examples of real models in biology (protein folding), mechanical design (aircraft), and electronics (microprocessors). These all model the real world and can be tested by seeing if they duplicate the results obtained in the real world.

    Here’s yet another real, testable model that interests me: Stellar Evolution. If you don’t see the difference between these, testable models of real world processes, and Avida which creates artificial laws for a world that doesn’t exist in nature, then I just don’t know what more I can say. You give me tinker toys and I give you the stars. The examples speak for themselves.

  35. 35
    Reciprocating Bill

    DS said:

    “The substrate doesn’t matter if you’re not modeling reality. If you’re modeling reality then the substrate of reality is a benchmark by which all others are measured.”

    You miss what I intend by substrate, and are using “substrate” with a different referent than am I. I am referring to the computational system capable of hosting the abstract collection of algorithms that compose the simulation. As pointedly illustrated above, this hardware may be composed of Tinker Toys, paper and pencil, hand calculator, ropes and logs, tunneling quantum events, etc. The logic of the algorithm that runs on these various physical substrates remains constant as the substrates vary.

    Using “substrate” in the sense I intend, as defined above, the assertion that an algorithm is independent of the particulars of the computational substrate is as true when simulating a system that exists in reality as when modeling hypothetical events for theoretical purposes.

    To return to my original example, the computational algorithms that compose a model of an approaching, very real hurricane are as independent of computational substrate as is any evolutionary algorithm run as a theoretical exploration. In the instance of a hurricane simulation you will indeed want to test the predictions issued by your model against the behavior of the actual hurricane, not the least for the purpose of improving your model. Same with Gil’s simulation of the process of dropping guided packages. Nevertheless, these simulations, whether of hurricane or guided package, run at a computational level that is utterly independent of the hardware that hosts those simulations. The computational substrate is not part of the simulation – nor is it the reality against which the results issued by the simulation are tested.

    You use substrate in the sense of the “system that is being simulated and the reality within which it is embedded.” This is not the substrate to which I refer. As you say, this reality is of obvious relevance when testing for accuracy the results issued by a simulation. But this is not the computational substrate to which we are referring.

    BTW, the assertion that it is necessary and desirable to simulate every detail of the system and every level of the system being modeled is misleading. This depends upon the purpose served by the simulation. For example, if I am simulating airfoil shapes to determine which yield the least drag and the greatest lift, it is perfectly legitimate to omit modeling of the interior structure of the airfoil. OTOH, if I am modeling the performance under flight stress of an airfoil destined to be built and installed on actual aircraft that internal structure becomes relevant. So the level of detail required is determined by the purposes served by the simulation. And, often times, one wants to pare one’s simulation down to simple terms for the purpose of better understanding the phenomenon being modeled.

  36. 36
    Reciprocating Bill

    DS:

    “Here’s yet another real, testable model that interests me: Stellar Evolution.”

    Your distinction between models that are testable against empirical observations of some reality (actual stellar behavior) and those that are theoretical explorations certainly legitimately points to different sorts of simulation that serve different sorts of purpose (although the question of to which class of simulation evolutionary models belong is another debate). But your legitimate distinction doesn’t speak to the problems with Gil’s original proposal.

    Gil insisted that random modifications (a feature of a computational model of NS) extend all the way down into the hardware on which that simulation is run (the computational substrate) before the evolutionary simulation is complete.

    This is just plain wrong.

    Nor does it accomplish what I think Gil and maybe you are reaching for – a test grounded 3-space reality of an evolutionary simulation. His proposal simply does not pose that test.

  37. Dave Scott,

    I’m growing very frustrated by you and others’ inability to grasp the fact that models of reality need to be testable.

    I have long been frustrated with certain parties’ inability to recognize that neo-Darwinian evolution is an abstract process, not necessarily implemented in biota.

    In 1995, I ran an evolutionary computation on a massively-parallel computer and obtained more than 20 thousand models predicting annual sunspots counts. The series of sunspots counts is chaotic, and has been a challenging benchmark problem in time series prediction for many years. With a combination of models, I obtained far better prediction accuracy than anyone else ever had (“Stacked Generalization and Simulated Evolution,” BioSystems, 39(1), pp. 3-18, 1996).

    My question for you is how I did that. I had and still have zero knowledge of sunspot formation. To my knowledge, nobody understands the blasted things. How could I have front-loaded the evolutionary computation when no physicist or statistician had ever managed to give a good model of the time series? I can see no way that you can attribute the success of the evolutionary process to my intelligence.

    The point I want you and others to get here is that such successes of evolutionary computation demonstrate the efficacy of abstract neo-Darwinian processes. Again, if this says nothing about evolution, then neither do Bill Dembski’s models of assisted search and added information. How are you going to rule out my work without ruling out his? Direct question.

    I’ve given you examples of real models in biology (protein folding), mechanical design (aircraft), and electronics (microprocessors). These all model the real world and can be tested by seeing if they duplicate the results obtained in the real world.

    Modeling always entails a choice of granularity. No one, but no one, simulates microprocessors at a fine level of granularity. The computational demands are too great, as you should know. There are people working in computational chemistry (e.g., simulation of protein folding), but proteins are no more the right level of granularity for evolutionary simulation than are transistors the right level for microprocessor simulation. If you truly know anything about microprocessor simulation, you know what I am saying. As for aeronautics, let’s consider airfoil design instead of mechanics. Simulation is very useful in that domain, even at high Reynolds numbers. But if you ask someone to give you a precise prediction of airflow, he or she will not be able to give it. The simulations are qualitatively correct, but not quantitatively. In other words, simulations used to design airfoils cannot give the kind of predictions you demand of evolutionary simulations.

  38. 38

    It seems I am wasting my time with this thread also.

    I did manage to flush out a Darwimp in response to my several papers. Just click on John A. Davison on the side bar and join the fun.

    The more the merrier I always say.

    SOCKITTOME!

    I love it so!

    “I’m an old campaigner and I love a good fight.”
    Franklin Delano Roosevelt

    “There is nothing more exhilirating than to be shot at without result.”
    Winston Churchill

    “Darwinians of the world unite. You have nothing to lose but your natural selection.”
    after Karl Marx

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  39. 39

    Tom,

    I think that DaveScot is making the following point. EC has abstracted evolution out of one description of biological reality. It has been successful in using that abstraction to attack problems. Only some EC researchers attempt to explain anything about biological reality, most are happy with having a useful tool to use on other problems. But to reconnect to biology, you have to ensure that you didn’t abstract away the wrong things in the first place. One way to do that is to do very little abstraction. DS, am I close?

    There ar people using to GP to work on protein folding problems.

  40. Tom English // Oct 1st 2006 at 7:12 am

    bFast,

    You’re a great example of why I can’t make ID-NDE into a litmus test of the decency of a person. Thanks for your posts.

    Tom, this is a bit shocking. I’ve been to Pandas Thumb and seen the sneering insults that are smattered about. But it never occured to me that belief in ID-NDE might be a litmus test for “the decency of a person” (I know it’s not your litmus test, but apparently the idea occured to you).

    The only realm in which I see this kind of litmus test for decency is liberal politics, in which liberals often believe conservatives are evil, but conservatives generally believe liberals are merely mistaken or foolish. Since I believe you identified yourself as a college professor (i.e. someone toiling deep in the heart of political liberalism), that explanation for your comment to bFast seems to fit. Am I mistaken?

  41. Sorry, my comments start with “Tom, this is a bit shocking…”

  42. Tom English,

    bFast,

    You’re a great example of why I can’t make ID-NDE into a litmus test of the decency of a person. Thanks for your posts.

    Good trick. I, an IDer, have been suggesting that a couple of other IDers on this site are viewing an issue with an opposite moral position to my own.

    If IDers hold opposite moral positions on any given issue, then your litmus test is bogus, isn’t it.

    Further, I personally agree with many issues held by the religious right. I only fail to understand how the religious right has adopted a love for war. They certainly didn’t get it from the New Testament that I read.

  43. Tom English says:

    Proof of principle does not require a fit to data. We often see here claims of what random mutation and natural selection cannot do, and evolutionary computation puts the lie to those claims.

    And what is that principle? The implication that EC in general involves “natural selection” seems to be quite a stretch.

  44. 44

    Natural selection is very real. Today, as in the past, it PREVENTS change. That is all it ever did. Don’t take my word for it. You never do.

    “The struggle for existence and natural selection are not progressive agencies, but being, on the contrary, conservative, maintain the standard.”
    Leo Berg, Nomogenesis, page 406

    “Animals are not struggling for existence. Most of the time they are sitting around doing nothing at all.”
    John A. Davison

    Any student of the living world knows that. Accordingly, Darwinians are not students of the living world.

    How does that grab you Darwimps? I bet it smarts a little eh? I certainly hope so.

    I l0ve it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  45. Earlier I wrote:

    And if simulation models suffer the defects you say they do, then Bill Dembski’s abstract models of evolution in Searching Large Spaces and The Conservation of Information are in even bigger trouble.

    Some of you are avoiding the crucial point that the leading theorist of the ID movement focuses much more on probability and information than he does living things. If you look at the papers I linked to, you will see that Bill says nothing about protein folding and genes and chromosomes and predators and plagues and earthquakes and meteors. He models information flow that has never been observed directly. There are no data for his models to fit, and he gives no guidance as to how to validate the models. If biological realism were as crucial as you say, Bill would be in bad shape.

  46. Tom English says:

    If biological realism were as crucial as you say, Bill would be in bad shape.

    Well, let us look at the first Dembski article you reference.

    1 BlindSearch
    Most searches that come up in scientific investigation occur over spaces that are
    far too large to be searched exhaustively. Take the search for a very modest
    protein, one that is, say, 100 amino acids in length (most proteins are at least
    250 to 300 amino acids in length). The space of all possible protein sequences
    that are 100 amino acids in length has size 20100, or approximately 1.27×10130.

    Actually, he references the biological context right up front. Now certainly this doesn’t prove anything about the underlying issue, but it does frame the difficulties for a simple RM&NS. Now, biological realities may include some convenient arrangement of the landscape, or maybe some sort of life-friendly self-organization that remains as yet undiscovered. I’ll let the reader decide whether those possibilities favor a claim of “victory” by either side of the debate, but will merely point out that with regard to simple claims of RM&NS, known biological practicalities make Dembksi’s arguments the easier road to travel, so he is unlikely to forsake them as you claim.

  47. Tom,

    It seems to me Dembski isn’t really modelling, is he? I’m no math geek, nor am I an uber programmer geek, so I assume I’m missing something obvious and hope you’ll correct my error.

    Is WD really modelling when he takes what is there – eg, a 100 amino acid protein sequence – and determines the statistical search space accordingly? That seems much different than setting up an digital ‘environment’ to demonstrate evolution. What am I missing?

  48. Roger,

    Yes, Bill gives a made-up problem of searching for a particular sequence of amino acids. He does not go at all into the complexities of how amino acids specify proteins. And he certainly does not mention that in reality many permutations of an amino acid sequence may represent the same protein, and that numerous proteins are represented by amino acid sequences of length 100.

    So what Bill does is no more than to indicate that search for some biological structures takes place in combinatorial spaces. The mention of amino acid sequences does not embue his work with biological realism. The paper would be unchanged if the problem were to search for a sequence of 100 letters (the size of the search space would go from 20 ^ 100 to 26 ^ 100).

  49. todd:

    It seems to me Dembski isn’t really modelling, is he?

    Let’s focus on “Searching Large Spaces.” He gives a model of search for a small target in a large space. He calls that model assisted search. Late in the paper, he indicates that natural evolution must have been an assisted search. That is, his claim is that one can model natural evolution as assisted search.

    Is WD really modelling when he takes what is there – eg, a 100 amino acid protein sequence – and determines the statistical search space accordingly?

    He’s not trying to solve that particular problem. It’s just an example to motivate his analysis of assisted search. His mathematical results are applicable to a large class of problems.

  50. David vun Kannon wrote:

    …to reconnect to biology, you have to ensure that you didn’t abstract away the wrong things in the first place. One way to do that is to do very little abstraction.

    Less abstraction isn’t necessarily better. The more concrete the model, the longer it takes to run. A hurricane model that tracked the path of every air and water molecule would be quite concrete but worthless for forecasting, because it would run so slowly that its forecasts would turn into retrocasts.

    Less abstraction doesn’t necessarily mean better accuracy, either. A model of the soybean market that attempted to track the activity of every neuron in the brain of every farmer, trader, and consumer would degenerate into an incoherent mess, whereas a more abstract model could yield useful results.

    The evolutionary models that are out there are not attempts at “photorealistic” simulations of evolution. Nobody expects to see one of them reproduce the marsupial/placental split, for example. They are instantiations of abstract Darwinian processes (replication, variation, selection), and their purpose is to yield information about the capabilities and limitations of Darwinian processes. Biological evolution is just one instance of a Darwinian process. Whatever we learn about Darwinian processes in general applies to biological evolution in particular.

    The significance of Avida, in particular, is as an example of how a Darwinian mechanism can produce irreducible complexity. Honest critics can no longer claim that NDE cannot in principle produce IC. They must show that a particular IC structure cannot be produced because of the particular local genomic and fitness landscapes.

    This is a blow to the many ID advocates who saw the existence of IC as proof of design.

  51. John Davison wrote:

    Natural selection is very real. Today, as in the past, it PREVENTS change. That is all it ever did. Don’t take my word for it. You never do.

    Ok, John, since you’re feeling neglected, I’ll take your bait. How do you explain nylonase, given your belief that natural selection is solely conservative?

  52. 52

    Because a computer simulation purports to show something is possible doesn’t mean it’s possible or anywhere near possible in the real world.

    That should be noted first off.

    Secondly- I’d bet that a lot of people here and others in ID, in general, would disagree that avida shows what you claim. Even if it did show this in a computer simulation, again- it’s not the real world. On top of that- I’d say a lot of honest IDers would disagree with you and do so honestly. It doesn’t make a person dishonest to discount avida as a fantasy.

  53. In my original post about mutating the CPU instruction set, the OS, etc., I was being somewhat sarcastic. Obviously, this would be silly, and I wouldn’t expect anyone to take such an experiment seriously. My point was that if mutations are genuinely random, we should expect that in a biological system (e.g., a cell) they would interfere with or modify all aspects of a cell’s basic functioning, which would affect the ability of the cell to survive and reproduce. If random mutations killed off a significantly large percentage of cells or made them sterile before they had a chance to reproduce, pass on their genetic information, and for natural selection to work its magic, the rest of the simulation would be rendered invalid.

    My point was just that simple, but apparently I didn’t make it clear.

    My point was also not that genetic algorithms and evolutionary computing are not useful and powerful tools in a wide variety of problem-solving domains.

    I should probably also have been more explicit about the bottom line of my contention: Ridiculously exaggerated and unsubstantiated claims have been made for the real-world relevance of computer simulations of biological evolution. For example, Avida was touted in a premier international science journal as having refuted Michael Behe’s irreducible-complexity challenge to random mutation and natural selection as a viable mechanism to explain away obvious difficulties.

    Avida did nothing of the sort, but the claims made on its behalf were soaked up uncritically.

  54. Because a computer simulation purports to show something is possible doesn’t mean it’s possible or anywhere near possible in the real world.

    Does the above hold for a mathematical model which says that something is impossible (or as good as) as well?

  55. 55

    Karl Pfluger

    Nylonase is an enzyme, not an organism. Adaptive enzymes have been known for decades and probably have nothing to do with creative evolution.

    Does that help? Probably not.

    I am not neglected. I am cynically and deliberately being ignored. So also were my sources and they still are. We are ignored because collectively we have exposed and destroyed the biggest hoax in the history of science. Got that? Write that down.

    Thanks for posting and exposing yourself.

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”

  56. Dr Davison,

    You sir, are an iconoclast of the highest order! Got that? Write it down! :)

  57. I think I had better clarify what my suggestion was for since BC seems to think I’m against ID.

    I think that it is possible to simulate the effects of RM & NS in a computer model, and my suggested model was an attempt to do that. However, I predict that the effects of RM & NS if accurately simulated (even if only in a rather abstract sense) will not produce new information. I think most ID supporters agree with me here (if you don’t, let me know). What I contend is that the simulated models that are supposed to produce new information from mimicing evolution do not in fact accurately simulate RM & NS.
    If I understand correctly, some here also contend that no simulation is capable of producing new information. They may be right, but I don’t know enough to argue that case, so I leave that area alone.

    I hope it is clear what I am saying, and what I am not saying.

  58. P.S. I am just an interested layman who dabbles in programming, so please keep your replies as free of jargon as possible.

  59. 59

    Todd

    Thanks for the compliment. Iconoclasts seek to overthrow established order. That is the purpose of science now as in the past. Thanks again.

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  60. Tom:

    Isn’t what Dembski is doing different than Avida or other Evo-sims are doing? It seems to me you are misstating Gil’s point of contention regarding evo-sims.

    Karl:

    Avida proved IC can arise without intelligent cause? So then, what caused Avida? It seems an impossible task to create a program, especially one which purports to ‘prove’ stochastic evolution, and keep the marks of intelligence out of the picture. This is the bill of goods ‘your side’ is selling – it is counter-intuitive, so I hope you understand how laymen such as myself consider such a claim with great skepticism.

    Here you have creators (programmers) defining the ground rules of the Avida universe, which you say produces IC, refuting IC as a hallmark of intelligent cause – yet… you have no universe to produce the IC without intelligent agents defining the terms!

    Is there some way to resolve this paradox and leave your claim standing? I’m eager to hear it!

  61. StevenA: Re: #56

    Ditto that for me. I’m not a math geek like Dembski nor a uber goober programmer geek like Gil – I don’t ask for ID for Dummies, just minimal mumbo jumbo and gobbledy gook!

  62. Dr Davison:

    I’m glad you recognized it for what it was…and it didn’t give you gas! Heh. :)

  63. Tom English:

    I have long been frustrated with certain parties’ inability to recognize that neo-Darwinian evolution is an abstract process, not necessarily implemented in biota.

    What in the world is this supposed to mean?

    In 1995, I ran an evolutionary computation on a massively-parallel computer and obtained more than 20 thousand models predicting annual sunspots counts. The series of sunspots counts is chaotic, and has been a challenging benchmark problem in time series prediction for many years. With a combination of models, I obtained far better prediction accuracy than anyone else ever had (“Stacked Generalization and Simulated Evolution,” BioSystems, 39(1), pp. 3-18, 1996).

    Gee, let me see: you use a random (chaotic) model, and lo and behold, you’re able to simulate–what is it again? Oh, yeah, a “chaotic process”. Wonderful. Congratulations.

    How could I have front-loaded the evolutionary computation when no physicist or statistician had ever managed to give a good model of the time series?

    As my above comment illustrates, what need did you have to “frontload” anything!

    As to Gil’s proposal, don’t computers, in fact, break down? Tom, have you had that experience? I just had to replace a power unit on a PC. Hard drives break down. Things break down. As well, if you want to talk about “substrates” that “instantiate” a simulation program, then computers are just like cells, the “substrate” upon which the DNA “program” is “instantiated”: and things can go wrong in them. We use our intelligence to solve “hardware” problems; what do cells use? Isn’t that really the point that Gil is trying to draw out? Perhaps Gil would like to comment, but cellular (and hence computer) breakdown has its valid context. But, of course, why worry about whether a “substrate” is going to fall apart when no one, to date, has demonstrated a program that has been “instantiated” on a reliable “substrate” that anything resembles outputting true information.

    The point I want you and others to get here is that such successes of evolutionary computation demonstrate the efficacy of abstract neo-Darwinian processes.

    Sir Fred Hoyle in “The Mathematics of Evolution” likens the evolutionary model to a simple feedback equation that includes a selection term: you vary the system, and if the variance is positive it goes to 100% probability, and if it is negative, then it goes to 0% probability. Why fawn over such a simplistic mechanism and call it “neo-Darwinian abstraction”. Call it what it is: “trial and error.”

    Ah, Gil, but what you seem not to comprehend — this, I think, is a genuine misunderstanding — is that there is no absolute distinction between an analytical model and a simulation model. And if simulation models suffer the defects you say they do, then Bill Dembski’s abstract models of evolution in Searching Large Spaces and The Conservation of Information are in even bigger trouble.

    But there’s a big difference between an analytical solution and a solution derived from perturbation theory, and the analytical is always preferred. So why would you consider Dembski’s argument, that is along the lines of an analytical solution, inferior to evolutionary modeling? I don’t get this.

    [T]he “Folding at Home” project has massive computing power being put into this, and they’re nowhere near the needed computational power.

    And what does this say about the likelihood of random mutation being able to “find” the right solution to protein folding?

    Karl Pfluger:

    3. They are not the same system. In an airdrop simulation, one of them (#2) is simulating the other (#1).
    4. If you wanted to simulate the effects of hardware or software errors on an airdrop, you would introduce the errors into the model of #1.

    This is exactly what Gil is proposing. don’t you see it?

  64. Tom English –

    In your sunspot sim, you may not have known anything about sunspots, but I’d bet you had a dataset of sunspot activity. And when you “evolved” your model, you found one that “fit” the data best. You had a prespecified target, and you gave organisms “partial credit” for getting better at predicting via a “fitness function” evaluator.

    If this was in fact what you were doing (and every evolutionary algorithm I’ve seen has done something like this), then the simulation is a directed search a la Dawkins’s “Weasel” program, which even he himself admits has nothing to do with true, undirected Darwinian evolution, in which organisms don’t get partial credit.

    Is this in fact what you did, or am I mistaken?

  65. todd:

    Isn’t what Dembski is doing different than Avida or other Evo-sims are doing?

    Yes. He has developed analytic models, not simulation models.

    It seems to me you are misstating Gil’s point of contention regarding evo-sims.

    Where? How? Even the most abstract of evolutionary and artificial-life models is more biologically realistic than Bill Dembski’s analytic models. Gil’s contention applies even more to ID theory (by that, I mean the math) than it does simulated evolution.

  66. Gil wrote:

    In my original post about mutating the CPU instruction set, the OS, etc., I was being somewhat sarcastic. Obviously, this would be silly, and I wouldn’t expect anyone to take such an experiment seriously.

    Gil would have us believe that after three days of being criticized for confusing the simulator with the simulated, and after writing a new post which evades the issue altogether but attempts to establish his credentials, that he is just now getting around to mentioning that his first post was “sarcasm” which noone should take seriously.

    Gil, is it really that painful to admit that you were wrong? Even if it was pointed out by (gasp) Darwinists?

    Gil continues:

    My point was that if mutations are genuinely random, we should expect that in a biological system (e.g., a cell) they would interfere with or modify all aspects of a cell’s basic functioning, which would affect the ability of the cell to survive and reproduce.

    Yes, mutations are necessary in an evolutionary simulation. That’s why they all implement mutations and penalize the organisms having deleterious mutations.

    If random mutations killed off a significantly large percentage of cells or made them sterile before they had a chance to reproduce, pass on their genetic information, and for natural selection to work its magic, the rest of the simulation would be rendered invalid.

    Why would that render the simulation invalid? That’s exactly what natural selection should do if the mutation rate is high enough. Suppose , in an extreme case, that a supernova went off within a few light years of Earth. The radiation would be expected to kill all terrestrial life. How is that contrary to NDE?

    Of course, the interesting stuff in evolutionary simulations happens when the mutation rate is quite a bit lower, as it is in real life.

  67. Tom English says:

    Yes, Bill gives a made-up problem of searching for a particular sequence of amino acids. He does not go at all into the complexities of how amino acids specify proteins. And he certainly does not mention that in reality many permutations of an amino acid sequence may represent the same protein, and that numerous proteins are represented by amino acid sequences of length 100.

    You are correct that Bill doesn’t completely specify all the complexities of biology in that one article. But he certainly does address some of those issues elsewhere. The larger point remains: Dembski sees these complexities as friendly to his arguments, hence he is unlikely to ignore them, if for no other reasons than self-serving ones.

    On the other hand, they aren’t necessarily the friend of your position. That’s why I’m a little skeptical when you try to diss their relevance. Of course that’s before you then turn around and make the claim that:

    Even the most abstract of evolutionary and artificial-life models is more biologically realistic than Bill Dembski’s analytic models.

    Feel free to provide some evidence for this claim. Why not take the Avida program that Karl mentions, and show us how it is more biologically relevant than Dembski’s analysis.

    And while you are at it, let us know what “principle” is being proved, and where the “natural selection” is in Avida. I see Karl decided it was just plain “selection” in his description.

  68. John Davison wrote:

    Nylonase is an enzyme, not an organism.

    Some organisms produce nylonase, some do not. It takes a mutation to turn the ones that don’t into ones that do. How is this not an example of non-conservative natural selection?

    Adaptive enzymes have been known for decades and probably have nothing to do with creative evolution.

    Probably? For you to make such a sweeping statement as this…

    Natural selection is very real. Today, as in the past, it PREVENTS change. That is all it ever did.

    …don’t you think you should be a bit more certain of the evidence?

    Does that help? Probably not.

    Nope. Write that down.

  69. todd wrote:

    Avida proved IC can arise without intelligent cause?

    I said, “The significance of Avida, in particular, is as an example of how a Darwinian mechanism can produce irreducible complexity.”

    So then, what caused Avida?

    Programmers.

    It seems an impossible task to create a program, especially one which purports to ‘prove’ stochastic evolution, and keep the marks of intelligence out of the picture.

    Avida doesn’t “purport to ‘prove’ stochastic evolution”, and of course Avida was created via intelligence. How is that relevant to whether the process modeled by Avida requires intelligent input? You seem to be making Gil’s mistake. The process doing the modeling is not the same as the process being modeled. The fact that the former requires intelligent input in order to operate does not mean that the latter does.

    Here you have creators (programmers) defining the ground rules of the Avida universe, which you say produces IC, refuting IC as a hallmark of intelligent cause – yet… you have no universe to produce the IC without intelligent agents defining the terms!

    If you’re arguing that the universe itself must have an intelligent cause, fine — that’s a legitimate question. But that’s emphatically not what leading ID proponents are saying. They claim that IC systems cannot evolve via Darwinian mechanisms, regardless of how the universe itself came into existence.

    Is there some way to resolve this paradox and leave your claim standing? I’m eager to hear it!

    There was no paradox to begin with.

  70. Tom English writes “Even the most abstract of evolutionary and artificial-life models is more biologically realistic than Bill Dembski’s analytic models.”

    You’re joking, right? A digital organism composed of virtual microprocessor instructions, that reproduces by instantaneous magical means, driven to success or failure by laws of fantasy decreed by a programmer/god… ascribing any degree of biologically realistic to that has got to be a joke. If you’re serious then you’re living in denial because I know you’re normally smart enough to not get sucked into such a foolish position.

  71. Tom English writes “I have long been frustrated with certain parties’ inability to recognize that neo-Darwinian evolution is an abstract process, not necessarily implemented in biota.”

    It’s not abstract in the case of biota, Tom. You get that much don’t you? If a process exists ONLY in the abstract then models of it are more appropriately called fantasies. What else can they be if they have no basis in reality? If the process exists in reality then a model of it can be compared to reality (tested) to see how robust the model is. Avida models a fantasy world. The laws of nature in it are made up out of whole cloth, the creatures that live in it are composed of artificial CPU instructions. Write the down.

  72. Tom English writes (re sunspots) “My question for you is how I did that. ”

    With a computer is how you did it. Computers can generate and sort possible solutions to problems at blazingly high speeds. You chose a search method that happened to work well on the data set in question. It’s not mysterious. Computers are fast.

  73. Tom English wrote:

    I have long been frustrated with certain parties’ inability to recognize that neo-Darwinian evolution is an abstract process, not necessarily implemented in biota.

    PaV asks:

    What in the world is this supposed to mean?

    He means that any scenario in which there is replication, heritable variation, and selection constitutes an example of Darwinian evolution, whether or not it involves living creatures.

    PaV scoffs at Tom’s sunspot models:

    Gee, let me see: you use a random (chaotic) model, and lo and behold, you’re able to simulate–what is it again? Oh, yeah, a “chaotic process”. Wonderful. Congratulations.

    PaV, stop and think for a minute. Are all chaotic processes identical? If you’re able to model the dripping from a faucet, does that mean you’re automatically able to model an n-body planetary system? Of course not.

    To model a particular chaotic process, it is not enough to simply create another chaotic process. You have to come up with a chaotic process whose behavior matches the behavior of the system you’re trying to model. The fact that Tom’s model was better than any existing model in predicting sunspot activity is a significant achievement.

    Perhaps you don’t understand that Tom is using the word “chaos” in its technical sense. If that’s the problem, see the following link:

    http://www.imho.com/grae/chaos/chaos.html

    As to Gil’s proposal, don’t computers, in fact, break down?

    Yes, but when a computer breaks down, any simulations running on it will cease to produce meaningful results.

    We use our intelligence to solve “hardware” problems; what do cells use?

    To the extent they are able, they use their inherited repair mechanisms to undo the damage. To the extent they are unable, they die or cease to function properly.

    So why would you consider Dembski’s argument, that is along the lines of an analytical solution, inferior to evolutionary modeling? I don’t get this.

    Read Tom’s statement, paying careful attention to the word ‘if’:

    And if simulation models suffer the defects you say they do, then Bill Dembski’s abstract models of evolution in Searching Large Spaces and The Conservation of Information are in even bigger trouble.

    BC wrote:

    [T]he “Folding at Home” project has massive computing power being put into this, and they’re nowhere near the needed computational power.

    PaV asks:

    And what does this say about the likelihood of random mutation being able to “find” the right solution to protein folding?

    Nothing. They are entirely different problems. The Folding @ Home project is trying to explain why proteins fold as they do. Natural selection is merely “trying” to find proteins which enhance fitness. Natural selection has no knowledge of protein folding and makes no attempt to explain it.

    I wrote:

    3. They are not the same system. In an airdrop simulation, one of them (#2) is simulating the other (#1).
    4. If you wanted to simulate the effects of hardware or software errors on an airdrop, you would introduce the errors into the model of #1.

    PaV asks:

    This is exactly what Gil is proposing. don’t you see it?

    Not at all. This is Gil’s categorical statement:

    All computational evolutionary algorithms artificially isolate the effects of random mutation on the underlying machinery: the CPU instruction set, operating system, and algorithmic processes responsible for the replication process.

    If the blind-watchmaker thesis is correct for biological evolution, all of these artificial constraints must be eliminated. Every aspect of the simulation, both hardware and software, must be subject to random errors.

    He is talking about mutating the CPU and the OS, not the model.

  74. PaV writes Call it what it is: “trial and error.”

    Because that wouldn’t sound very impressive to the casual observer. In point of fact children discover the so-called genetic algorithm quite naturally with no tutelage. You have a problem to solve, you try out a possible solution, if it works perfectly you stop there, if works a little bit you build on it, if it doesn’t work at all you try something completely different.

    There is absolutely nothing new about this problem solving method. It’s as old as dirt. The only thing new is using computers to speed up the process but these current artifical life programmers were far from the first ones to use a computer to speed up the process. What they’ve done is pasted a new name on an old method so they can claim they are the inventors. It’s an old story. “Expert systems” (a 1980′s fad) are the classic example of putting a new name on an old method in order to claim ownership.

  75. Tom English write “It does not matter one whit whether the simulations fit biological observations.”

    A damn good thing too because if you tried, they wouldn’t.

  76. 76

    Karl Pfluger

    I am widely known for making what you call “sweeping statements.” You bet I am. They are the only kind that really matter. And what are you known for? Come on, don’t be shy. Lead me to your publications, especially the ones that deal with organic evolution. I’ll bet you don’t have any. A near as I can determine you are just one more garden variety Darwimp. The internet is crawling with them, thousands of them, all mumbling the same mindless drivel with what Grasse called “Olympian authority.” It makes me sick to my stomach.

    “Never in the history of mankind have so many owed so little to so many.”
    after Winston Churchill

    It is hard to believe isn’t it?

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”

  77. Tom English writes: “There are people working in computational chemistry (e.g., simulation of protein folding), but proteins are no more the right level of granularity for evolutionary simulation than are transistors the right level for microprocessor simulation. If you truly know anything about microprocessor simulation, you know what I am saying.”

    Your ignorance is showing again.

    http://www.ac.uma.es/hpca10/tutorials.html

    Tutorial 1: Advanced Processor Architectures and Verification Challenges
    Presenter: Sunil Kakkar, IBM Global Services, Bangalore, India
    Abstract: Due to the complexity associated with design verification, testing costs for high performance processors are spinning out of control. The verification effort grows exponentially as design complexity grows linearly. It is commonplace for a processor design to contain over 100 million gates today. Considering that a one million gate design requires 5-8 verification engineers, the task of verification is dominating project costs. Server farms of thousands of machines are needed to run design verification tests around the clock.

    Five years ago design verification consumed 50% of the total effort expended on a chip design. Today this percentage has grown to over 80% and is likely to further dominate costs in the near future. All this opens up new frontiers of challenges in verifying the complex processor architectures of today with all the practical constraints that are placed on the verification team. This tutorial will describe the current state-of-the-art in design verification and suggest some directions that could lead to reducing this burden.

    About the presenter:

    Sunil Kakkar has over 20 years of experience in Processor Design, Architecture, Performance and Functional Verification and has successfully led large design verification and performance analysis teams that have ended up handing off fully functional first silicon to the manufacturing and the test teams. Holding a Bachelor’s degree from IIT-Kanpur and two Master’s degrees from the University Of Illinois, Sunil holds a patent for a specialized microprocessor verification flow technique that he invented at Sony and which led to bug free first silicon. Sunil has also taught at the University of Berkeley program for industry professionals. Sunil has also invented a VDL (Verification Design Language) which when used to specify a digital design at a higher level of abstraction can be used to generate testbenches automatically in any HDL or high level language. Sunil was invited to chair the IEEE Computer Society Conference session on Verification. Sunil has worked for companies like Hewlett Packard and Transmeta and is currently managing the Processor Architecture, Performance and Verification Groups at IBM Global Services in Bangalore. He is the chief technologist for the IBM’s e-verification technology initiative in the Asia-Pacific region.

  78. Oh no – there goes Tow-kee-oh…

  79. 79
    Reciprocating Bill

    PaV on Tom E.:

    Tom English:

    “I have long been frustrated with certain parties’ inability to recognize that neo-Darwinian evolution is an abstract process, not necessarily implemented in biota.”

    PaV:

    “What in the world is this supposed to mean?”

    Read Dennett’s discussion of algorithmic processes in “Darwin’s Dangerous Idea” (p. 50). He argues that, by the same token that the logic of long division can be hosted upon virtually any physical substrate (“The power of the procedure is due to its logical structure, not the causal powers of the materials used in the instantiation, just so long as those casual powers permit the prescribed steps to be followed exactly”), selectionist causation (natural selection in biology) is itself an abstract algorithmic process that is portable in the same sense. The power of the selectionist algorithm may be demonstrated in a variety of substrates. This is why modeling of NS by means of computation is so powerful and relevant. And this is what Tom meant.

    (BTW, in revisiting Dennett’s book for the first time in about ten years, I was amused to see that he chose the words “substrate” and “instantiate” to express this notion. But perhaps I was influenced by my prior reading.)

  80. Karl responds:
    Yes. He has developed analytic models, not simulation models

    Isn’t there a big difference between simulation and analysis? This is what I find misleading – analysis takes a look at an existing system and returns some dataset relevant to that which is examined while a simulation is the system itself. For you to say Dembski’s analysis is flawed by Gil’s critique of evo-sims seems to be an apples an oranges type of fallacy. Indeed, his isn’t a ‘model’ in the same sense as evo-sims are models, so when you say:

    Even the most abstract of evolutionary and artificial-life models is more biologically realistic than Bill Dembski’s analytic models

    you aren’t even talking about the same thing. Dembski’s ‘model’ applies to large ‘spaces’ of data returning specific information (or so I simply gather, correct me if I’m wrong). His is abstract in that any given set of data can be analyzed, so when you say Dembski “ says nothing about protein folding and genes and chromosomes and predators and plagues and earthquakes and meteors. He models information flow that has never been observed directly. There are no data for his models to fit, and he gives no guidance as to how to validate the models“, it appears you conflate the two types of models and, I hate to say it, obfuscate the issue at hand.

    Why should he say anything about protein folding? You want to add folding to the search space? I’m sure it can be done, but I doubt it will improve the statistical return in favor of random mutation and natural selection.

    I’m curious, do any evo-sims model anything that has been observed directly?

  81. Sorry the above is meant for Tom, not Karl.

  82. Recip

    We already know that trial and error can find solutions to problems. This is not something that needs testing in the abstract. Avida proves nothing we didn’t know already.

    What we don’t know is if trial and error can produce organic life in the real world. To figure this out we need to first have functioning model of biochemistry in order to understand the limits that constrain the natural process.

    Trial and error can produce any solution that is physically possible given enough time to search and evaluate the possibilities. It isn’t a question of if it works, the question is how fast does it work when the substrate is biotic. Back when the universe was thought to be infinite and unchanging there was no question of rm+ns having the time to produce life since the amount of time was as a big a chunk of infinity as needs be. Now that we know the universe is finite in size, age, and changes over time there are some serious limitations that rm+ns must deal with. The probabilities that it must overcome in the real world to accomplish what we observe in life is the key to whether or not it’s a plausible explanation. In order to figure that out we have to model biochemistry and the natural environment to see how fast and how many trials can be produced & evaluated, or indeed, if it is even physically possible in a real natural environment for organic polymers to reach the self-replicating point.

  83. Karl Pfluger:

    He means that any scenario in which there is replication, heritable variation, and selection constitutes an example of Darwinian evolution, whether or not it involves living creatures.

    Let me clue you in, Karl, a computer is not a biological organism. I hope you understand the distinction. If you don’t, I can explain it in greater detail.

    To model a particular chaotic process, it is not enough to simply create another chaotic process. You have to come up with a chaotic process whose behavior matches the behavior of the system you’re trying to model. The fact that Tom’s model was better than any existing model in predicting sunspot activity is a significant achievement.

    And it only took him 20,000 models to find a combination that gave him good answers. As I said, this is “trial and error.”

    Perhaps you don’t understand that Tom is using the word “chaos” in its technical sense.

    Whether he is referring to a mathematically “chaotic” system or not, the fact remains that the “design fit”, the final output of the model, is “chaotic”. ThusTom’s further point that he didn’t “frontload” anything is rendered meaningless.

    Yes, but when a computer breaks down, any simulations running on it will cease to produce meaningful results.

    Thanks for pointing out the obvious. I suspect you grew up, and perhaps still live, in Germany, because the only other person I’ve met who is similarly condescending grew up there.

    Read Tom’s statement, paying careful attention to the word ‘if’:

    I, of course, understand what the word “if” means, in case you’re wondering about that, too. My question to you is, “Have you read Searching Large Spaces”? There is no “model” of evolution in there. It’s an entirely mathematical development. That is the point–it’s an analytical approach, not a simulation.

    PaV asks:

    And what does this say about the likelihood of random mutation being able to “find” the right solution to protein folding?

    Nothing. They are entirely different problems. The Folding @ Home project is trying to explain why proteins fold as they do. Natural selection is merely “trying” to find proteins which enhance fitness. Natural selection has no knowledge of protein folding and makes no attempt to explain it.

    Karl, you seem to be missing the point that if the proteins don’t fold properly, then biochemical function comes to an end. I hope you understand that. And, of course, NS can’t act on something that is not biologically active. Hence, the massive amount of computer power required to search out the proper solution to folding is an undertaking that “random mutation and NS” would in some way have to deal with. It validates the ID argument that these search spaces are enormous, hence rendering RM+NS hopeless. (How many iterations [=generations] can a computer run per second? Quite a few, I imagine.)

    I wrote:

    3. They are not the same system. In an airdrop simulation, one of them (#2) is simulating the other (#1).
    4. If you wanted to simulate the effects of hardware or software errors on an airdrop, you would introduce the errors into the model of #1.

    PaV asks:

    “This is exactly what Gil is proposing. don’t you see it?”

    Not at all.

    What you are blithely ignoring is that in statement #4 you acknowledge the ability to simulate CPU errors occurring in the primary simulation(model #2 simply models errors in the CPU running model #1). Dave Scot has already pointed this out. But, of course, you’re denying that Gil, or anyone else, can do that, as in this exchange:

    PaV:As to Gil’s proposal, don’t computers, in fact, break down?

    Karl:“Yes, but when a computer breaks down, any simulations running on it will cease to produce meaningful results.

    I wrote:

    We use our intelligence to solve “hardware” problems; what do cells use?

    Karl answers:

    To the extent they are able, they use their inherited repair mechanisms to undo the damage. To the extent they are unable, they die or cease to function properly.

    You seem to have missed the point, Karl. Yes, I’m fully aware of the repair mechanisms that cells have. But you see, Karl, the analogy is that those repair mechanisms mimic what we humans would do to get the “hardware” up and running; and we humans, for the most part, are “intelligent agents”. So, the suggestion is that those repair mechanisms were intelligently designed to avoid, as you’ve stated, the computer from breaking down.

  84. Karl wrote,
    You seem to be making Gil’s mistake. The process doing the modeling is not the same as the process being modeled. The fact that the former requires intelligent input in order to operate does not mean that the latter does.

    I find your argument lacking because the parameters by which AVIDA operate essentially ‘front load’ evolution.

    The specific critiques of AVIDA I’ve read point this out in more detail. The hand of the investigators are all over the latter, for it is defined by the former! Therein lies the rub and the paradox you don’t see.

    I remain unconvinced a programmed simulation (ahem) falsifies the ID IC argument, which is precisely what you are claiming. For if it can be demonstrated that IC systems can arise without intelligent guidance (stochastically), then a central critique of materialist evolution is off the table. While this wouldn’t ‘prove’ stochastic evolution, it certainly supports the notion.

    AVIDA proves programmers can set up conditions in a simple non-real model universe and produce IC. That they set the conditions, the search fields and rules of this universe (then produce IC) is apparently of no consequence to you!

    Wow. From where I sit, your argument is what I’ll generously call “a stretch”… :)

  85. Recip Bill:

    Read Dennett’s discussion of algorithmic processes in “Darwin’s Dangerous Idea” (p. 50). He argues that, by the same token that the logic of long division can be hosted upon virtually any physical substrate (”The power of the procedure is due to its logical structure, not the causal powers of the materials used in the instantiation, just so long as those casual powers permit the prescribed steps to be followed exactly”)

    Well, here you are explaining, of course, that “intelligence” is found in nature. This is an ID argument. Thank you for making it for us.

  86. Tom English

    On the power of abstraction. Is it possible that you yourself are a simulated organism a la “The Matrix”?

    The thing about abstraction, Tom, is that we can quickly take an abstract straight to absurdity. That’s why we make sure that models whose results can cause loss of life and property (say like tornado prediction) are limited as much as practical in the abstact and tested as much as is practical against the real world process being modeled.

  87. Dave,

    If Tom is a mathematician by training, and you, like me, an engineer, then he’ll never understand our practical side, and we’ll probably never understand the abstract side.

    Here’s how I see it: if you love equations, you become a mathematician; if want to explore equations, you become a physicist; if you want to use equations to do something, you become an engineer. Does that pretty much size things up?

  88. 88

    Is there any difference between Tom and Karl? Aren’t they both chance-worshipping Darwimps?

    Iti s hard to believe isn’t it?

    Who is next?

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  89. Dr D,

    Does name calling ever advance an argument? While both men have issued pompous prose, neither have been rude. Name calling is rude. It’s hard to believe, isn’t it? ;)

  90. PaV wrote,
    Here’s how I see it: if you love equations, you become a mathematician; if want to explore equations, you become a physicist; if you want to use equations to do something, you become an engineer. Does that pretty much size things up?

    If you want to wrestle equations in a vat of Crisco what do you become?

    :D

  91. 91
    Reciprocating Bill

    PaV: “Well, here you are explaining, of course, that “intelligence” is found in nature. This is an ID argument. Thank you for making it for us.”

    Of course, some (not I) do argue that an “intelligence” installed the algorithmic process of variation and selection in nature – then allowed the process to operate without further intervention. Advocates of Deistic evolution, of course – not popular around here, I gather. That would be the only sense in which the algorithmic process of variation and selection can be recruited to something like ID.

    That said, your comment misses the point of my post, and Tom’s earlier lament. Selectionist causation exhibits substrate neutrality (e.g. is not limited to biological systems) independent of its origins.

    Dennett goes on to attribute two additional properties to algorithmic processes – mindlessness and reliability. The passage is worth a squint, whether or not you find it congenial.

  92. John Davison,

    That’s it? No response regarding nylonase?

  93. 93

    Todd

    Of course it is rude. Do you really think you can reason with ideologues of whatever persuasion? I know better and so did Einstein -

    “Then there are the fanatical atheists whose intolerance is the same as that of the religious fanatics and it stems from the same source…They are creatures that can’t hear the music of the spheres.”

    and so did Winston Churchill -

    “A fanatic is one who can’t change his mind and won’t change the subject.”

    and so did Thomas Henry Huxley -

    “Of all the senseless bable I have ever had the occasion to read, the demonstrations of these philosophers who undertake to tell us all about the nature of God would be the worst, if they were not surpassed by the still greater absurdities of the philosophers who try to prove that there is no God.”

    All three of these comments are in complete accord with the Prescribed Evolutionary Hypothesis. We are each a victim of our “prescribed” fate.

    All these internet forums are little more than venues for the presentation of ones largely innate, “prescribed” convictions concerning the fundamental question – WAS there a purpose in the creation of the universe or not? I do not regard that as subject to any form of debate or discussion and I have nothing but pity for those who feel otherwise. It is a monumental waste of time, at least of my time.

    “Militants on either side of intractable social issues would surely welcome a silver-bullet, a gene altering chemical, that would bring about conversions in their opponents.”
    William Wright, Born That Way, page 166

    Well folks, I hate to tell you but such a cure is not on the horizon and in my opinion never will be. In the meantime, rudeness will just have to do at least as far as I am concerned. If that is unacceptable there is always bannishment. I’ve been there – done that. At this point in my life I couldn’t care less.

    “The best offence is to be as offensive as possible.”
    John A. Davison

    “Carry the battle to them. Don’t let them bring it to you. Put them on the defensive. And don’t ever apologize for anything.”
    President Harry S. Truman

    Got that? Write that down!

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  94. Tom English wrote:

    There are people working in computational chemistry (e.g., simulation of protein folding), but proteins are no more the right level of granularity for evolutionary simulation than are transistors the right level for microprocessor simulation. If you truly know anything about microprocessor simulation, you know what I am saying.

    DaveScot responds:

    Your ignorance is showing again.

    http://www.ac.uma.es/hpca10/tutorials.html

    Dave,

    What makes you think Sunil Kakkar is talking about simulating a processor at the transistor level? I see nothing in the excerpt you quoted to suggest that. Could you point to a specific passage?

    I’ve done processor design myself, and I can assure you that we don’t simulate a processor at the transistor level. It would take forever and consume ungodly amounts of memory. As Kakkar notes, it already takes huge server farms to simulate a microprocessor. Why would we make the problem worse by simulating at the transistor level?

    Even the gate level is rarely used. Instead, the bulk of the simulation happens at the behavioral level. Formal verification tools are used to prove that the gate-level netlist is logically equivalent to the behavioral code, making it unnecessary to do extensive gate-level simulations.

  95. 95

    Karl Pfluger

    Nylonase is by definition an enzyme, not an organism. The suffix ase means enzyme. Get it? probably not. We now have organisms capable of degrading all kinds of things like diesel fuel. Does that make them new species? No and do you know why? It is because that is a reversible state that is why. To Darwimps like yourself any change is an evolutionary change. Right? Wrong. No evolutionary change from speciation on up to the formation of any of the higher categories has ever been shown to be reversible. If it is reversible it is not evolution. Let me know the next time you hear of a reptile having ever evolved into an amphibian or a bird into a dinosaur.

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  96. Todd,

    Both Avida and biological evolution are instances of Darwinian processes.

    In both, organisms are able to reproduce and pass their characteristics to their offspring. In both, random mutations arise which affect the organisms’ ability to survive and produce offspring. In both, selection pressures favor some varieties and penalize others.

    You seem to be hung up on the fact that the ‘universe’ and the selection pressures are artificial in Avida. But that is irrelevant to the question of whether Avida implements a Darwinian process.

    Again, the only ingredients required for a Darwinian process are reproduction, heritable random variation, and selection. Avida has all of these.

    Avida has shown that a Darwinian process is capable of producing irreducible complexity. Does that prove that biological evolution can also do so? Of course not. But what it does do (and this is extremely significant) is to show that nothing about IC is inherently unreachable by a Darwinian process.

    If you want to argue that biological evolution cannot produce IC, you can no longer simply say “Of course evolution can’t produce IC, because no mindless Darwinian process can ever produce IC.” You have to come up with specific reasons why biological evolution, unlike Avida, cannot generate IC.

    As I said before, this renders IC fairly useless as a concept, because we’re back to where we were before the concept of IC was introduced: we look at a structure and ask ourselves, “Could that have evolved?”

  97. 57. StephenA

    However, I predict that the effects of RM & NS if accurately simulated (even if only in a rather abstract sense) will not produce new information.

    It’s easy to show that random mutation can produce useful “information”. Let’s say that I have a simple gene sequence (sequence A): ACGGAC. Let’s say that I mutate it at position 4, so it looks like (sequence B): ACGCAC. Is this an information increase, decrease, or the same? If we say that random mutation can’t produce new information, sequence A must contain equal or more information than sequence B ( A >= B ). Now, let’s mutate sequence B and call the result sequence C. There are six positions, each of which can be changed to three other values (18 possible mutations). So, each of those mutations has a 1 in 18 chance of happening. One of those mutations will change position 4 back to “G”. In other words, it will change sequence B back into sequence A. If we say that random mutation can’t produce new information, then sequence B contains equal or more information than sequence C ( B >= C ). But, if sequence C is sequence A, then ( A >= B >= C and A = C ). Now let’s say that A to B was actually a decrease in information. That means B to C MUST be an increase in information. Thus, I’ve shown in a mathematically rigourous way, that mutation CAN increase information. Now, what about those other 17 mutations? If those other 17 mutations are bad, and only 1 is good, then mutations will, on average, be bad. That’s true. That’s where natural selection comes in. Natural Selection works to preserve the good mutation (organisms with the good mutation produces more offspring), and organisms with the 17 bad mutations reproduce more slowly or die before producing any offspring. If this is a sexually reproducing organism, that one mutation can actually spread across the entire species, giving every organism of the species a useful sequence. So, yes, RM+NS can produce new information.

    60. todd

    Avida proved IC can arise without intelligent cause? So then, what caused Avida?

    If you want to compare GAs to the real-world, you have to make a distiction between universe-ID and biological/gene-ID. The programmers act as universe-intelligent-designers in the case of Avida. The ID movement spends most of it’s time arguing that biological entities cannot arise out of the low-level laws of the universe (Behe says IC requires intelligent design but other systems can evolve through NDE, Dembski says nothing at all can arise out of NDE). This idea can be called “biological-ID”. Your argument “what caused Avida” is the equivalent of saying, “Sure, biological organisms can produce IC through NDE, but who made the laws of the universe and who made the atoms that living things are made out of?” That criticism doesn’t affect our argument that NDE can create IC. Your comment is only relevant if we were arguing that the universe was not created (universe-ID).

  98. Recip Bill:

    That said, your comment misses the point of my post, and Tom’s earlier lament. Selectionist causation exhibits substrate neutrality (e.g. is not limited to biological systems) independent of its origins.

    In responding to Karl, above, I mentioned that Fred Hoyle likened the common understanding of evolution to a feedback loop with a selection coefficient. If we live in a world that is orderly, one cannot help but suppose that rather than persisting in chaos, nature is going to find some sort of solution. The selection coefficient will be either positive or negative, and probability of a certain physical state will either move to 1 or 0. The term selectionist causation sounds almost mystical. Why not just say: “We find nature involves feedback loops”?

    But, all of this points to “order”. And the question becomes, Whence the order from chaos? Of course, Genesis has an answer. And even physics has a kind of answer; but that answer then leads somewhat inexorably to the “anthropic principle”. What is staggering about the universe is that we can understand it. Didn’t Einstein have something to say along those lines?

  99. Karl Pfluger:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity.

    Can you provide an example that is worth my perusing.? I’ve wasted much time being pointed by others in the direction of modeling examples that are clearly “sneaking in” information via fitness functions and such, so please give me an example, a model, that is worth investigating.

  100. 100

    Avida is a joke.

  101. DaveScot:

    The thing about abstraction, Tom, is that we can quickly take an abstract straight to absurdity.

    Agreed. But what do Turing machines, production systems, and the lambda calculus have to do with electronic digital computers? How can it be that such abstract systems, none of which has a real implementation, serve as model computers in a hugely successful theory of computation? There is no notion of system failure in any of them, so some here must say, for consistency with their prior remarks, that the theory has nothing to do real computation. This is balderdash, of course. The object of study is computation, and a key issue is how the properties of the model computer limit the possible computations.

    Incidentally, I don’t think Turing knew of the American implementation of an electronic digital computer when he conceived of his abstract computer. He was not modeling anyone’s computer, but he came up with what has proved to be the most useful of all model computers. By useful, I mean that it appears in more theorems in the theory of computation than does any other model computer.

    Something to contemplate is the an electronic digital computer is not “really” a Turing machine. It is quite literally a finite state machine, but the number of states is so large that the Turing machine, with its infinite storage (something that cannot exist), is a more useful model. I mention this to emphasize that a “false” model is sometimes a better choice than a “true” one. Modeling is more about utility than reality.

    In evolutionary computation we explore in theory and in simulation the properties of abstract evolution. There is considerable debate as to which salient features of biological evolution should be preserved in an abstract model of evolution. But everyone in my community agrees that radical abstraction is as important to understanding of evolution as it has been to understanding of computation. I think many here are mistakenly assuming that we are out to model biological entities. In fact, our models are our objects of study, just as Alan Turing’s abstract computer was his.

    That’s why we make sure that models whose results can cause loss of life and property (say like tornado prediction) are limited as much as practical in the abstact and tested as much as is practical against the real world process being modeled.

    The properties of computation are independent of the particulars of real-world computers. It makes just as much sense for us in evolutionary computation to seek abstract principles of evolution. ID theory is founded on the notion that information is central to life, so why is this such a radical concept?

  102. JasonTheGreek:

    Because a computer simulation purports to show something is possible doesn’t mean it’s possible or anywhere near possible in the real world.

    That should be noted first off.

    An evolutionary computation is an evolutionary process in the real world. If the computation does something, an evolutionary process in the real world has done it. If you cannot grasp this, then you are not grasping what much of evolutionary computation is about.

  103. Karl:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity.

    Nonsensical statements like this will quickly get you unselected from this blog. And the ID proponents who post here, know better than to buy such claptrap.

    Consider this a warning.

  104. Tom English

    An evolutionary computation is an evolutionary process in the real world. If the computation does something, an evolutionary process in the real world has done it. If you cannot grasp this, then you are not grasping what much of evolutionary computation is about.

    So by your logic a book of fairy tales are fairy tales in the real world.

    Ooooooooooooookay. Whatever you say.

  105. 105

    A computer simulation IS not the same thing as a real world event in nature where a trillion accidental mutations occur that give rise to new information, systems, body types, etc.

    If the computation has successfully produced an IC system, it’s automatically done it in the real world in biology? Hardly.

    We can’t even predict hurricane paths or basic weather patterns correctly half the time, but whatever takes place in a GA simulation equals what has happened in the real world, even if real world (biologically speaking) time frame equals millions of years? Far fetched doesn’t even begin to describe the scenario.

  106. 106

    I once created a picture in MS paint of 15 feet really buff humans carrying large laser fired weapons, battling off slimy green aliens. I wonder what city they’re currently in. I wonder if they know me as their creator? Will it be legal to keep them as pets? Or will they have rights as any other person?

  107. Tom

    A computer can be designed without knowledge of a Turing machine. I have no idea what point you meant to make with that. Is it one of those sweeping statements like “nothing in biology makes sense except in the light of evolution” where only the hand waving pundits actually believe it?

    The bottom line remains that none of the digital organism programs I’ ve seen are modeling anything in the real world which can be used to determine if the model is accurate.

  108. Ha Jason! I beat you to the punchline. I will concede that your laser-wielding super-humans beats my non-descript book of fairy tales. But you know what they say, the early bird catches the worm… :-)

  109. Tom English

    It occurs to me that biological computer simulations would not exist without intelligent designers afoot. Does that prove that real biologic organisms too would not exist without an intelligent designer afoot? According to your fantasy=reality logic that must be true.

  110. Tom English writes:

    There is considerable debate as to which salient features of biological evolution should be preserved in an abstract model of evolution.

    Gee, ya think? If you were actually modeling a biological system you could continually test your simulated results against the real world to see how close you were to having everything (or enough of everything) right. But because you’re just wool-gathering you can’t test your models. You can model pink unicorns if you want but until there’s a pink unicorn to be found in the real world to see if they behave like your model that would be wool-gathering too.

  111. BC:

    Thus, I’ve shown in a mathematically rigourous way, that mutation CAN increase information. Now, what about those other 17 mutations?

    This isn’t a mathimatical proof, but if you reread this sentance, you’ll see that there were two “mutations” that neither increased, nor decreased information. There are such things as “neutral mutations”.

  112. Karl

    Avida has shown that a Darwinian process is capable of producing irreducible complexity.

    Scott didn’t tell you why that was nonsense. Any complexity produced in a stepwise fashion by a computer is by definition not irreducible.

    Make us all sit up and take notice by getting a computer simulation to reveal a biochemical pathway, based on nothing but random mutation and simulated natural selection, where a flagellum can be produced. I remain quite unimpressed by Avida finding pathways where higher level operands are produced by trial and error tinkering with microcode. Even a blind squirrel finds an occasional acorn.

  113. Karl:

    If you want to argue that biological evolution cannot produce IC, you can no longer simply say “Of course evolution can’t produce IC, because no mindless Darwinian process can ever produce IC.” You have to come up with specific reasons why biological evolution, unlike Avida, cannot generate IC.”

    You seem to be suggesting–and quite earnestly it appears–that Avida is more “real” than nature, and that if Avida can produce IC, then this proves that nature ought to be able to do it as well, leaving IDers in the position of having to prove that nature can’t produce IC.

    As I say, you seem to be earnest, but this is kind of a wild understanding of the “real”. Why so strong a conviction? What makes you so sure of all of this?

  114. 114
    Reciprocating Bill

    PaV said:

    “The term selectionist causation sounds almost mystical. Why not just say: “We find nature involves feedback loops”?

    Because “feedback loop” does not capture the logic of selectionist causation. My boiler and thermostat are tied together in a negative feedback loop that causes my old home’s winter temp to cycle around a setting that I select – but there is no selectionist causation in play in that process.

    Perhaps it is more accurate to say that we find so many feedback loops in nature because natural selection builds so many of them.

    Nor is the operation of variation and selection the least mystical; it is easily described and modeled. In fact, the mindless simplicity of the process if probably why so many balk at the claim that it has built most of the complexity we see in biology.

  115. Tom English wrote:

    It makes just as much sense for us in evolutionary computation to seek abstract principles of evolution.

    Tom, your last post was written in a very open, forthright fashion for which you’re to be commended. Having said that, and sincerely meaning it, I, however, just don’t get what you mean by “abstract principles of evolution”. Can you list two or three? I think it was Karl, perhaps you, who said of a Darwininian process that it involves reproduction, variation, selection and an accrued benefit (something like that). Is that the kind of thing you’re talking about?

  116. BC

    Feedback loop does indeed capture the essence of rm+ns. RM generates trial balloons. NS is a feedback mechanism that informs the trial generator whether or not the last trial was a step in a positive direction.

    You wrote:

    Perhaps it is more accurate to say that we find so many feedback loops in nature because natural selection builds so many of them.

    It is dead accurate to call your statement a tautology.

  117. 117

    Natural selecton never had anything to do with creative evolution except to preserve the status quo long enough for extinction to make way for the next step in a determined goal-directed process, a process no longer going on. Neither natural nor artificial selection of micromutants will ever create a new species. All new species and the higher categories came from within the relatively few forms that were capable of producing descendents fundamentally different from themselves. The environment played no role in those events except possibly to act as a simple stimulus for an endogenous prescribed potential.

    Chance never had anything to do with either ontogeny or phylogeny just as Berg claimed 83 years ago. Some folks are just slow learners. They are known as Darwinians.

    Evolution is finished. Get used to it. Robert Broom did, Julian Huxley did, Pierre Grasse did and so did I.

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  118. 118
    Reciprocating Bill

    DS: “Feedback loop does indeed capture the essence of rm+ns. RM generates trial balloons. NS is a feedback mechanism that informs the trial generator whether or not the last trial was a step in a positive direction.”

    Cybernetic feedback entails homeostatic adjustment to a target by means of feedback. This is implied in your phrase “positive direction” (which entails the notion of a correct vs. incorrect direction) and may be exemplified by the feedback that directs a guided missile to its target my means of correction.

    However, there are no preset targets or “directions” guiding natural selection, only selection pressures that are strictly contingent and local. So the analogy to cybernetic feedback and course correction breaks down.

  119. It does not matter one whit whether the simulations fit biological observations. –Tom English

    Just the other day I was wondering if you supported this idea. I think that about says it all…

    Karl:

    Gil, is it really that painful to admit that you were wrong? Even if it was pointed out by (gasp) Darwinists?

    Hello??? I distinctly remember pointing out to you that he was likely joking.

    The significance of Avida, in particular, is as an example of how a Darwinian mechanism can produce irreducible complexity. Honest critics can no longer claim that NDE cannot in principle produce IC. They must show that a particular IC structure cannot be produced because of the particular local genomic and fitness landscapes.

    This is a blow to the many ID advocates who saw the existence of IC as proof of design.

    This was probably only a “blow” to those new to ID. While, as you agree, Avida does not simulate real-life biology, it does show that an IC system can evolve in tightly constrained environments under certain conditions of replication, variation, and selection. This is important, as some ID proponents seem to regard “irreducibly complex” as tantamount to “unevolvable in principle” WHICH IS NOT TRUE.

    Fortunately you conceed this:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity. Does that prove that biological evolution can also do so? Of course not.

    But then you make another common goof.

    But what it does do (and this is extremely significant) is to show that nothing about IC is inherently unreachable by a Darwinian process.

    IC primarily deals with DIRECT Darwinian Pathways; always has. Behe has always stated that INDIRECT Darwinian pathways are another matter. And we’re talking about Darwinian processes in Biological Reality, which is not not nearly as constrained…

    Anyway, in the “The Evolutionary Origin of Complex Features,” published in Nature in 2003 by Lenski, the selective forces that have 100% probability affixed are those for various simple binary arithmetic functions, which are ultimately used to build the “equals” (EQU) function, and for the EQU function itself. What’s more, the more complex the function, the greater the reward given to the digital organisms for it. There is no analogy for such selective forces in nature. Nature doesn’t care whether something is more or less functionally complex, it only cares whether it can survive in a particular environment. And what happens when no step-by-step rewards are given for functional complexity? An article on Avida in Discover magazine last year (Feb. 2005) stated, “when the researchers took away rewards for simpler operations, the organisms never evolved an equals program.” By building rewards into the system — ie providing a highly constrained fitness function — the programmers gave the system a purpose. Hence its creative power:

    dynamics.org/Altenberg/FILES/LeeEEGP.pdf

    “Both the regression and the search bias terms require the transmission function to have ‘knowledge’ about the fitness function. Under random search, the expected value of both these terms would be zero. Some knowledge of the fitness function must be incorporated in the transmission function for the expected value of these terms to be positive. It is this knowledge — whether incorporated explicitly or implicitly — that is the source of power in genetic algorithms.”

    But let’s break the discussion down even further. I think of fitness functions as a “funnel” that must be properly constrained in order to provide results. The design of this funnel must be balanced; it can either be too constrained or not constrained enough. The programmer’s goal is to find a balnace which the stated goal can be reached. In my opinion, there really isn’t such a thing as a “generic” GA program which can solve anything thrown at it–each program has to be designed to fit a purpose.

    Let’s say I have a Chess GA program. Assume abiogenesis and start off with an AI script that recognizes the environment (the chess board) and knows how to move the pieces (survive in the environment) and has a certain basic strategy. At startup this script is duplicated many times without any mutations. The scripting system making up simulated life cannot be abnormally simplistic, like with AVIDA, and the scripts must have the ability to replicate themselves. The functionality for replication must not be protected. The replication process is capable of producing AI scripts that no longer recognize how to play certain elements of chess or they cannot compile at all (death). As in, replication is not limited to producing fully functional chess strategies. Unfortunately the rules of chess are static so the environment doesn’t change.

    Now let’s say I applied a very broad constraint in my fitness function: if the script still retains the ability to compile (aka play chess) then it survives. “Old” scripts eventually die. “Lower lifeforms” are afforded a niche where they thrive instead of arbitrarily being eliminated in favor of “higher lifeforms” based upon a constrained process. As in, winners of games get duplicated more often, and with a larger population comes more processor time for this subsection of the population, but losers are not necessarily eliminated in an arbitrary fashion. They just need to be capable of basic survival. Thus a group of “winners” may eventually be modified to the point they start losing horribly or they split off.

    That’s it.

    Now let’s say I applied a very narrow constraint in my fitness function: the script must not only compile but it must win its game in a small number of moves in order to survive. This is tantamount to the environment being overly hostile.

    I wonder what you could expect from these approaches.

  120. BC

    However, there are no preset targets or “directions” guiding natural selection

    Of course there is a target. Does the term “differential reproduction” ring any bells?

  121. This isn’t a mathimatical proof, but if you reread this sentance, you’ll see that there were two “mutations” that neither increased, nor decreased information. There are such things as “neutral mutations”.

    Yes, but look at it again. Are there ANY mutations that cause a reduction in information from sequence A (whatever it happens to be) to sequence B? You can argue that my example involved neutral mutations, but you have to argue that ALL possible mutations to sequence A are neutral in order to debunk my example showing that mutations can produce information. If there are ANY sitatuations where A->B involves a decrease in information, then B->C must involve an increase. Essentially, you have to argue that all mutations to all possible sequences are neutral. If any mutation to any sequence decreases information, then, by virtue of the fact that all mutations can be reversed within a certain probability, some mutations must increase information.

    To DaveScot:
    You keep quoting me on things I didn’t say. Those are actually quotes from Karl and Bill.

  122. 122
    Reciprocating Bill

    DS: “Of course there is a target. Does the term “differential reproduction” ring any bells?”

    What I intended is that there are no targets in the sense of pre-envisioned end states – particular organisms, features/functions of organisms, etc., relative to which the process can take a positive (or negative) direction. Not in the standard scheme of variation and selection.

  123. Scott:

    Karl:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity.

    Nonsensical statements like this will quickly get you unselected from this blog. And the ID proponents who post here, know better than to buy such claptrap.

    Consider this a warning.
    Well, I don’t know it to be nonsensical claptrap, and I just reread Bill Dembski’s criticism of Avida in his expert rebuttal for the Kitzmiller case (pp. 18-20, available at designinference.com). If you do the same, you will see that Bill never says that Avida did not give rise to irreducibly complex structures. You can count on it that if Avida had not, he would have said so. Instead he argues that Avida is not tied to biological reality. That is, Avida’s evolution of irreducible complexity implies nothing about biological evolution.

    If you have read this long thread, then you know that Karl and I have emphasized that evolution can be studied in the abstract, apart from biological systems. Bill Dembski has tacitly admitted that Avida evolved irreducibly complex structures, and Karl himself warned against extending the Avida results to biological evolution:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity. Does that prove that biological evolution can also do so? Of course not.

    I don’t see the problem with this.

  124. 124

    DS:
    Make us all sit up and take notice by getting a computer simulation to reveal a biochemical pathway, based on nothing but random mutation and simulated natural selection, where a flagellum can be produced.

    OK, it is no flagellum, but EC systems are being used to discover peptides that would be patentable, if a human had found them.

    http://www.genetic-programming.....ptides.pdf

    Now these model results can and probably will be synthesized and tested against the real world of real bacteria.

    While these peptides were less than 30 units long, you’re arguing against Moore’s Law if you don’t expect longer results in the future. This is already within an order of magnitude of the 100 unit protein of Dembski’s analysis.

    The GA was compared in this study to other chance-based search procedures. It was found to be much more efficient. The take-away point is that not all search procedures are equally efficient for this task (which is not in conflict with the No Free Lunch Theorem).

    The GA did not have any knowledge of biology. That was encapsulated in a neural network that repersented the fitness landscape. Based on the description in the paper, the GA could have been used equally well for optimizing air drop parameters.

  125. Karl: “Avida has shown that a Darwinian process is capable of producing irreducible complexity.”

    Tom English: “Well, I don’t know it to be nonsensical claptrap, and I just reread Bill Dembski’s criticism of Avida in his expert rebuttal for the Kitzmiller case (pp. 18-20, available at designinference.com). If you do the same, you will see that Bill never says that Avida did not give rise to irreducibly complex structures.”

    It’s claptrap. The Avida algorithm used in the Lenski paper is not a Darwinian process. Darwinian processes are blind/dumb/purposeless.

    Tom English (31): “We often see here claims of what random mutation and natural selection cannot do, and evolutionary computation puts the lie to those claims.”

    Now you are calling ID a lie?

    Provide an example of such an evolutionary computation program. One that demonstrates that random mutation & natural selection (alone) can produce CSI.

  126. Darwinian processes are blind/dumb/purposeless.

    While, yes, the search is guided by the simulation parameters the search pathways are not predefined and the end goals are more generalized and not explicit. So AVIDA is blind/dumb to a certain extent. But I agree that without intelligently setting up the model it wouldn’t produce anything too interesting.

  127. 127

    PaV,

    There’s much to disagree with in your last few comments, but let me concentrate on your most egregious statements.

    You wrote:

    Karl, you seem to be missing the point that if the proteins don’t fold properly, then biochemical function comes to an end. I hope you understand that. And, of course, NS can’t act on something that is not biologically active. Hence, the massive amount of computer power required to search out the proper solution to folding is an undertaking that “random mutation and NS” would in some way have to deal with.

    PaV,
    Here’s the source of your confusion: you think that NDE must “understand” or “compute” protein folding in order to find proteins that fold “properly” and enhance the fitness of the organism possessing them. That is a complete misunderstanding of how NDE works.

    NDE will “try” any protein that is produced by a mutation, without regard for how it folds (the mutations are random, remember?). The proteins that happen to fold “properly” will enhance fitness and will therefore be retained by selection. They’re not retained because of how they fold; indeed, NDE doesn’t “know” how they fold. It keeps them because, and only because, they enhance fitness.

    It’s analogous to the use of soap. Soap was invented long before anyone knew the chemistry behind it. How? People found that a certain combination of ingredients produced a substance that was an effective cleaning agent. It didn’t matter how it worked; the fact that it did work motivated people to keep making it and using it.

    In exactly the same way, NDE doesn’t need to know why a protein works (which depends, among other things, on how it folds). It simply keeps the ones that work and discards the ones that don’t. Folding computations are superfluous in such a system.

    Dave,

    If Tom is a mathematician by training, and you, like me, an engineer, then he’ll never understand our practical side, and we’ll probably never understand the abstract side.

    Here’s how I see it: if you love equations, you become a mathematician; if want to explore equations, you become a physicist; if you want to use equations to do something, you become an engineer. Does that pretty much size things up?

    How sad to imagine that a mathematician’s training precludes one from understanding or appreciating practicalities, or that an engineer’s training precludes one from understanding abstraction. I’m happy to report that it’s not that way at all in the real world.

    Abstraction is part and parcel of my work as a computer engineer. All of the following are useful abstractions in the computer world:

    1. Programming languages.
    2. Instruction sets.
    3. Virtual memory.
    4. Java virtual machines.
    5. Virtual servers.
    6. Object interfaces.
    7. APIs.
    8. Standard cell libraries.
    9. Filesystems.
    …and so on.

    And what sort of an engineering drudge would simply plug numbers into equations with no curiosity for why they work?

    You seem to be suggesting–and quite earnestly it appears–that Avida is more “real” than nature, and that if Avida can produce IC, then this proves that nature ought to be able to do it as well, leaving IDers in the position of having to prove that nature can’t produce IC.

    I’d be curious to know where you got that impression, for I’ve said nothing of the kind.

    Here’s what I did say:

    Again, the only ingredients required for a Darwinian process are reproduction, heritable random variation, and selection. Avida has all of these.

    Avida has shown that a Darwinian process is capable of producing irreducible complexity. Does that prove that biological evolution can also do so? Of course not. But what it does do (and this is extremely significant) is to show that nothing about IC is inherently unreachable by a Darwinian process.

    If you want to argue that biological evolution cannot produce IC, you can no longer simply say “Of course evolution can’t produce IC, because no mindless Darwinian process can ever produce IC.” You have to come up with specific reasons why biological evolution, unlike Avida, cannot generate IC.

    To reiterate:
    1. Avida fits the definition of a Darwinian process.
    2. Avida has been shown to produce irreducible complexity.
    3. Therefore, to show that a system in nature is unreachable by Darwinian evolution, it is not sufficient to show that the system is irreducibly complex, unless you can explain why evolution, a Darwinian process, is incapable of producing IC, while Avida, another Darwinian process, is quite capable of doing so.

  128. 128

    I wrote:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity.

    Scott wrote:

    Nonsensical statements like this will quickly get you unselected from this blog. And the ID proponents who post here, know better than to buy such claptrap.

    Consider this a warning.

    Scott,

    If you have an argument to make, make it. Otherwise, feel free to take your bluster and bold fonts elsewhere.

    I am prepared to justify my assertion. Are you able to do the same?

  129. It’s claptrap. The Avida algorithm used in the Lenski paper is not a Darwinian process. Darwinian processes are blind/dumb/purposeless.

    The reason Avida is compared to darwinian process, even though Avida is given an explicit goal whereas Darwinian ones have no explicit goal has to do with the structure of Genetic Algorithms. What happens in Genetic Algorithms is this:
    Start with some organisms. There is some mutation. The mutated organisms are scored according to how well they accomplish the (predefined) goal. The ones which don’t score well are killed (or at least not allowed to reproduce). The ones that do best are allowed to have the most children. Repeat the process. The end result is that each generation accumulates a better and better genome for survival (where survival is based on accomplishing the predefined goal). Remember: the ONLY role the goal plays in GAs is to create a survival differential.

    Naturalistic Evolution doesn’t employ a goal. Instead, what happens is this: Start with some organisms. There is some mutation. The mutated organisms that survive in their environment (sometimes because they have the best genes) tend to produce the most offspring, and the ones with bad genes and genetic defects die off (producing little or no offspring). The end result is that each generation accumulates a better and better genome for surviving (where survival is based thriving in their environment).

    Now, people will get hung up on the fact that Genetic Algorithms use a goal. But, I don’t think it’s a very big deal. GAs use the goal to provide a survival differential. The survival differential gives direction to the genome’s evolution (in this case, towards accomplishing the goal). Natural Selection, by the competition for food, mates, survival, has it’s own built-in survival differential which gives directionality to the genome’s evolution. In both cases, genomes evolve in whichever direction favors survival. I’ll say that again because it’s an important point: in [genetic algorithms and in real-world evolution], genomes evolve in whichever direction favors survival. Now, people complain that “evolution has no goal!” That’s true, naturalistic evolution has no externally-defined goal, but that’s different from saying that it has no direction – the direction of genome evolution is towards better survival.

    So, using the “goal”/”no goal” complaint about Genetic Algorithms sounds like splitting hairs to evolutionists. Sure, GAs don’t mirror competition and environments, but they do mirror mutation, genomes, and survival differentials which give rise to directionality in genome evolution. GAs mirror traits x,y, and z of real-world evolution (where x, y, and z are some of the major hangups over the evolutionary mechanism), but because they don’t mirror traits u and w, people want to throw them out entirely. Isn’t it obvious that GAs do legitimize *some* of features of naturalistic evolution – features which some people have erroneous hangups about? GAs do legitimately illuminate features of real-world evolution. Don’t throw them out because of some insistence that they don’t perfectly mirror everything or because you misunderstand them.

  130. Tom English

    An evolutionary computation is an evolutionary process in the real world. If the computation does something, an evolutionary process in the real world has done it. If you cannot grasp this, then you are not grasping what much of evolutionary computation is about.

    So by your logic a book of fairy tales are fairy tales in the real world.

    Is this argumentum yo mama?

  131. Jason:

    We can’t even predict hurricane paths or basic weather patterns correctly half the time, but whatever takes place in a GA simulation equals what has happened in the real world, even if real world (biologically speaking) time frame equals millions of years? Far fetched doesn’t even begin to describe the scenario.

    Apparently you have not read the thread. I commented:

    The simulations are qualitatively correct, but not quantitatively.

    Or have I made a mistake in assuming that you know the difference between qualitatively correct and quantitatively correct preditions? Just ask if you do not understand the distinction. Under the reasonable assumption that hurricanes are chaotic, it is impossible to predict their tracks over more than the short term. This does not mean that we cannot model the general behavior of hurricanes. Similarly, we cannot predict exactly how evolution will proceed, but we can conceivably capture aspects of how it works. To understand a system is not necessarily to predict its long-term behavior, but to model its general behavior.

  132. Gil, Karl wrote:

    Gil would have us believe that after three days of being criticized for confusing the simulator with the simulated, and after writing a new post which evades the issue altogether but attempts to establish his credentials, that he is just now getting around to mentioning that his first post was “sarcasm” which noone should take seriously.

    This certainly had crossed my mind.

    The results of my computer simulations, and their integration into the mechanics of smart parachutes, are now being used to resupply U.S. forces in Afghanistan.

    Google gives me 386 hits for the Affordable Guided Airdrop System (AGAS), but none with an instance of “Dodgen.”

    Browsing some papers, it seems that your firm has used simulations to develop the technology, but that the deployed AGAS system does not make use of simulation. One paper indicates that accurate delivery of the load is a matter of 1) determining atmospheric characteristics prior to the drop and planning a descent trajectory, 2) making the drop at precisely the right point, and 3) controlling actuators that pull on the risers, subject to constraints on fuel consumption. The controller could in principle compute and re-compute simulations of the descent to plan trajectory corrections, but I would expect a design team to choose a relatively simple and robust controller. Have I got this wrong?

    Did I do all of this highly sophisticated mathematical and software simulation without ever having “grasped the nature of a simulation model”?

    Hmm. I think you do not understand how concrete your practitioner’s perspective is — all particulars and no principles. “Well, golly, we had to model the ‘chute down to the stitches in the seams to make it land in a certain place, so the best way to gain an understanding of evolution is to model right down to the nuclear membrane. Yup. It’s obvious.” The objectives of controlling an entity and understanding an entity often do not lead to the same sorts of models. For instance, good control can often be achieved with a good statistical model, but statistical models, unless structured in certain ways, do not make good scientific models. (I understand that you must be using first-principles physical models.)

    Test drops are expensive, and I am sure simulation was essential to economical development of the system. But in what sense does your simulation get the load onto the target? And how much of the first-principles modeling did you do yourself? Generally a software engineer involved in such a project would implement models developed by others. Is the controller model-based? Did you develop the controller? Did you develop its model?

  133. DaveScot:

    Tom English write “It does not matter one whit whether the simulations fit biological observations.”

    A damn good thing too because if you tried, they wouldn’t.

    I have read a great deal of the ID literature, and somehow attempts to fit ID-theoretic predictions (as opposed to “postdictions”) to the biological observations have eluded me. Would you please point me to appropriate references?

  134. DaveScot:

    Tom English writes: “There are people working in computational chemistry (e.g., simulation of protein folding), but proteins are no more the right level of granularity for evolutionary simulation than are transistors the right level for microprocessor simulation. If you truly know anything about microprocessor simulation, you know what I am saying.”

    Your ignorance is showing again.

    Considering that a one million gate design requires 5-8 verification engineers, the task of verification is dominating project costs.

    I would have thought a Dell Millionaire would know the difference between a transistor and a gate. Furthermore, I would have thought he might know that verification of a large microprocessor design is not accomplished by simulation of the entire microprocessor at the gate level, let alone the transistor level.

  135. BC

    The problem with Avida isn’t that it fails to implement a genetic algorithm more or less along the lines of organic rm+ns. The problem is that it didn’t create anything non-trivial. An EQU instruction cobbled together out of microcode is trivial. Decades ago we were using GA (but we didn’t call it that) to create optimized printed circuit board layouts connecting thousands of points in 3 dimensions. And even that I wouldn’t have the gall to say was anything remotely approaching the complexity of even the simplest bacteria. Avida is child’s play in more ways than one.

  136. I’m curious, do any evo-sims model anything that has been observed directly?

    They have for many years. Consider J. L. Crosby (1967), “Computers in the Study of Evolution,” Sci. Prog. Oxf., Vol. 55, pp. 279-292 (not the earliest example I could give you, but a striking one). Among other biological phenomena, he considers the 45 alleles of a single gene in a population of just 500 plants. Why are there so many alleles? Wright, Fisher, and Moran had given the question different mathematical treatments, and had never agreed on the answer. Crosby obtained a convincing answer through simulation.

    Oe. organensis is a long-lived perennial, and there have been insufficient generations since the catastrophe for this number [of alleles] to have fallen very much. The mathematical arguments may have been very interesting, but they had little relevance to the biology of the problem. Wright had also attempted to consider spatial distribution [as had the computational model]; this was necessarily rudimentary because of the limitations of mathematics, and had led him to a conclusion quite different from that derived from the more realistic computer model.

    The best place to find Crosby’s paper is in David B. Fogel (ed.), Evolutionary Computation: The Fossil Record. It may interest some of you that Prof. Bob Marks, a friend of Bill Dembski at Baylor, and I both served as technical reviewers of the volume.

  137. DaveScot:

    We already know that trial and error can find solutions to problems.

    Define trial-and-error.

  138. Actually Tom, they’re mosfets if you want to get techincal about it, and there are two mosfets in the most basic logic gate (inverter). A NAND gate requires four mosfets. Even assistant professors of computer science at Texas Tech should know that all other logic gates can be constructed from NAND gates.

    What assistant computer science professors at Texas Tech probably don’t know is that microprocessor simulations, prior to creating the first mask, absolutely have to model at the gate level because of something called propagation delay which can result in something called race conditions. I was whipping out the fuse programming for programmable logic arrays while you were still in high school and I didn’t have the benefit of simulators way back then. Prop delays had to be calculated by hand to eliminate race conditions just as they had to be when designing with discrete TTL logic which I did for many years before logic arrays were invented. In 1991 I implimented the core logic for an 80486 motherboard in 19 discrete PALs with nothing but PALASM and hardware design genius.

    Google it in all the spare time you have now that you’ve been booted off Uncommon Descent for your nasty habit of getting personal.

  139. Tom asks that I define “trial and error”.

    No problem. Since this is about genetic algorithms and biological evolution I’ll just quote wiki’s examples of trial and error processes. It says it all.

    Examples
    Trial and error has traditionally been the main method of finding new drugs, such as antibiotics. Chemists simply try chemicals at random until they find one with the desired effect.

    The scientific method can be regarded as containing an element of trial and error in its formulation and testing of hypotheses. Also compare genetic algorithms, simulated annealing and reinforcement learning – all varieties for search which apply the basic idea of trial and error.

    Biological Evolution is also a form of trial and error. Random mutations and sexual genetic variations can be viewed as trials and poor reproductive fitness as the error. Thus after a long time ‘knowledge’ of well-adapted genomes accumulates simply by virtue of them being able to reproduce.

    Thanks for playing, Tom. There’s a lovely consolation prize waiting as you exit stage left. It’s an Avida generated EQU instruction autographed by fellow chance worshipper/ professor-in-denial Richard Dawkins.

  140. DvK

    While these peptides were less than 30 units long, you’re arguing against Moore’s Law if you don’t expect longer results in the future. This is already within an order of magnitude of the 100 unit protein of Dembski’s analysis.

    Great! Now we’re talking. Wake me up when it produces 40 odd proteins, millions of which are assembled in a precise manner to generate a flagellum.

    Not all things are scalable, DvK. Just because I can pile rocks to the roof of my house doesn’t mean I can eventually pile them to the moon. You Darwinists have a penchant for demonstrating the simple and extrapolating to the complex like everything just scales up without limits. Those of who know things don’t always scale like that need better proof of concept than “Poof! Chance did it.”

  141. 141

    DaveScot wrote:

    Actually Tom, they’re mosfets if you want to get techincal about it…

    Dave,
    A MOSFET is a transistor. That’s what the ‘T’ in ‘MOSFET’ stands for.

    …and there are two mosfets in the most basic logic gate (inverter). A NAND gate requires four mosfets.

    That’s only true for CMOS logic.

    Even assistant professors of computer science at Texas Tech should know that all other logic gates can be constructed from NAND gates.

    So? The fact that gates contain transistors, and that NAND gates can be used to form other gates, doesn’t mean that you have to model a microprocessor at the transistor level.

    …microprocessor simulations, prior to creating the first mask, absolutely have to model at the gate level because of something called propagation delay which can result in something called race conditions.

    Yes, timing analysis (and a tiny fraction of simulation) is done at the gate level, not at the transistor level. As Tom and I have been saying, it doesn’t make sense to model a microprocessor at the transistor level.

    The only times you would model at the transistor level would be when characterizing your cell library, your macros, and the occasional fully custom speed path.

  142. BC: “So, using the “goal”/”no goal” complaint about Genetic Algorithms sounds like splitting hairs to evolutionists.”

    I’m an evolutionist. (I think you meant Darwinists?) It’s not splitting hairs. The goal is the power supply for GA’s.

    Both the regression and the search bias terms require the transmission function to have ‘knowledge’ about the fitness function. Under random search, the expected value of both these terms would be zero. Some knowledge of the fitness function must be incorporated in the transmission function for the expected value of these terms to be positive. It is this knowledge — whether incorporated explicitly or implicitly — that is the source of power in genetic algorithms. (Altenberg 1994)

    Karl Pfluger (128): “Again, the only ingredients required for a Darwinian process are reproduction, heritable random variation, and selection. Avida has all of these.”

    You’re misusing the word “Darwinian.” Intelligence must not be involved in deciding what survives for the process to be Darwinian. Why did you leave off the word “natural” from selection? By your defintion, dog breeding would be a Darwinian process.

  143. Karl

    Not only is gate level verification the most robust it’s not even good enough anymore. My emphasis below.

    http://www.techonline.com/comm.....icle/21478

    At increasingly dense nanometer-process technologies, electrical and physical phenomena exhibit significantly greater effect on circuit performance. In fact, interconnect delay, coupling capacitance and power-network IR voltage drop already dominate gate delay at the 130 nm process node and threaten to overwhelm gate delay in emerging 90 nm technologies (Figure 5). Nevertheless, conventional signoff methodologies rely on traditional gate-level verification tools that are unable to accurately detect these nanometer effects. Traditional signoff methods ignore the very effects that cause nanometer designs to fail. Because of this gap between analysis requirements and traditional verification capabilities, the semiconductor industry, on the average, needs two or more silicon re-spins for over 50% of advanced designs, according to research firm Collett International.

  144. Scott,

    If you have an argument to make, make it. Otherwise, feel free to take your bluster and bold fonts elsewhere.

    I am prepared to justify my assertion. Are you able to do the same?

    Karl: As a moderator here, it is my responsibility to call people out on their empty and unwarranted assertions and just-so stories. Especially the tired ones that have been exposed time and time again. Therefore, it is unlikely that I’ll be going away any time soon. You, on the other hand, will likely become fast extinct from this blog unless you can legitimately support silly comments like the one I quoted above.

    Now, your challenge is to demonstrate how Avida proves that blind, comatose, natural mechanisms can build highly complex, specified, cellular machinery which requires all of it’s components simultaneously to function.

    DaveScot put it in the proper perspective:

    Scott didn’t tell you why that was nonsense. Any complexity produced in a stepwise fashion by a computer is by definition not irreducible.

    Make us all sit up and take notice by getting a computer simulation to reveal a biochemical pathway, based on nothing but random mutation and simulated natural selection, where a flagellum can be produced. I remain quite unimpressed by Avida finding pathways where higher level operands are produced by trial and error tinkering with microcode. Even a blind squirrel finds an occasional acorn.

  145. 145

    Avida is a joke. – ha ha ha.

    “A past evolution is undeniable, a present evolution undemonstrable.”

  146. I’m an evolutionist. (I think you meant Darwinists?) It’s not splitting hairs. The goal is the power supply for GA’s.

    Quoting Altenberg as he makes an analogy proves nothing. My description is a more detailed explanation of what Altenberg says; they aren’t incompatible. When he says, “It is this knowledge… that is the source of power in genetic algorithms.”, he is right – the “knowledge” or “goal” is used to provide a selection differential. Without a selection differential, nothing happens in GAs or in real-world evolution. I already described the role goals play in genetic algorithms, and why natural selection acts as a perfectly good replacement for the predefined goal in genetic algorithms. I’m not going to repeat myself ad nauseum. You can read what I wrote again if you want, but if you don’t understand it, you’re not going to understand it.

  147. 147

    DS:
    Not all things are scalable, DvK. Just because I can pile rocks to the roof of my house doesn’t mean I can eventually pile them to the moon.

    Absolutely agree. A single data point tells us nothing about how this kind of process will scale. But it does serve as an existence proof. A GA discovered real, biologically useful peptide sequences by modeling biology and then completeing the circuit by testing the sequences against live bacteria. All the talk about Avida is irrelevant to this result.

  148. Actually, a model of a supernova can be performed on a Mac, if you accept a “paradigm shift”.

    http://www.holoscience.com/new.....e=re6qxnz1

    EXCERPT

    How does a star explode? The conventional “implosion followed by explosion” model has many shortcomings. An electric star, on the other hand, has internal charge separation which can power a star-wide, expulsive lightning-flash. The star relieves electrical stress by fissioning or blowing off charged matter. A star also has electromagnetic energy stored in an equatorial current ring. Matter is ejected equatorially by discharges between the current ring and the star. Our own Sun does it regularly on a small scale. However, if the stored energy reaches some critical value it may be released in the form of a bipolar discharge, or ejection of matter, along the rotational axis. The remnant of SN 1987A shows such a bipolar ejection in the form of two blobs of matter (inside the bright ring).

    A companion star may initiate a stellar discharge that results in fissioning. It is significant in this context that an unexplained and much-disputed “Mystery Spot” appeared along the line joining the two blobs and was seen briefly a couple of months after the explosion and then quickly faded from sight. The spot was too far away to have been ejected by the supernova and its brightness (10% of the supernova) was too great to be explained by reflection off a cloud of matter. It may have been a faint companion that triggered, or was a part of the circuit of the electrical supernova discharge.

    The bright beaded ring shows that matter has been ejected equatorially. However, the ring is not expanding. The other two fainter rings are also arranged above and below the star on the same axis and show similar but fainter “bright spots”.

    Conventionally, a shock wave from an exploding star should show spherical, rather than axial, symmetry. And there is no particular reason why the shock front should form a ring of bright spots. We should expect some visible indication of the spherical cavity.

    Stars are an electrical plasma discharge phenomenon. Electrical energy produces heavy elements near the surface of all stars. The energy is transferred over cosmic distances via Birkeland current transmission lines. The energy may be released gradually or stored in a stellar circuit and unleashed catastrophically. It is these cosmic circuits that are the energy source for the supernova explosion – not the star. That is why the energy output of some nebulae exceeds that available from the central star. See Shocks from Eta Carina.

    The electrical energy released in supernova fissioning is prodigious, so it is no surprise that there is an abundance of heavy elements and neutrinos dispersed into space by the stellar “lightning flash.”

    The crucial evidence for the electrical nature of supernovae must come from experiment and observation.

    Anthony L. Peratt, Fellow, IEEE, published a seminal paper in the IEEE Transactions on Plasma Science, Vol. 31, No. 6, December 2003. It was titled “Characteristics for the Occurrence of a High-Current, Z-Pinch Aurora as Recorded in Antiquity.”

    In it he explained the unusual characteristics of a high-energy plasma discharge. He discussed mega-ampere particle beams and showed their characteristic 56- and 28-fold symmetry. He wrote: “A solid beam of charged particles tends to form hollow cylinders that may then filament into individual currents. When observed from below, the pattern consists of circles, circular rings of bright spots, and intense electrical discharge streamers connecting the inner structure to the outer structure.”

    Initially, the particle beam was cylindrical but after traveling the 15 cm has filamented. In the sub-gigaampere range, the maximum number of self-pinched filaments allowed before the cylindrical magnetic field will no longer split into “islands” for the parameters above has been found to be 56.

    These results verify that individual current filaments were maintained by their azimuthal self-magnetic fields, a property lost by increasing the number of electrical current filaments. The scaling is constant for a given hollow beam thickness, from microampere beams to multi-megaampere beams and beam diameters of millimeters to thousands of kilometers.”

    This scaling of plasma phenomena has been extended to more than 14 orders of magnitude, so the bright ring of supernova 1987A can be considered as a stellar scale “witness plate” with the equatorial ejecta sheet acting as the “plate” for the otherwise invisible axial Birkeland currents.

    Peratt adds, “Because the electrical current-carrying filaments are parallel, they attract via the Biot-Savart force law, in pairs but sometimes three. This reduces the 56 filaments over time to 28 filaments, hence the 56 and 28 fold symmetry patterns. In actuality, during the pairing, any number of filaments less than 56 may be recorded as pairing is not synchronized to occur uniformly. However, there are “temporarily stable” (longer state durations) at 42, 35, 28, 14, 7, and 4 filaments. Each pair formation is a vortex that becomes increasingly complex.”

    The images of SN 1987A shows the Birkeland currents around the star have paired to a number close to 28. The bright spots show a tendency toward pairing and groups of three. This witness plate model explains why the glowing ring is so nearly circular and is expanding very slowly – unlike a shock front. It is more like a cloud at night moving through the beams of a ring of searchlights.

    If the equatorial ring shows the Birkeland currents in the outer sheath of an axial plasma current column, then the supernova outburst is the result of a cosmic z-pinch in the central column, focused on the central star. It is important to note that the z-pinch naturally takes the ubiquitous hourglass shape of planetary nebulae. No special conditions and mysteriously conjured magnetic fields are required.

    It is also the shape of SN1987A with its three rings. It will be instructive for plasma cosmologists to watch closely the development of SN1987A’s “necklace of incandescent diamonds.” I do not expect the ring to grow as a shock-wave-produced ring would be expected to. Some bright spots may be seen to rotate about each other and to merge. It is an opportunity more rare and valuable than a diamond to be able to verify the electric discharge nature of a supernova. Supernova 1987A will be illuminating the future of plasma cosmology!

    Plasma cosmologists have not ignored the pulsar, sometimes found in a supernova remnant. Healy and Peratt in “Radiation Properties of Pulsar Magnetospheres: Observation, Theory and Experiment,” concluded, “the source of the radiation energy may not be contained within the pulsar, but may instead derive either from the pulsar’s interaction with its environment or by energy delivered by an external circuit…. [O]ur results support the ‘planetary magnetosphere’ view, where the extent of the magnetosphere, not emission points on a rotating surface, determines the pulsar emission.”

    In other words, we do not require a hypothetical super-condensed object to form a pulsar. A normal stellar remnant undergoing periodic discharges will suffice. Plasma cosmology has the virtue of not requiring neutron stars or black holes to explain compact sources of radiation.

    This completes the electrical sketch of supernova 1987A.

  149. 149

    I wrote:

    Again, the only ingredients required for a Darwinian process are reproduction, heritable random variation, and selection. Avida has all of these.

    j wrote:

    You’re misusing the word “Darwinian.” Intelligence must not be involved in deciding what survives for the process to be Darwinian.

    I disagree, and merriam-webster.com backs me up:

    Darwinian

    2 : of, relating to, or being a competitive environment or situation in which only the fittest persons or organizations prosper.

    Nothing about that definition stipulates that intelligence cannot be involved in setting the criteria for ‘fittest’.

    But don’t get hung up on the word ‘Darwinian’ — it’s just a label, after all.

    The real issue is that you (and many other IDers) believe that intelligence is sneaking into Avida via the fitness function, because the fitness function is designed, and that this renders Avida fundamentally dissimilar to biological evolution.

    I’m a bit baffled by this objection. Of course the fitness function influences the behavior of a Darwinian process. If it didn’t have an effect, it wouldn’t be considered a necessary ingredient, would it?

    So yes, a fitness function will favor certain evolutionary directions and discourage others. But this is just as true of natural selection as it is of Avida. I can confidently predict that natural selection will not produce neon-green Arctic hares that stand out like beacons against a background of snow. If you think the Avida fitness function is “sneaking” information into the model, then the same must be true when natural selection favors white Arctic hares that blend in with the snow and rules out the neon-green variety.

    A criticism often levelled against the Avida ‘EQU’ experiment is that the fitness function was ‘rigged’ to reward intermediate steps. As the argument goes, Avida was only able to reach the EQU goal because of the intermediate rewards. But this is exactly what the Avida folks intended, as they acknowledge in their Nature paper:

    Some readers might suggest that we ‘stacked the deck’ by studying the evolution of a complex feature that could be built on simpler functions that were also useful. However, that is precisely what evolutionary theory requires, and indeed, our experiments showed that the complex feature never evolved when simpler functions were not rewarded.

    Their whole point was to show that if there were intermediate rewards, an IC result could be achieved.

    The argument then comes down to this: are there intermediate rewards in nature’s fitness functions?

    It would appear that the only option left to ID supporters is to contend that such intermediate rewards will only occur in a fitness function that is designed by an intelligence, and never in a “natural” fitness function.

    How would you support this contention?

  150. BC wrote above, “What happens in Genetic Algorithms is this: Start with some organisms. There is some mutation…

    In other words, start with a massive biological set of information, followed by mutation and so on.

    Starting with existing complexity begs the question, doesn’t it?

  151. Karl,

    I think DaveScot’s point about rocks to the moon is where your myopia originates. Can you tell me what the search area is in AVIDA? IOW, with how much ‘raw material’ do you start? Do the rules in AVIDA mirror real world physics?

    You know, I don’t have a problem with you or other darwinists inferrring the complexity and origin of life had no intelligent cause and ‘just is’. But you can in no way model all the forces involved in the onward march of life and haven’t the lifespan or manpower to observe what you claim happened to produce life as we know it.

    ID does the same thing. If ID isn’t science because intelligence is inferred, then neither is darwinism.

  152. 152

    DaveScot wrote:

    Not only is gate level verification the most robust it’s not even good enough anymore.

    Dave,
    You’re confusing timing verification with simulation. Obviously, timing analysis has to take low-level physical properties of the circuits into account. This is done piecemeal on small portions of the design.

    But I repeat, nobody simulates an entire microprocessor at the transistor level. It is simply the wrong level of abstraction to use, just as it makes no sense to model beach erosion by tracking each grain of sand individually.

  153. 153

    Todd:
    Starting with existing complexity begs the question, doesn’t it?

    Yes, it would, if that is what really happens. BC was giving a very loose description of GAs. The initial population of most GAs is random. The biggest concern is making sure that every possible allele is available for every gene. In a binary GA that would mean making sure that every position in the bit string had 50% 1s and 50% 0s in the population.

    There is a small literature about building the initial population in some “optimal” way in GAs, and in GP there are a lot of heuristics people use, but it comes down to ensuring diversity is available.

    A researcher that includes some specifically designed individuals in the initial population is betting they are smarter than the algorithm. In some uses of GAs for purely practical tasks of optimising something, this is fine. In a research program to discover the native power of the algorithm, it is cheating and un-ethical behavior.

    OT – congrats on leaving comment 150! This one of the best discussions I’ve seen here at UD.

  154. 154

    Scott wrote:

    Now, your challenge is to demonstrate how Avida proves that blind, comatose, natural mechanisms can build highly complex, specified, cellular machinery which requires all of it’s components simultaneously to function.

    Scott,
    Why on earth is it “my challenge” to explain something I don’t believe and which I explicitly disavowed earlier in the thread?

    I wrote:

    Avida has shown that a Darwinian process is capable of producing irreducible complexity. Does that prove that biological evolution can also do so? Of course not. But what it does do (and this is extremely significant) is to show that nothing about IC is inherently unreachable by a Darwinian process.

    Scott, debate is much more productive when you don’t invent your opponent’s views out of whole cloth.

    DaveScot wrote:

    Any complexity produced in a stepwise fashion by a computer is by definition not irreducible.

    Dave,
    I’ll let you fight it out with Behe and Dembski, who have different ideas:

    Behe:

    By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.

    Dembski:

    A functional system is irreducibly complex if it contains a multipart subsystem (i.e., a set of two or more interrelated parts) that cannot be simplified without destroying the system’s basic function.

    IC precludes a stepwise buildup while maintaining the same function. It does not preclude a stepwise buildup via different intermediate functions, as in Avida’s path to the EQU function.

  155. BC: “You can read what I wrote again if you want, but if you don’t understand it, you’re not going to understand it.”

    I had read the entire thread before commenting. What I lack is your faith in the power of a chance and necessity, not understanding. The only one of your comments (25, 60, 121, 129) that is relevant is the one I took your quote from, #129. In that comment you said, “the mutated organisms that survive in their environment (sometimes because they have the best genes) tend to produce the most offspring.” This leads to the classic Darwinian tautology: the survival of those who survive. (What does the word “best” mean your sentence?)

    BC: “I already described the role goals play in genetic algorithms, and why natural selection acts as a perfectly good replacement for the predefined goal in genetic algorithms.”

    Your claim that the nebulous “evolution towards better survival” is equivalent to an “externally-defined goal” is groundless. You believe it. Prove it. Show me a computer program that can generate irreducible complexity or complex specified information without intelligently-designed fitness functions (or the equivalent).

    BC: “My description is a more detailed explanation of what Altenberg says; they aren’t incompatible.”

    They’re incompatible. To be effective GA’s must have fitness functions that are correlated with a specific goal. “Better survival” is vacuous.
    _____

    Karl Pfluger: The print edition of M-W defines “Darwinian” as “of or relating to Charles Darwin, his theories esp. of evolution, or his followers.” I sense definition creep.

  156. 156

    todd asks:

    Starting with existing complexity begs the question, doesn’t it?

    What question does it beg? If the object of the simulation is to see whether a Darwinian process can endow “organisms” with a novel feature not present in their ancestors, it would be pointless to start from scratch. We don’t include the Big Bang each time we simulate an assembly line, do we?

    I think DaveScot’s point about rocks to the moon is where your myopia originates.

    What unwarranted extrapolation do you think I am making? (Please refer to something I’ve actually written, and not just to your assumptions about what I believe).

    …you can in no way model all the forces involved in the onward march of life and haven’t the lifespan or manpower to observe what you claim happened to produce life as we know it.

    As I pointed out before, nobody claims to be doing detailed “photorealistic” simulations of evolution as it actually unfolded on earth. What they are mostly trying to do is to determine the capabilities and limitations of Darwinian processes, in hopes that (among other motivations) this will shed light on the process and trajectory of biological evolution.

  157. 157

    j wrote:

    The print edition of M-W defines “Darwinian” as “of or relating to Charles Darwin, his theories esp. of evolution, or his followers.” I sense definition creep.

    As I said, the label you apply to the process is unimportant. The important questions are the ones I raised at the end of my comment:

    The argument then comes down to this: are there intermediate rewards in nature’s fitness functions?

    It would appear that the only option left to ID supporters is to contend that such intermediate rewards will only occur in a fitness function that is designed by an intelligence, and never in a “natural” fitness function.

    How would you support this contention?

  158. Karl Pfluger wrote:

    There’s much to disagree with in your last few comments, but let me concentrate on your most egregious statements. . . . Here’s the source of your confusion: . . .

    Karl, I’ve already spoken about the condescending attitude you’ve displayed on UD. Are you completely unable to curb it? To presume that it is I who is confused, and to proceed to lecture me like one of your undergraduates, is not only inappropriate, but presumptuous.

    You quote me as saying/writing:
    “Karl, you seem to be missing the point that if the proteins don’t fold properly, then biochemical function comes to an end. I hope you understand that. And, of course, NS can’t act on something that is not biologically active. Hence, the massive amount of computer power required to search out the proper solution to folding is an undertaking that “random mutation and NS” would in some way have to deal with.”

    To this you reply:

    . . . you think that NDE must “understand” or “compute” protein folding in order to find proteins that fold “properly” and enhance the fitness of the organism possessing them. That is a complete misunderstanding of how NDE works.

    Tell me, how did you arrive at such a conclusion? What did I write that would substantiate this claim? I make a very simple and straightforward point: if the random computer search for proper folding proteins involves tremendous computing power, this implies that the “search space” for such proper folding is huge; thus, in the real world, RM+NS must find its way through this same amount of search space, which in turns requires a huge amount of time. This huge amount of time represents a severe constraint on the viability of a RM+NS scenario. Your suggestion that NDE doesn’t need to “know” if a protein folds properly is completely beside the point. That in no way affects the size of the “search space”. What is to the point is that if the “search space” is so huge, the odds of nature randomly coming up with just the right one is hugely small. And (and this was also part of the point I was making) in the meantime properly folding proteins are needed for life, a fact that militates even more against such a random solution.
    ___________________________________
    I wrote:

    You seem to be suggesting–and quite earnestly it appears–that Avida is more “real” than nature, and that if Avida can produce IC, then this proves that nature ought to be able to do it as well, leaving IDers in the position of having to prove that nature can’t produce IC.

    You respond:

    I’d be curious to know where you got that impression, for I’ve said nothing of the kind.

    But, of course, you did. You said:

    ‘If you want to argue that biological evolution cannot produce IC, you can no longer simply say “Of course evolution can’t produce IC, because no mindless Darwinian process can ever produce IC.” You have to come up with specific reasons why biological evolution, unlike Avida, cannot generate IC.’

    How else can you possibly interpret your rejection of the phrase: “Of course evolution can’t produce IC, because no mindless Darwinian process can ever produce IC.” In rejecting this phrase, you’re making absolutely no distinction between the world of biology and the world of computer simulation. You’re, in fact, equating them. This is patently clear, no matter how much you protest it isn’t so.
    You wrote: “Abstraction is part and parcel of my work as a compute engineer.” Have you taken it too far ? (And I don’t say this to attack you.)

  159. Karl

    As I pointed out before, nobody claims to be doing detailed “photorealistic” simulations of evolution as it actually unfolded on earth. What they are mostly trying to do is to determine the capabilities and limitations of Darwinian processes, in hopes that (among other motivations) this will shed light on the process and trajectory of biological evolution.

    Fair enough. Let’s recap what was actually done.

    Avida proved that a trial and error algorithm can cobble together an EQU instruction out of microcode.

    I already knew it could be done. Programmers and indeed every person on the planet with a pulse uses trial and error to find solutions to problems. [yawn]

    Let’s be clear about what it did not produce. It did not produce irreducible complexity. It used a stepwise process to produce what it did. If a structure can be produced by a stepwise process, where at each step the structure functions in some meaningful way that makes it worth keeping, then it is not an irreducible structure.

  160. Karl Pfluger: “As I said, the label you apply to the process is unimportant.”

    Compared to the issues, I agree. However, I would still maintain that to avoid confusion, one shouldn’t use “Darwinian” for a process that utilizes intelligence. Just as one shouldn’t use “evolutionist” when one means “Darwinist.”

    Karl Pfluger: “The important questions are the ones I raised at the end of my comment: The argument then comes down to this: are there intermediate rewards in nature’s fitness functions? It would appear that the only option left to ID supporters is to contend that such intermediate rewards will only occur in a fitness function that is designed by an intelligence, and never in a “natural” fitness function. How would you support this contention?”

    My reply to BC addressed your questions. It’s you who need to support your contention that non-intelligent processes can do what you claim. We know that intelligent processes are capable of generating IC and CSI. We don’t know that non-intelligent process are, and yet you maintain that thay are.

  161. Karl

    I never said anyone modeled microprocessors at the transistor level. That’s a straw man. Tom English put those words in my mouth. He said modeling evolution at the protein level is like modeling processors at the transistor level. I replied with an article talking about modeling processors at the gate level. I presumed Tom knew that gates are just a few transistors each and wouldn’t quibble. But of course to save your egos both of you did continue to quibble.

    In point of fact electronics are modeled and understood even at the quantum scale as necessary. I suspect both you and English knew that but are simply too intellectually dishonest to admit that biological systems are not well enough understood to model them like a microprocessor.

    You’re done here, Karl. I find your dishonesty offensive.

  162. 162

    You can’t reason with a Darwimp. They are congenitally deaf, like most pure white cats, to what Einstein called “the music of the spheres.”

    The only problem with banning them is it deprives unimpaired minds of the great pleasure of openly ridiculing them. It is neither fitting nor proper to lampoon them if they aren’t here to absorb and digest it.

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  163. (Off Topic)

    Well, I rather enjoy reading the interchanges among many differing views, even if certain parties argue in the circular, with strawmen, by association, false dilemma or using misrepesentation.

    John Stuart Mill, writing in On Liberty:
    But the peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error. (emphasis mine)

    The moderators here are certainly ‘the government’ of this blog and commenters are free to post elsewhere, so the context of this quote (political free speech) doesn’t apply – the emphasis above is what I’m getting at.

    Tom and Karl did come off at times as condescending however, neither were vulgar nor defamatory. We still lose even though DS may be right about intellectual dishonesty because other discussions with those individuals are cut off, so probes from different angles are lost. Those who read but don’t post are robbed as well.

    The funny thing about people is how belief shapes perception – the theory of intelligent design presents a threat to the believing materialist’s deepest held convinctions about reality, just as darwinism did to believing theists. The biggest difference is that many materialists sneer at faith and refuse to acknowledge their world view is also shaped by faith. What DS calls intellectual dishonesty is more a defensive mechanism – for a materialist to ‘see’ design requires a willingness to be humbled.

    Anyway, seeds are sown in discussions like this, despite bluster and denial, to all who follow the thread. Germination varies by individual and removing two willing and mostly polite foils effectively throttles reason’s water.

    My $0.02, anyway, take it or leave it.

  164. The emphasis above showed on the preview but not in the post – here’s the quote again, I’ll use bold instead of underline this time:

    But the peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error. (emphasis mine)

  165. Todd: “Tom and Karl did come off at times as condescending however, neither were vulgar nor defamatory.”

    Right. I think Tom is rather arrogant at times. And he wacked me with a personal zinger once (which I no doubt deserved, since I am given to impetuous posts on occasion, and rarely proof or edit what I write.) But I’d like to see them both continue here. Their involvement has been valuable overall.

  166. Todd: “The funny thing about people is how belief shapes perception – the theory of intelligent design presents a threat to the believing materialist’s deepest held convinctions about reality, just as darwinism did to believing theists.”

    Well put. Materialism really is a philosophical reality to them, not merely the “methodology” as many of them assert. From what I can tell, most of them that claim to hold a methological naturalism really hold a philosophical materialism. They simply hate hate hate the idea of a designer/creator/god/higher power. Simple as that. It’s visceral. It’s emotional. It’s non-rational. When you get past all the surface arguments, like Dawkins latest whining, these is what neutral, genuinely agnostic, people are up against. And it’s an entrenched ideology and it’s going to take some doing to bring it down. What is at stake is basically a religion. Humans have been known to go to war over such things.

  167. 167

    Studies with separated identical twins have made it very apparent that a belief or lack of same in a Creator has a congenital basis. The extent to which this can be reversed is problematical. Atheism, political,liberalism, ethical and moral relativism and Darwinism are all clearly correlated and may be pleiotropic expressions of the same congenital condition. Once recognized, this immediately explains why internet debates are hopeless enterprises and a huge waste of everybody’s time. We are all victims of our prescribed fates. As the title of William Wright’s book proclaims, we are “Born That Way.”

    It is hard to believe isn’t it?

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  168. 168
    Reciprocating Bill

    PaV wrote to Karl:

    “You seem to be suggesting–and quite earnestly it appears–that Avida is more “real” than nature, and that if Avida can produce IC, then this proves that nature ought to be able to do it as well, leaving IDers in the position of having to prove that nature can’t produce IC.”

    PaV – I think you have misread the logic of Karl’s statement.

    By analogy:

    We have the Smith brothers: Bob and Bill.

    It has been asserted that Bill will never make good, because no Smith will ever make good.

    But Bob worked hard and became a billionaire.

    Hence you can no longer maintain that Bill will never make good because no Smith will ever make good. You must show why Bill, specifically, will never make good.

    However, to assert this is not to prove that Bill will make good.

    Similarly, Karl did not assert that Avida’s success in creating IC *proves* that biological evolution can create IC.

    Rather, Karl asserted that Avida shows that blind evolutionary processes generally can produce IC. Hence you can no longer maintain that biological evolution, a subset of the class of blind evolutionary processes, can’t produce IC because blind evolutionary processes can’t produce IC. You must now show why biological evolution, specifically, can’t produce IC.

    However, this does not *prove* that biological evolution *can* produce IC, and Karl has not maintained that.

  169. 169

    j:
    By your defintion, dog breeding would be a Darwinian process.

    Yes, I think that Darwin in his own writings used some examples from animal and plant husbandry – artificial selection. I’m not a big enough Darwin aficianado to give you a quote, though. Sorry I didn’t post earlier, we might have avoided an unecessary definition war.

    So dog breeding is a Darwinian process that uses artificial selection pressure to manipulate dog shapes, and evolution is a Darwinian process that uses natural selection. The core Darwinian idea is heritable variation and any kind of selection. That is why Darwin also worked on sexual selection, it isn’t survival of the fittest, its survival of the cutest!

    For the most part Darwinian evolution doesn’t head in any particular direction, IMHO. Lots of current species occupy the same niche as lots of extinct species, all the way down to convergent evolution of specific body features. Too many of the mechanisms just lead to drifting populations. But there are a few mechanisms that do give a gradual directionality to life’s parade – adding more diversity in the form of more complex species over time. One of the most interesting uses of GA/GP to me is to discover which mechanisms are responsible for which kind of effects.

    OT – DaveScot, a question if I may? If someone is “outtahere” is that relative to the thread or the blog? I’m relatively new to UD and I don’t know the ground rules very well.

  170. Recip Bill,

    I think Davescot’s point on scale effectively answered Karl – nevermind that AVIDA has a comparatively miniscule search space and nevermind that what Karl calls IC in AVIDA is built step-wise, which means the complexity was built gradually and can therefore be reduced gradually, thus isn’t really IC.

  171. DvK

    Out of here is the whole blog.

  172. I believe Karl was booted by Bill Dembski and then let back in by popular demand (including mine). He then proceeded to annoy Admin Scott with unsubstantiated crappola and finally me. Three strikes and you’re out. Everyone is entitled to their own opinion of Karl’s expertise, of course, but in the end only a few of those opinions matter in deciding whether or not his contributions are a net gain or loss in producing substantial discussion. It was simply too much work correcting vacuous claims he failed to relinquish and kept reiterating. There’s only so much we can tolerate. He said what he wanted to say (more than once) and none of it has been removed. It’s there for posterity.

  173. Recip

    Avida DID NOT generate IC. Because of the way Avida operates every structure it produces is by definition not IC because it was generated by stepwise path. If Avida were working with true biochemical structures it could in principle find a reduction pathway that generates a bacterial flagellum and thus prove that a bf is not irreducibly complex. It can never generate an IC structure because its principles of operation unavoidably and exclusively find only Darwinian pathways.

    If you continue to posit that Avida produced IC you’re going to be the next so-called “formidable” (read bullheaded Darwinian dogmatist) opposition to get banned. We’re all getting really tired of explaining over and over why Avida can’t produce IC structures.

  174. Recip Bill:

    Hence you can no longer maintain that Bill will never make good because no Smith will ever make good. You must show why Bill, specifically, will never make good.

    Bill and Bob have the same parents. Their nature is the same. I have human nature, but I don’t have Bill and Bob’s nature, except by analogy. Hence to say that no Smith will ever come to any good says nothing about me at all.

    Avida and nature–at best!–share analogies. (Has it been proven that RM+NS can lead to macroevolution? This is what is argued on this blog.) That Avida supposedly has developed IC says very little, if anything at all, about nature. First, Dave Scot has pointed out that if Avida reaches IC in a stepwise fashion, then logically, this IC unfashioned in a stepwise fashion. Secondly, they have very expensive computer models trying to predict the weather. And they’re not very successful at it over longer periods of time. There is no one-to-one correspondence between their models and what nature does, so why should we accept that now, with Avida, we have such a correspondence?

    These kinds of assertions–that what a computer supposedly does and what life does is the same–won’t fly here. No one is buying that argument.

  175. me: “By your defintion, dog breeding would be a Darwinian process.”

    DvK: “Yes, I think that Darwin in his own writings used some examples from animal and plant husbandry – artificial selection.”

    How about if we invent a definition of “intelligent design” that includes unintelligent processes. After all, Dembski mentions them in his writings. Then the two sides will be equal, and the whole issue will be resolved. ;-)

  176. 176
    Reciprocating Bill

    “If you continue to posit that Avida produced IC you’re going to be the next so-called “formidable” (read bullheaded Darwinian dogmatist) opposition to get banned. Your name is now on the moderation list.”

    My post was written to illustrate that PaV (IMBDO) did not understand the logic of a particular comment made by Karl. Hence the Smith family. In doing so I reproduced Karl’s assertion to underscore the analogy between my illustration and Karl’s statement. I don’t otherwise have familiarity with Avida or its products, nor have I made any statements about this particular simulation and IC.

    That said, It is obviously an empirical question whether IC structures as defined by Behe and Dembski (quoted above) can arise by stepwise means. Karl’s assertions were on point (although open to debate vis correctness) vis these definitions.

    In contrast, the sacred cow definition of IC in 170 – essentially, “complex structures that cannot be built step-wise” places the possibility of IC structures built by NS out of reach by definitional fiat. As the Church Lady said, “How conveeeeenient.”

    It would be helpful if ID would settle on one definition or the other.

  177. recip

    Both Behe and Dembski have conceded that exaptation may produce what otherwise appears to be irreducible complexity. Again, Avida proves nothing new. It did not produce irreducible complexity, it merely demonstrated what was already conceded. The bottom line remains that Avida can only produce structures which have a stepwise reduction path which unavoidably means they were not irreducible. Avida can, in principle, prove that a structure is not irreducible but it cannot prove that irreducible structures can be produced by Darwinian pathways. This is why I stated that the model must be testable against reality to have any real meaning. It must have a target that is an actual biological structure, such as flagellum that ID posits is irreducible, and then expose a reduction pathway. A structure as complex as a flagellum, with some 40 proteins that in and of themselves are complex, is much more complicated by the fact that millions of those component parts must be assembled in a precise fashion to become a functional end product. The origin of the assembly procedure is the thornist issue, not the origin of the few components that go into making it. An EQU opcode cobbed together from a few microcode primitives is hardly comparable.

    If modeling real biochemistry is too difficult at this time that doesn’t make it valid to equate a comparatively simple digital organism with real biological organisms.

    A more impressive demonstration of Darwinian pathways would be to begin with component library of basic gears, levers, pistons, water, coal, fire, etcetera, suitably changeable by random mutation, and see if a Darwinian process can modify and assemble them into a steam engine. There’s no obstacle in not being able to adequately model the component parts as there is in modeling protein based organic machinery. I wouldn’t dismiss that demonstration of the power of Darwinian evolution with a yawn, that’s for sure. I won’t hold my breath waiting for such a demonstration because I don’t think it’s possible in any environment with finite bounds. Only intelligence can assemble such structures within reasonably limited bounds of time and space. Given infinite time and/or space then not only can Darwinian processes produce anything, it must produce everything physically possible. That’s simply the nature of infinity.

  178. Recip Bill:

    In response to Dave Scot:

    My post was written to illustrate that PaV (IMBDO) did not understand the logic of a particular comment made by Karl. Hence the Smith family.

    I hope my response to you points out to you the lack of logic in Karl’s assertion. Only in a completely “abstract” way is such a statement correct. But biology is not completely “abstract”. So when Karl wants to go one step further–a step he kept insisting on–that those who propose ID as a better interpretive explanation of biological complexity must now accede that “nature” can arrive at IC in a random, Darwinian fashion, he’s overstepped reason and logic. He’s made, not a step in logic, but a flat-out assertion. And it is pure hubris to then make this assertion and accuse everyone who doesn’t accept it as being “too dumb” to understand this vaunted logic that is being proposed. You seem to have joined him in this enterprise based on your statement above.

    I now understand how this particular thread got started, and why Gil started it with the example he did. Obviously Karl, and probably yourself, were arguing with Gil on an earlier thread about the implications for “nature” that Avida has now demonstrated (conceding for the moment that Avida has built an IC structure.) Again, no one here buys that argument.

    But there’s more. And that has to do with Avida. If you’re a computer scientist/engineer, and you have some deep-seated desire to demonstrate that a “Darwinian process” can produce IC, my suspicion is that when the first models don’t produce this IC, that some tweaking takes place. And, after enough trials and tweaking, lo and behold, IC appears (of course, I suspect that their definition of IC and my definition probably won’t be the same). But even conceding that this is “real” IC, it is almost 100% certain that in the “tweaking” that has been done, some kind of information has been snuck in.These same computer scientists/engineers would most presumedly say, “We didn’t do anything of the sort.” My experience so far is that unless information is snuck in, nothing happens.

  179. 179

    Why are Darwimps allowed to spout here when they won’t allow rational people to spout at their own little “groupthink” internet citadels?

  180. 180

    I didn’t believe that anyone could still be so weak minded as to imagine that the production of dog varieties ever had anything to do with creative evolution, but here we have David vun Kannon claiming exactly that.

    It is hard to believe isn’t it?

    I love it so!

    “A past evolution is undeniable, a present evolution undemonstrable.”
    John A. Davison

  181. 181
    Reciprocating Bill

    DS said:

    “Both Behe and Dembski have conceded that exaptation may produce what otherwise appears to be irreducible complexity.”

    I want to parse what you are saying correctly in the context of this exchange. So what follows is a real (not rhetorical) question:

    Is it correct to express their concession as, “We concede that stepwise processes (exaptation, scaffolding, etc.) can create structures that are indistinguishable from true IC structures, when evaluated in terms of the Behe/Demski definitions quoted above. However, these structures are not, by definition, truly IC because they were created by stepwise processes.”

    Is that correct?

  182. 182
    Reciprocating Bill

    PaV said:

    “I hope my response to you points out to you the lack of logic in Karl’s assertion. Only in a completely “abstract” way is such a statement correct.”

    Well, you dismiss the “abstract,” logical structure of such assertions at your peril. The hazard is that you will come away with a meaning other than was intended by the speaker. As I read it, and again IMBDO, you’ve parsed (great word) Karl’s statement incorrectly and attributed to him an assertion that I don’t read him as having made, at least not in the exchange to which I was referring.

    But, why thrash it further?

  183. 183

    Dr. JAD:
    I didn’t believe that anyone could still be so weak minded as to imagine that the production of dog varieties ever had anything to do with creative evolution, but here we have David vun Kannon claiming exactly that.

    Er, no. What I said was that Darwin, himself, used artificial selection as an example of a process.

  184. 184
    Reciprocating Bill

    PaV:

    “I now understand how this particular thread got started, and why Gil started it with the example he did.”

    It started with the thread entitled “A Realistic Computational Simulation of Random Mutation Filtered by Natural Selection in Biology.” If you haven’t read that, you’ve started in the middle.

    “my suspicion is that when the first models don’t produce this IC, that some tweaking takes place…. ”

    Just so we are clear: You just made all that up.

  185. recip

    No, that’s putting words in their mouths. What I said requires no parsing into other words.

  186. Recip B -

    That isn’t correct. The ‘irreducable’ part of IC is the key. They concede complexity can arise step-wise. When one encounters a complex system which will not function without interacting essential components, one encounters something that cannot be reduced and still retain function.

    A wind up clock is an example. Core gears work in unison, driven by a loaded spring. Pop open a wound clock a and start removing parts and see how long it keeps time. For this type of clock to work the core components must be assembled at the same time. Once assembled it is irreducably complex.

    And that’s just the base consideration – as it applies to biology, one also has to factor how the complex structure fits into the overall scheme. Consider how ATP in required for cellular life and is produced in a complex machine – where is the step-wise explanation/demonstration of how cells lived while this complex assembly was gradually arising, being ‘selected’ by natural forces? Does ‘Science’ fully understand the genetic blueprints for this assembly? Which parts of the DNA code for the proteins needed for ATP synthase? Do we even know :?:

  187. Dang, I wish you guys let us edit our posts like Mike Gene does….

    Recip, the above refers to #181

  188. 188

    PaV:
    I now understand how this particular thread got started, and why Gil started it with the example he did. Obviously Karl, and probably yourself, were arguing with Gil on an earlier thread about the implications for “nature” that Avida has now demonstrated (conceding for the moment that Avida has built an IC structure.)

    Actually, it got started in a very different way. Go back to Gil’s entry of Sep 28 to see the gory details. Avida didn’t enter seriously into the conversation on that thread or the first 50 comments on this one.

    That’s actually pretty sad, because upon rereading Gil’s blog entry, it actually makes more sense when applied to Avida and its ilk than anything else. Tom, Kurt, and someone else jumped on Gil for making a broad statement about simulation that was just silly, and Gil eventually stated that his initial blog entry on that thread should have been taken as sarcasm.

    That is sad, because applied to Avida, there is a certain sense talking about mutating the CPU instructions – it is the kind of meta-GA speculation (use a GA to tune the GA parameters!) that occasionally pops up. If that had been Gil’s point, and if it had been clearly stated, we could have avoided this thread entirely!

    I personally try to steer clear of most Avida discussion. My knowledge of its workings are extremely rudimentary. I think a lot of A-Life work is still at the digital antfarm level and it is way too easy to take results from Avida (in evolution) or Sugarscape (in social theory) and make broad generalizations from them.

  189. It’s nothing short of hilarious that KeithS and others at ATBC that have obviously not done a single bit of gate level hardware design in their lives are talking about how simulations of gate logic intended to verify a design prior to laying copper need only be modeled with boolean logic. The poor ignoramuses know nothing about analog considerations such as supply rail loading, bus loading, propagation delays, and race conditions just to name a few show stoppers that aren’t covered in simple boolean logic. :lol:

    In a demonstration of either total cluelessness or dishonesty, in all the commenters there, not a single one has stepped up to correct them. Surely Wesley or someone there knows enough about digital hardware design to tell them there’s a lot more to it than boolean algebra. That’s called a lie of omission. Shame on them.

  190. >>Pav: “my suspicion is that when the first models don’t produce this IC, that some tweaking takes place…. ”

    >>Reciprocating Bill: “Just so we are clear: You just made all that up.”

    Qualifying it as a suspicion implies it is something he doesn’t know to be true.

    And just so we are clear, that’s the last bit of stupidity you’re going to be posting here. Hasta la vista, baby.

  191. 191

    Who is left?

  192. “Who is left,” you asked John?

    Dave Scott, of course. And you. And if I post any more stories that contradict the “big bang”, and upset Mr. Scott, I’ll be gone too. Probably he’s programming the server as I type. Then you’re next, John.

    Well, without us both for comic relief, I think U.D. will stand for “Uncommonly Dull”!

  193. “Dog varieties” — John, have you read Sheldrake on poodles? Really?

    Of course, the question is this verifiable? And before you scoff, Dr. Dembski himself views consciousness as a potentially external phenomenon.

    http://www.sheldrake.org/Artic.....intro.html

    I suggest that morphogenetic fields work by imposing patterns on otherwise random or indeterminate patterns of activity. For example they cause microtubules to crystallize in one part of the cell rather than another, even though the subunits from which they are made are present throughout the cell.

    Morphogenetic fields are not fixed forever, but evolve. The fields of Afghan hounds and poodles have become different from those of their common ancestors, wolves. How are these fields inherited? I propose that that they are transmitted from past members of the species through a kind of non-local resonance, called morphic resonance.

    The fields organizing the activity of the nervous system are likewise inherited through morphic resonance, conveying a collective, instinctive memory. Each individual both draws upon and contributes to the collective memory of the species. This means that new patterns of behaviour can spread more rapidly than would otherwise be possible. Foe example, if rats of a particular breed learn a new trick in Harvard, then rats of that breed should be able to learn the same trick faster all over the world, say in Edinburgh and Melbourne.

    Bill wrote, and that’s why I think Shakespeare wise in this well worn quote from Hamlet:

    And therefore as a stranger give it welcome.
    There are more things in heaven and earth, Horatio,
    Than are dreamt of in your philosophy.

    http://www.designinference.com.....chines.htm

    Certainly, if we knew that materialism were correct, then supervenience would follow. But materialism itself is at issue. Neuroscience, for instance, is nowhere near underwriting materialism, and that despite its strident rhetoric. Hardcore neuroscientists, for instance, refer disparagingly to the ordinary psychology of beliefs, desires, and emotions as “folk psychology.” The implication is that just as “folk medicine” had to give way to “real medicine,” so “folk psychology” will have to give way to a revamped psychology grounded in neuroscience. In place of the psychologist’s couch, where we talk out our beliefs, desires, and emotions, tomorrow’s healers of the soul will ignore such outdated categories and manipulate brain states directly.

    At least so the story goes. Actual neuroscience research is by contrast a much more modest affair and fails to support such vaulting ambitions. That should hardly surprise us. The neurophysiology of our brains is incredibly plastic and has proven notoriously difficult to correlate with intentional states. Louis Pasteur, for instance, despite suffering a cerebral accident, continued to enjoy a flourishing scientific career. When his brain was examined after he died, it was discovered that half the brain had atrophied. How does one explain a flourishing intellectual life despite a severely damaged brain if mind and brain coincide?

    Or consider a more striking example. The December 12, 1980 issue of Science contained an article by Roger Lewin titled “Is Your Brain Really Necessary?” In the article, Lewin reported a case study by John Lorber, a British neurologist and professor at Sheffield University:

    “There’s a young student at this university,” says Lorber, “who has an IQ of 126, has gained a first–class honors degree in mathematics, and is socially completely normal. And yet the boy has virtually no brain.” The student’s physician at the university noticed that the youth had a slightly larger than normal head, and so referred him to Lorber, simply out of interest. “When we did a brain scan on him,” Lorber recalls, “we saw that instead of the normal 4.5–centimeter thickness of brain tissue between the ventricles and the cortical surface, there was just a thin layer of mantle measuring a millimeter or so. His cranium is filled mainly with cerebrospinal fluid.”

  194. PaV: “I now understand how this particular thread got started, and why Gil started it with the example he did.”

    Bingo. PaV figured it out.

  195. ROFLMAO!

    KeithS responds (paraphrased):

    “I knew all along about analog issues. No really, I DID!”

    And he still insists I said microprocessors are modeled at the transistor level when I clearly said gate level. Tom English and Karl Pfluger made up that straw man about transistor level.

    This is basically why KeithS is no longer here. He’s a lying sack without a clue.

  196. Oh Goody! Now 2ndClass wants to be the next clown I knock down. These people have not done any hardware design. They have not drawn schematics for many complex digital designs then sat thousands of hours in the drivers’s seat of a logic analyzer and oscilloscope debugging their own designs. That and programming is all I did for almost 25 years and I was really, really good at it.

    2ndclass sticks his foot in his mouth thusly:

    - “Simulations of gate logic” are only done with boolean logic. What other kind of logic do you think is simulated?

    - Contrary to your strawman, nobody here said that analog considerations aren’t important. They just aren’t part of gate-level modelling.

    But this article in EDN says:

    The most common form of logic simulation is event-driven, in which the simulator sees the world as a series of discrete events. When an input value on a primitive gate changes, the simulator evaluates the gate to determine whether this change causes a change at the output and, if so, schedules an event for some future time (Figure 3).

    Most event-driven logic simulators allow you to attach minimum, typical, and maximum delays to each model (Figure 4). When you run the simulator, you can select one of these delay modes, and the simulator uses that mode for all of the gates in the circuit. Also, some simulators allow you to select one delay mode as the default and then force certain gates to adopt another mode. For example, you might set all the gates in your datapath to use minimum delays and all the gates in your control path to use maximum delays, thereby allowing you to perform a “cheap and cheerful” timing analysis.

    What a dope. There’s much more at the EDN link.

  197. [(Clown's) coat on in readiness]

    I never said anyone modeled microprocessors at the transistor level. That’s a straw man. Tom English put those words in my mouth. He said modeling evolution at the protein level is like modeling processors at the transistor level.

    I was just looking over the thread, you had previously written:

    I’ve given you examples of real models in biology (protein folding), mechanical design (aircraft), and electronics (microprocessors). These all model the real world and can be tested by seeing if they duplicate the results obtained in the real world.

    I can find earlier references to aircraft and biology by you, but I can’t find the example you had previously given about microprocessors (So don’t know if you mentioned transistors, gates, neither or both). Should I suspect foul play here?

    I’m not convinced that Tom was putting words in your mouth. You said evolution simulations should work down at the level of protein folding and, AIUI, he says (in 37) that that would be like modelling computers at the level of transistors. He was, I believe, using a simile in fitting with your

    primary area of expertise. He didn’t say that you said “computers are modelled at the level of transistors” (or should be). Another generous interprestion was that he was using a BarryAesque “rhetorical flourish”, but I’d go with the first explanation.

    After Tom said modeling at the level of proteins and/or transistors is inappropriate, you told him “Your ignorance is showing again” (77) and posted a link and an extract from a page which basically said “IC’s contain a lot of gates and it costs a lot to verify them”. — that in no way contradicted what he had said or implied that he was showing ignorance.

    I think at this point, we bad guys assumed that you hadn’t read or understood what you had copied, and were just telling him that he was wrong: i.e transistors are the right level of granularity. Perhaps some one should have asked to elaborate a little.

    The next link you gave (in 143) said that existing tools model at the gate level, but that new models allow entire IC’s to be modelled at the transistor level (http://www.techonline.com/comm.....icle/21478)

    Reliable prediction requires detailed analysis of large blocks and even on the complete design — a problem well beyond the ability of conventional verification tools. Based on hierarchical analysis engines and advanced parasitic-reduction techniques, the newest generation of tools, such as Nassda’s HSIM and LEXSIM, are able to achieve the kind of full-chip, transistor-level, post-layout analysis required to accurately predict nanometer effects. Although this additional analysis will slightly extend time to tapeout, the ability to reduce or eliminate the risk of silicon

    re-spins more than compensates for slightly longer design time, offering cost savings that heavily outweigh those incremental engineering costs.

    You quoted a different part from the above, but I think it’s as (or more) relevant. It’s difficult for me to judge what point you were trying to make though, given the past history of this thread.

    2ndclass sticks his foot in his mouth thusly:

    - “Simulations of gate logic” are only done with boolean logic. What other kind of logic do you think is simulated?

    - Contrary to your strawman, nobody here said that analog considerations aren’t important. They just aren’t part of gate-level modelling.

    But this article in EDN says:

    AFAICT, those simulations are still using boolean logic but allowing for the fact that gates don’t switch instantly. Karl Plfuger mentioned the timing aspects ages ago. They are not analog simulations (using variable voltages). They mostly seem to allow inputs and outputs to be in one of two states. However there’s a short mention of a new three-state model which uses true,false, and unknown for dealing with cases where input pulses are short compared to the switching time).

    It’s not completely clear cut though – one of the graphs, #6, shows something which reminds me of a transistor load graph (or whatever), but I must admit I wasn’t really playing attention on the day we did those at college. My knowledge of transistors really ended at some water operated sluice gate metaphor from a junior electronics kit)

  198. KeithS continues the uncorroborated handwaving. I provide a links from sources like Electronic Design News and IEEE Proceedings that corroborate what I say and the ATBC clowns, in the true way of Darwinian chance worshippers, provide nothing but just-so stories. Color me unimpressed. The bottom line STILL remains that electronic designs are simulated as required at any level right down to the quantum scale while such simulation is quite impossible for biological systems because we don’t know how to model one of the most critical aspects of biological systems – predicting how an arbitrary string of amino acids will fold into the characteristic, unique, and oh-so-important 3D shape of the protein. Just as in simulating electronic designs doesn’t have to ALWAYS be done at such a basic level it’s extremely important that it CAN be done at that basic level.

  199. SteveS

    Electronic logic gates don’t change states instantaneously. The length of time it takes them to transition is an analog quantity called propagation delay. The actual delay is effected by many factors including bus loading, temperature, supply voltage, RC time constants of transmission paths, and manufacturing variances. Any simulation that incorporates propagation delay is not strictly boolean anymore. In order to successfully predict what any given arrangment of logic gates will do the prop delays must be accounted for. Often this is done by using synchronous designs and making the clock cycle time generous enough to encompass all conceivable delays. In many situations this is either impossible or impractical. Asynchronous inputs may be unavoidably present and/or clock cycle time made to safely work with the least common denominator is competitively or otherwise impractical.

  200. 2ndclass continues his uncorroborated enlightenment as well…

    2ndclass – as I’ve already stated, maybe it was too subtle for you, modern microprocessors use mosfets to construct gates. Moreover, the mosfets only operate in two states, on or off. Unlike older silicon transistors, mosfets require no resistive or capacitive elements. Thus instead of modeling a mosfet like a transistor capable of biased operation through a wide range of input/output voltages the mosfet can be treated like a simple on/off switch. As also stated, logic gates made of these basic components require 2 mosfets for an invertor and 4 for a nand gate. All other logic elements can be constructed of nand gates.

    I had assumed that I was talking with people who were sufficiently knowledgable to recognize that the difference between modeling a CMOS processor at the transistor level and the gate level is a quibble because the individual logic gates are composed of just a few simple on/off mosfet (transistor) switches.

Leave a Reply