
FEA, PR, E, Ro, EOS (Or, Why Darwinian Computer Simulations are Less than Worthless)

FEA = finite element analysis
PR = Poisson’s Ratio
E = Young’s modulus
Ro = mass density
EOS = equation of state
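
These quantities are not independent: for an isotropic, linear-elastic material, Young's modulus and Poisson's ratio fix the remaining elastic constants. A minimal sketch of the standard relations (the steel values below are illustrative assumptions, not taken from the post):

```python
# For an isotropic, linear-elastic material, Young's modulus (E) and
# Poisson's ratio (PR, usually written nu) determine the other constants.

def shear_modulus(E, nu):
    """G = E / (2 * (1 + nu))"""
    return E / (2.0 * (1.0 + nu))

def bulk_modulus(E, nu):
    """K = E / (3 * (1 - 2 * nu))"""
    return E / (3.0 * (1.0 - 2.0 * nu))

# Illustrative values for mild steel (assumed, not from the post):
E = 200e9   # Young's modulus, Pa
nu = 0.3    # Poisson's ratio, dimensionless

G = shear_modulus(E, nu)  # about 76.9 GPa
K = bulk_modulus(E, nu)   # about 166.7 GPa
```

Getting inputs like these wrong by even modest amounts is precisely the kind of error that empirical validation of an FEA model is meant to catch.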

Darwinian computer simulationists have no idea what I’m talking about, but they should.

A thorough understanding of FEA, PR, E, Ro, and EOS is a prerequisite for any computer-simulationist who hopes to have any confidence that his computer simulation will have any validity concerning the real world (and this just concerns transient, dynamic, nonlinear mechanical systems — nothing that even approaches, by countless orders of magnitude, the complexity, sophistication, and functional integration of biological systems).

Even with all of my understanding and years of experience, I would never expect anyone to accept the results of one of my FEA computer simulations without empirical verification. However, with a consistent track record of validated simulations within a highly prescribed domain (which I have) I can at least save much wasted effort pursuing what the simulations suggest will not work.

It is for this reason, and many others, that I consider Darwinism to be not just pseudoscience, but perhaps the quintessential example of junk science since the advent of the scientific method and rational inquiry concerning how things really work in the real world.

Darwinists have no idea what rigorous standards are required in the rest of the legitimate engineering and science world, and how they have been given an illegitimate pass concerning empirical or even rational justification of their claims.


68 Responses to FEA, PR, E, Ro, EOS (Or, Why Darwinian Computer Simulations are Less than Worthless)

  1. Gil,

    A thorough understanding of FEA, PR, E, Ro, and EOS is a prerequisite for any computer-simulationist who hopes to have any confidence that his computer simulation will have any validity concerning the real world…

    Seriously? You think that someone modeling an optical system needs to worry about Young’s modulus? Do you take color into account when you’re simulating a mechanical system? Of course not.

    Every simulation is an abstraction, Gil, including your beloved finite element analysis. Some details are always left out.

    Darwinists have no idea what rigorous standards are required in the rest of the legitimate engineering and science world,..

    Sure they do, since many ‘Darwinists’, including me, are scientists and engineers in fields other than evolutionary biology. This supposed lack of rigor pervading evolutionary biology exists only in your fevered imagination, Gil.

  2. I find it amusing that ID proponents simulate chemistry with simplistic references to information theory, completely ignoring the fact that there is no general model of protein folding, no theory of folding that would support design of folds, no database of coding sequences that would support design, and no knowledge at all of the functional — a prerequisite to any claim that incremental evolution is improbable.

    All the probability calculations published by ID advocates are complete BS, because there is insufficient knowledge of the functional landscape.

    When the landscape is tested in specific instances, as by Thornton, it supports evolution.

  3. Petrushka,

    What you tacitly assume is that the burden of proof is with the ID proponents and others who question the truth of Darwinism. What the probability calculations and consideration of the informational requirements of living organisms do is shift the burden of proof to Darwinism, given that there is no actual direct evidence that Darwinism is true (i.e., it has never been observed to generate one single macroevolutionary advance, not in nature, in the laboratory, or in the fossil record).

    Also, how can you say that Darwinism is any more than an unproved hypothesis, given your claim that there is no theory of folding and no way to calculate the probabilities? The problem with proponents of the theory is that they never do the math: they never attempt to calculate whether the changes required for the emergence of any novel body plan, organ, organ system, or process (such as blood clotting, sexual reproduction, or insect metamorphosis) are remotely probable, given the probabilistic resources available in the history of the Universe. Yet they claim that Darwinism is established. Yes, the calculations are difficult, maybe even impossible, but that doesn’t change the fact that they haven’t been done, and without the calculations to support the theory, it is no more than an idea waiting for confirmation or refutation.

  4. petrushka

    An average protein is about 300 aa long, giving a sequence space of 20^300. How much of that space do you think consists of sequences that yield a functional protein?
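    The size of the sequence space mentioned here is easy to compute exactly; a quick sketch (the 300-residue length is the commenter’s assumption):

```python
import math

length = 300     # assumed average protein length, in amino-acid residues
alphabet = 20    # size of the standard amino-acid alphabet

sequences = alphabet ** length           # exact integer: 20**300
digits = length * math.log10(alphabet)   # ~390.3, i.e. roughly 10^390 sequences
```

    Whether any appreciable fraction of that space is functional is, of course, the contested question.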

  5. champignon:

    This supposed lack of rigor pervading evolutionary biology exists only in your fevered imagination, Gil.

    Really? Then perhaps you can produce this alleged rigor pervading evolutionary biology.

    Heck you can’t even tell us how to determine if a bacterial flagellum evolved via stochastic processes.

    IOW your alleged rigor exists only in your fevered little mind.

  6. Darwinian computer simulations are worthless because no one knows if they simulate real-world biology.

  7. Gil, I know what all those terms mean, and I have devised (simple) Darwinian simulations.

    Please explain how those concepts are relevant to Darwinian simulations.

  8. Exactly. It seems that demonstrably inadequate simulations can be used to disprove evolution, while all simulations that attempt to demonstrate the power of evolution are regarded as, well, inadequate.

    I spy moveable goalposts.

  9. I know what all those terms mean as well, and so do most of the scientists I know who write evolutionary simulations.

    Your claims about evolutionary scientists are fallacious, and based on your own prejudices and lack of knowledge.

  10. I think Gil may be mistaking engineering simulations as to how a machine will perform in real life, with the Darwinian models of evolutionary processes, which are not fundamentally “simulations” at all. They are actual examples of evolutionary algorithms in action.

    What we can do, however, is to use such models to see whether we can reproduce real-life behaviour. If we can, and if the match is precise, we have support for the model as a model of the real-life process.

    For example, I work with learning models. They are not “simulations” – my models really learn. They can solve problems at the end that they couldn’t solve at the beginning. What is particularly interesting is the mistakes they make while learning, and the way they adapt (or do not) to changed contingencies.

    If the pattern of errors and flexibility (or not) in the behaviour of the model is a good match for the pattern of errors and flexibility in the system I am modelling, I have support for my model as a model of that system.

    In other words, we test the model as a model for the real life system under investigation by comparing outputs.

    This is quite unlike the engineering context. In the engineering simulations you refer to you are making a model of known mechanisms (hence your need for Young’s modulus, etc) in order to predict real life behaviour. In contrast, we are testing a hypothesis about unknown mechanisms in order to find out whether our model is a good one for the mechanisms underlying our real-life behaviour.

    The two cases are very different.

  11. The concepts of simulation are a perennial source of confusion for Gil, as the following two threads from 2006 reveal. Plus ça change…

    A realistic computational simulation of random mutation filtered by natural selection in biology

    Gil has never grasped the nature of a simulation model

  12. I find it much more helpful to frame the issue in the following manner:

    Any evolutionary algorithm (EA) instantiated through a computer programme is necessarily searching a finite space of possible “solutions” defined by the parameters of that programme. This is no different than biology, as we see, being defined by a finite (though vastly larger) number of possible arrangements of DNA, and whatever higher-order structures encode the information necessary for the existence and replication of life.

    The ability of the EA to move through that space to arrive at some “final” or “target” solution is dependent on a number of things which have or are expected to have direct analogs in real life; these are, chiefly:

    1) The “connectedness” of the “viable” solutions.
    2) A gradient in fitness exists between the solutions, such that one solution can be chosen among a number of alternatives.
    3) Means exist to escape local maxima in fitness space in order to find yet more advantageous solutions.
    4) The velocity at which the solution evolves is dependent upon the rate at which, from a given location, adjacent solutions can be tested and compared against the fitness of the current solution.

    In any “successful” EA, I think you can show that the properties 1-4 exist; you can draw a road-map of solutions from the start to the end (connectedness), and the solutions produced are inevitably those possible for that programme. In fact you get from A to B as inevitably as water, flowing downhill, overcomes local obstacles. The rate and route can be adjusted by varying the fitness definition and gradients.
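    The four properties can be made concrete in a toy (1+1) hill-climbing EA. The following is a sketch under simplifying assumptions (a “one-max” bit-counting fitness function, chosen only because its landscape is fully connected and smoothly graded):

```python
import random

random.seed(1)

GENOME_LEN = 40

def fitness(genome):
    # Toy "one-max" fitness: the count of 1-bits. Its landscape is fully
    # connected (property 1) and has a smooth gradient (property 2).
    return sum(genome)

def mutate(genome, rate=0.05):
    # Point mutations supply the variation that sets the search speed
    # (property 4) and give a chance of crossing flat or slightly
    # unfavourable regions (property 3).
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

genome = [0] * GENOME_LEN
for _ in range(5000):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):  # selection along the gradient
        genome = child

print(fitness(genome))  # typically at or near the optimum of 40
```

    Whether real biology resembles this deliberately friendly landscape is exactly the point in dispute below.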

    My opinion is that ID arguments saying EA’s are an inadequate and indeed misleading model of real evolution should be concerned with showing that in real biology points 1-4 are not nearly as favourably present, briefly:

    1) Solution space is not well-connected. This is basically the argument of irreducible complexity at one level, the argument of the sparseness of viable folded proteins among the combinations of DNA on another – that there is no way to get from “A” to “B” in tiny steps, and only intelligence can make the jump.

    2) ID generally argues that the gradient is not nearly so powerful in generating novelty as it is generally attributed. It is not even always able to operate as efficiently as might be expected, even if a path can be shown to exist. Hence the knock-out experiments which introduce a simple deleterious mutation and see if an organism can recover its original functionality.

    3) ID argues that biology, as we observe it, is “trapped” in the vicinity of existing fitness maxima. Everywhere we see optimized systems. Even in cases (such as bacteriological resistance) where there is some movement away from the “norm” it is achieved due to degradation and loss of function, and only in the face of a severe stress that upsets the current optimization. Remove the stress, and the “solution” tends to move back toward the original maximum. Nothing really new or useful has been created.

    4) The rate-determining step in evolution in biology is the rate of single and (much less likely) double mutations. Here, again Behe’s “Edge of Evolution” sums up this idea, using resistance to anti-malarial drugs as one of a number of examples.

    The evolutionist’s response, quite reasonably, is to try to demonstrate that these objections are overstated or inconsequential.

    I hope that is fair-minded explanation of the problem — Elizabeth?

    Exactly. I think we both agree that abstract numerical models, whether analytical or iteratively produced, are in no way a guarantee of truth or an adequate substitute for empirical results.

    The stochastic mutation camp needs to pipe down in back until they can show unassisted and significant mutations that are unquestionably a case of macro-evolution on the lab table. And the agency driven mutation camp needs to put a sock in it until biologists can demonstrate the same by human engineering practices.

  14. What is inconsequential from the teleological and ID viewpoints, is the whole question of evolution, which seems to make totally-obsessive idiot non-savants out of otherwise thoroughly decent, journeymen scientists.

    If protons, neutrons and electrons developed from photons on issuing from the Singularity, or in any case, the point of Creation, ex nihilo, (now scientifically confirmed, as reported, here), surely, the subatomic particles that quantum mechanics are able to empirically study today, being of the self-same provenance as their ‘advance party’ trail-blazers, existed prior to the development of space-time? And belong to an altogether different dimension – or, rather, ‘immension’.

    As the fundamental particles of physical matter, therefore, they indicate that as far as teleology is concerned, their configuration at the grosser level into vegetable or animal organisms is of sovereign irrelevance. Indeed the same can be said in the matter of abiogenesis.

  15. Dear French Mushroom,

    My imagination is not fevered.

    If you want to talk about fevered imagination, let’s put the abiogenesis/Darwinian-evolution thesis to the test.

    Please don’t give me the hackneyed “Darwinism has nothing to say about materialistic abiogenesis” line. Darwinists clearly assume materialistic abiogenesis as the basis of their philosophy.

    Fevered imagination is required to assume that dirt spontaneously self-generated and produced the first self-replicating cell, with all its highly functionally integrated information-processing systems, protein-synthesis machinery, and error-detection-and-repair algorithms.

    But the fever of the Darwinian imagination rises even further with the assumption that randomly-infused errors (whether filtered or unfiltered by natural selection) can magically transform that first (hopelessly improbable) self-replicating cell into Mozart.

  16. Gil,

    Before you hijack your own thread, could you address the objections that commenters have raised to your claims about simulations?

  17. “Darwinists have no idea what rigorous standards are required in the rest of the legitimate engineering and science world,”

    I can always tell within a few sentences when I’m reading something that will turn out to be written by Gil. There seems to always be a proud rendering of Gil the engineer, one without any formal training in engineering, and here he refers to some mystery collection of engineering standards. Many books for engineers (and scientists) have been written by mathematicians, for example: Kreyszig’s Advanced Engineering Mathematics, also Probability and Statistics for Engineers and Scientists by Walpole and Myers. Funny how these non-engineers can teach engineers without reference to this mystery collection of standards. I’ve done countless simulations using SPICE based software tools, Verilog, and Mathcad, without incorporating any of the above elements listed by Gil. I’ve also written C++ simulations for some of the most difficult problems, and these typically perform an iterative process that cannot be conceptualized and set up for tools such as Matlab and Mathcad. Actually this is why Bell Labs invented C++, for the sole purpose of tackling immensely complex simulations needed for their immensely complex networks.

    In reality FEA simulators are mostly written in C++ because, as the object-oriented language of choice, it is naturally suited to the complexity of those classes of problems. What I think Gil does not understand is that C++ is suitable for tackling unimaginably complex problems that have nothing in common with FEA or anything with which Gil has familiarity. So it is entirely likely that a hyper-motivated Darwinist could master C++ and construct a simulation that escapes Gil’s above categories and likely will escape his understanding.

    While I’m at it, is it possible for Gil to relate to us how far you progressed in higher mathematics? Did you study vector calculus or differential equations?

  18. It would appear that Thornton and others have taken up the challenge of demonstrating the connectedness of functional space.

    What I fail to understand is how a designer would get around a disconnected functional space. All the actual inventions by humans that I am aware of are either incremental or involve horizontal transfer, or both.

  19. That’s because the designer is God really, really intelligent.

    Petrushka: The mere fact you can string together a paragraph in a single iteration is evidence that human designers can get around a disconnected function space. Or did you create the response above by beginning with a single letter or word and applying a process of single mutations and duplications? Foresight is just one huge advantage. And yes, a designer can skip steps and make choices, champignon. What is your definition of intelligence?

  21. In any case, my point was not really to argue the validity of any of the points I raised with Petrushka and champignon (I expect them to disagree), but to show that these are valid points of argument. The form that Petrushka used in his response I think indicates that I have been successful, as he both raised a counter-argument and posed a question. I think it is impossible to have a really good discussion unless you can come to some agreement about where the areas of contention actually lie.

  22. groovamos,

    Yes mathematicians do write math books and yes engineering requires math.

    OK you have done countless simulations, but that ain’t what Gil is referring to - he is talking about simulations that allegedly simulate biological evolution.

    Have you written any of those types of simulations?

    Given a formally defined alphabet, syntax, semantics and control as a means to drive search for solutions towards improved utility, an algorithm is a formalism that performs optimisation of some kind or another. Nature does not solve, want, intend or choose anything. All of these are, observably, tasks that are formulated and solved by agents via what is called choice contingency as opposed to chance contingency or law-like necessity. For these tasks to be solved one needs not physical laws but rules arbitrarily defined and instantiated into (although totally independent of) physical reality. Failure to recognise that is a stopper to any discussion about the origins, and in particular discussions about free will. Given a deck of cards, all evolution can do is shuffle or remove some of them. It can never produce new cards, simply because it has neither purpose, nor intent, nor foresight, and operates within the a priori set bounds.

    Petrushka: The mere fact you can string together a paragraph in a single iteration is evidence that human designers can get around a disconnected function space. Or did you create the response above by beginning with a single letter or word and applying a process of single mutations and duplications?

    The problem of language production is a bit above my pay grade. Both Chomsky and B.F. Skinner wrote extensively about it and both failed to provide convincing analyses.

    It is one of the central problems in artificial intelligence, and an unsolved one.

    But I would argue that language is not conceptually more difficult to explain than the origin of any complex chain of behavior, including the famous chains of chemical behavior illustrated in the cell videos.

    Please explain what you mean by the claim that designers can “skip steps.” On an earlier thread there was a discussion of the invention of the light bulb, and many people seemed unaware that hundreds of years of incremental experimentation preceded the commercial bulb.

    Perhaps you will take up my challenge to produce the theory of protein design that doesn’t require any form of incremental evolution.

  25. Perhaps you should reread my post, and specifically refute my points which were well constructed, summarized:

    1. Gil says (in effect) here are the buzz words (some terms) regarding some things I do in my work.

    2. Gil refers to unidentified “engineering standards”.

    3. Gil says that Darwinists seem to know nothing of the above, thus are not capable of running simulations.

    4. I make several points as illustrations as to why Gil’s little corner of experience gives him no cause to say that people not formally schooled in engineering or hard sciences cannot run simulations. Especially since he is not formally trained as such himself.

    5. I show that I myself have run simulations which have nothing in common with Gil’s “protected categories” of buzz words. I also maintained that it is possible that a Darwinian can run simulations that may have merit that Gil would not be able to understand.

    6. I made no value judgement regarding simulations of biological evolution. Just for the record, I consider them bogus for the reasons which have been well covered in the I.D. blogosphere.

    Now please refute my points if you would.

  26. Here’s an article describing how chains of behavior can evolve. It’s not language, but as an experiment it’s pretty elegant.

    http://www.plosbiology.org/art.....io.1000292

  27. Petrushka: “Please explain what you mean by the claim that designers can “skip steps.” The point I am trying to convey is that designers are able to employ a number of tools not available to evolutionary algorithms, among them foresight, and the ability to create intermediate parts which have no purpose on their own until combined into a larger whole. In fact, the entire construction may be completely useless or non-functional until the last part is correctly installed.

    My larger point, however, was not to argue any of the 4 points I raised above, but merely to try to shift the discussion toward some agreement about what properties might exist (or not exist) that would allow you to judge the accuracy with which an EA models real-life biology. If you read most of the discussion above (and below), much of it has to do with whether this or that statement is a valid objection or not. If you were completely unbiased in this discussion, what properties would you come up with to evaluate EA’s as a model of reality? I gave my four suggestions, and tried to relate them to the larger debate. Do you see others? Do you at least agree that IF I were right about 1-4 then EA’s as we see them would NOT be a valid model?

  28. One more point. You said: “Perhaps you will take up my challenge to produce the theory of protein design that doesn’t require any form of incremental evolution.”

    That’s not my point. Design often requires or includes “evolutionary increments”. It’s just not limited to them.

    Prototypes are rarely a single “mutation” away from some previously existing device, but once you have a prototype it’s time to tinker.

  29. That seems pretty fair, SCheesman :) Thanks! Let me go through:

    I find it much more helpful to frame the issue in the following manner:

    Any evolutionary algorithm (EA) instantiated through a computer programme is necessarily searching a finite space of possible “solutions” defined by the parameters of that programme. This is no different than biology, as we see, being defined by a finite (though vastly larger) number of possible arrangements of DNA, and whatever higher-order structures encode the information necessary for the existence and replication of life.

    Not entirely sure about “finite”. In biology, and in some EAs, the evolving population becomes part of the environment, so solution-space itself is constantly changing (because so is the problem space). So in biology, something that gives a phenotype the edge in one generation may be totally inadequate several generations down the line. Also, some EAs are designed to respond to changes in input – learning algorithms, where the EA needs to respond to changed contingencies.

    The ability of the EA to move through that space to arrive at some “final” or “target” solution is dependent on a number of things which have or are expected to have direct analogs in real life; these are, chiefly:

    1) The “connectedness” of the “viable” solutions.
    2) A gradient in fitness exists between the solutions, such that one solution can be chosen among a number of alternatives.
    3) Means exist to escape local maxima in fitness space in order to find yet more advantageous solutions.
    4) The velocity at which the solution evolves is dependent upon the rate at which, from a given location, adjacent solutions can be tested and compared against the fitness of the current solution.

    Let me get this straight. If we represent fitness on a vertical scale (with higher fitness higher up) and phenotypic change on, say, two horizontal scales, for high points to be reachable it is important that they are not separated from other high points by lower intervening points (although narrow “ravines” and gentle downward slopes can sometimes be traversed). If such local maxima (high points surrounded by lower points) do exist, then the fitness landscape will be “unconnected” in those dimensions. However, the higher the dimension of the landscape, the more likely it is that some traversable connecting path will exist along some dimension (i.e. one without wide ravines or long downward slopes). So I think 1-3 are restatements of the same thing, really. 4 is more or less equivalent to the rate of drift. Yes?
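    The claim that local maxima become rarer as dimensionality rises can be illustrated with a toy Monte Carlo estimate. This sketch assumes an uncorrelated random landscape (a deliberate simplification; real fitness landscapes are correlated), in which a point with 2d neighbours is a local maximum with probability 1/(2d + 1):

```python
import random

random.seed(0)

def local_max_fraction(dim, trials=20000):
    # Draw a point and its 2*dim neighbours with i.i.d. random fitness
    # values, and count how often the point beats all its neighbours.
    count = 0
    for _ in range(trials):
        values = [random.random() for _ in range(2 * dim + 1)]
        if values[0] == max(values):
            count += 1
    return count / trials

for dim in (1, 3, 10, 50):
    print(dim, local_max_fraction(dim))  # the fraction falls as dimension rises
```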

    In any “successful” EA, I think you can show that the properties 1-4 exist; you can draw a road-map of solutions from the start to the end (connectedness), and the solutions produced are inevitably those possible for that programme. In fact you get from A to B as inevitably as water, flowing downhill, overcomes local obstacles. The rate and route can be adjusted by varying the fitness definition and gradients.

    Well, a “successful” EA, by definition, gets “inevitably” to a solution! If it didn’t, it wouldn’t be “successful”! So I’m not quite sure of your point here. However, I do agree with your water analogy, and I actually prefer to plot fitness landscapes upside down, with fitness “wells” or “sinks” rather than “peaks”. And yes, if the population is reasonably fluid (fair amount of drift), then it should work its way down any downhill slope into the “fitness wells” pretty well inevitably. However, it’s possible to make an EA that doesn’t always end up in the same well, if there is more than one.

    My opinion is that ID arguments saying EA’s are an inadequate and indeed misleading model of real evolution should be concerned with showing that in real biology points 1-4 are not nearly as favourably present, briefly:

    Yes, I think so, except that I’d say they are much more obviously present in biology than in GAs! Because biology is far higher-dimensioned than any human-built GA.

    1) Solution space is not well-connected. This is basically the argument of irreducible complexity at one level, the argument of the sparseness of viable folded proteins among the combinations of DNA on another – that there is no way to get from “A” to “B” in tiny steps, and only intelligence can make the jump.

    Yes, that’s the argument.

    2) ID generally argues that the gradient is not nearly so powerful in generating novelty as it is generally attributed. It is not even always able to operate as efficiently as might be expected, even if a path can be shown to exist. Hence the knock-out experiments which introduce a simple deleterious mutation and see if an organism can recover it’s original functionality.

    I think there is an analogy glitch here. I don’t think “the gradient” is what “generates novelty”. Perhaps there’s a factor missing from your list. Imagine the version of the landscape image where fitness lies in wells, and the population flows down to meet it. If there are barriers in the way (ridges; long rising hillsides), the population will be trapped behind them. However, if there are merely flat plains in the way, what may slow things down (what I thought you were getting at with your rate of testing) is what you might regard as “viscosity”. If the population doesn’t change much, it will stick in a lump on the plain. However, if there is lots of novelty, in the form of genetic variation being created, then it will tend to “spread out” over the plain, by drift, and eventually find a sink. In other words, it’s not the gradient that generates variety – the gradient is what is also called “natural selection”, and it actually reduces variety, it doesn’t increase it. What generates variety is the “RM” part, in old parlance – the degree to which neutral variants drift through the population, bringing parts of it, as it were, to the lip of downward slopes.
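    The division of labour described here, with variety generated by mutation and drift and consumed by selection, can be sketched in a toy simulation (all parameter values are arbitrary illustrations):

```python
import random
import statistics

random.seed(2)

def next_generation(pop, mutation_sd, select):
    # Truncation selection keeps the fitter half (fitness = trait value)
    # and refills the population, which *reduces* variety; mutation then
    # adds fresh variety to every individual.
    if select:
        survivors = sorted(pop)[len(pop) // 2:]
        pop = survivors * 2
    return [x + random.gauss(0, mutation_sd) for x in pop]

pop_drift = [0.0] * 200   # mutation only: neutral drift
pop_select = [0.0] * 200  # mutation plus selection
for _ in range(100):
    pop_drift = next_generation(pop_drift, 0.1, select=False)
    pop_select = next_generation(pop_select, 0.1, select=True)

# Drift accumulates variance; selection holds it down:
print(statistics.variance(pop_drift), statistics.variance(pop_select))
```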

    3) ID argues that biology, as we observe it, is “trapped” in the vicinity of existing fitness maxima. Everywhere we see optimized systems. Even in cases (such as bacteriological resistance) where there is some movement away from the “norm” it is achieved due to degradation and loss of function, and only in the face of a severe stress that upsets the current optimization. Remove the stress, and the “solution” tends to move back toward the original maximum. Nothing really new or useful has been created.

    Well, “degradation and loss of function” are concepts somewhat alien to your nice model. Where do they fit? What they would seem to mean is that once you’ve embarked on the journey down towards a fitness well, you can’t easily climb back up. But that’s OK, isn’t it? But then you say it can. I think you may be confusing phenotype with population here. The population can “explore” a temporary fitness sink. If it gets too bedded in (no longer interbreeds with the rest of the population, becomes, in fact, a separate species), and the fitness sink vanishes (fills in?) then you may get extinction. If it doesn’t, then, as you say, it can go back up.

    The individuals can’t, but we are talking about population movement here, not individuals.

    And this exactly matches observation. Highly specialised populations (Giant pandas?) are highly vulnerable to habitat change, whereas generalisers, like rats, are pretty invulnerable.

    4) The rate-determining step in evolution in biology is the rate of single and (much less likely) double mutations. Here, again Behe’s “Edge of Evolution” sums up this idea, using resistance to anti-malarial drugs as one of a number of examples.

    Single mutations yes. Double mutations are irrelevant, if we consider drift.

    The evolutionist’s response, quite reasonably, is to try to demonstrate that these objections are overstated or inconsequential.

    ta-daa!!!!

    I hope that is fair-minded explanation of the problem — Elizabeth?

    Very :)

    Cheers

    Lizzie

  30. The point I am trying to convey is that designers are able to employ a number of tools not available to evolutionary algorithms, among them foresight, and the ability to create intermediate parts which have no purpose on their own until combined into a larger whole.

    Give me an example of foresight. I know that seems obvious, but I don’t find it obvious. It gets less and less obvious as you move farther away from copying with modification.

    I’m not trying to be difficult. I just think that words are tossed around in this discussion without much thought being given to their implications. In particular I see the word design being used without any thought being given to what designers do.

    I’d say both “foresight” and “side sight”. What human designers can do is to simulate before execution (foresight) and not bother with things that obviously won’t work, and also bring in solutions from other “design lineages” (“side sight”), like adding an engine to a carriage, or a computer chip to a washing machine.

    Evolution has to execute all the intermediate steps (can’t simulate), and can’t transfer solutions from one lineage to another.

  32. If such local maxima (high points surrounded by lower points) do exist, then the fitness landscape will be “unconnected” in those dimensions. However, the higher the dimension of the landscape, the more likely it is that some traversable connecting path will exist along some dimension…

    This is precisely the point I’ve tried to make with gpuccio, when I say that natural selection is more powerful than directed evolution.

    It is not unrelated to the point Adam Smith tried to make regarding the robustness of market economies vs command economies.

    One never knows in advance where utility will arise, and your search is more likely to fail if your targets are narrow and your direction is one-dimensional.

  33. Human invention is certainly faster in some ways than biological invention. Brains embody a form of evolution that learns more quickly than populations.

    But I think the concept of foresight is fuzzy to the point of being useless. Just examine what you wrote about avoiding things that don’t work, and think about how many inventors have credited success to ignoring conventional ideas about what doesn’t work. Then consider the percentage of inventions that actually last and give rise to new species of inventions. The percentage is pretty low. Most new things don’t work.

    Even with foresight, from the macro viewpoint, invention is cut and try.

  34. Yes indeed. That’s why evolutionary algorithms can be so powerful of course – they explore solutions that a human designer would dismiss as dead ends.

    It’s also why I’ve been saying for years that evolutionary processes are pretty intelligent. What makes them different from human brain processes (foresight/intention) isn’t even all it’s cracked up to be. Saves time, but if you have a vast number of iterations at your disposal, that doesn’t matter, and what you lose in iterations you gain in creativity.

    So it’s not surprising that biological systems look intelligently designed. In that sense, they are.

  35. Prototypes are rarely a single “mutation” away from some previously existing device, but once you have a prototype it’s time to tinker.

    Then it shouldn’t be difficult to provide examples of inventions that are not incremental.

  36. Double mutations are irrelevant, if we consider drift.

    And, indeed, sex (if double-mutation means change in two separate loci). Sex is distributed processing – the whole population is ‘working on’ solutions. Individually neutral or recessive mutations must drift, but however they travel, when two complementary ‘solutions’ meet … kaboom. Or rather, they each gain a selective boost by their mutual effect, even though recombination can act to break them up again – such broken-up combinations are simply ‘wild-type’. The result of these selective boosts, even in the presence of disruptive recombination, is to increase frequencies such that the ‘favoured’ combination occurs more and more often, and breaks up less and less (the commoner the alleles, the more likely that recombination will have no effect). Each time combination occurs, both alleles get extra tickets in the ‘lottery’. The end result looks fortuitous, but does not require serial fixation.
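The “two complementary solutions meet” step can be illustrated with a toy single-crossover sketch (the genomes, loci, and crossover point below are invented purely for illustration):

```python
# Toy single-crossover recombination: two individually neutral mutations,
# carried by different parents, end up combined in one offspring genome.

def recombine(parent_a, parent_b, crossover_point):
    """Offspring takes parent_a's genome up to the crossover, parent_b's after."""
    return parent_a[:crossover_point] + parent_b[crossover_point:]

parent_a = ["A", "a", "a", "a"]  # neutral mutation 'A' at locus 0
parent_b = ["a", "a", "B", "a"]  # neutral mutation 'B' at locus 2

child = recombine(parent_a, parent_b, 2)
print(child)  # ['A', 'a', 'B', 'a'] -- both 'solutions' now in one genome
```

Once the A+B combination exists in a single genome, selection can favour the pair jointly, even though neither mutation was favoured on its own.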

  37. SC:

    You are right on.

    When someone appeals to infinite resources, by direct implication, we know the jig is up!

    Just for fun:

    10^80 baryons [fr astronomy] * 10^25 s [th/d lifespan, about 50 mn times 13.7 BY] * 10^45 PTQS/s [rounded up] ~ 10^150 states of the particles (“atoms”) in the cosmos we observe across its thermodynamic lifespan. (Where BTW fastest Chem rxns are ~ 10^30 PTQS’s. This is of course where Dembski’s number comes from, where also 2^500 ~ 3.27*10^150.)

    Just 1,000 bits have 1.07*10^301 possible configs:

    b1-b2-b3- . . . b1000

    That is, the observed universe, acting as a monkey at the keyboard, could not search 1 in 10^150th of the possibilities.
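For what it’s worth, the arithmetic quoted above checks out (a quick Python sketch; this verifies only the numbers, not the argument built on them):

```python
# Verify the quoted exponents with arbitrary-precision integers.
states = 10**80 * 10**25 * 10**45   # baryons * seconds * states/sec
assert states == 10**150

print(format(2**500, ".2e"))        # 3.27e+150 (the quoted bound)
print(format(2**1000, ".2e"))       # 1.07e+301 (1,000-bit config space)
```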

    So, a sample is a fairly skinny zero of the field of possibilities.

    If that sample is blind/random and/or blindly mechanical (i.e. nature here does not act as a purposeful algorithm), sampling theory tells us that with maximum likelihood, we will pick up only the bulk, dominant feature of the config space, i.e. gibberish.

    How do we know the bulk will be gibberish?

    Easy: multipart, specific functionality demands well-matched and organised parts, like letter-strings in this post. And, a 3-D complex combination is reducible to a nodes-arcs wiring diagram, thence, strings structured according to a further specification. That is, a language.

    The requisites of specific function rule out most of the space. We can easily see that for text, or for say a motor or an indicating instrument (which is a specialised motor.)

    But doesn’t self replication get us away from that and create a CONNECTED CONTINENT of possibilities?

    Nope, as the relevant facility has to store the wiring diagram and component-specifying rules in a data structure, and has to set up a processing facility to implement the algorithms and stored data. It is an additional bit of irreducibly complex info.

    At macro level, complex components (e.g. the avian lung in life) do not have close, incrementally connected intermediates.

    So, the entire theory, and its precursor to get to first life, is based on something that is all but zero possibility absolutely, and in operational terms is tantamount to zero. Unobservable.

    But, of course, to the indoctrinated, the above MUST be false, and sounds like “assertions.”

    Nope, it is easily empirically confirmed, just look all around. It is also analytically reasonable.

    The best explanation of a moving coil meter is a D’Arsonval. The best explanation of the cockpit panel and the 747 in which we find it is a Boeing. The best explanation of the vNSR in the living cell is a designer of cell based life, and the best explanation of an Avian lung etc etc is a designer of body plans.

    None of which is acceptable to the evolutionary materialist establishment.

    But, bit by bit, as we move ever deeper into the information era, it will be clearer and clearer that their frame of thought has crashed and burned.

    But, that establishment, and those who look to them to shed light, will be the last to realise it.

    Then, they will try to ride the tide of chaotic change to land on their feet and come out on top yet again. They may even reformulate to blunt the force of the collapse.

    That is what the marxist apparatchiks did.

    Good day,

    GEM of TKI

    PS: You may want to read here on, where it has been laid out in summary, in steps.

  38. PPS: I suspect many objectors, at root, don’t really believe in a freely thinking intelligent mind. Which is self-referentially incoherent. D S Robertson aptly sums up:

    AIT and free will are deeply interrelated for a very simple reason: Information is itself central to the problem of free will. The basic problem concerning the relation between AIT and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. In other words, the quantity of information output from any formal mathematical operation or from any computer operation is always less than or equal to the quantity of information that was put in at the beginning. Yet free will appears to create new information in precisely the manner that is forbidden to mathematics and to computers by AIT.

    The nature of free will—the ability of humans to make decisions and act on those decisions—has been debated for millennia. Many have disputed even the very existence of free will. The idea that human free will cannot oppose the decrees made by the gods or the three Fates is a concept that underlies much of Greek tragedy. Yet our entire moral system and social structure (and, I would venture to guess, every moral system and social structure ever devised by humankind) are predicated on the existence of free will. There would be no reason to prosecute a criminal, discipline a child, or applaud a work of genius if free will did not exist. As Kant put it: “There is no ‘ought’ without a ‘can’ ” [4, p. 106].

    The Newtonian revolution provided an even stronger challenge to the concept of free will than the Greek idea of fate, which at least allowed free will to the Fates or the gods. The Newtonian universe was perfectly deterministic. It was commonly described in terms of a colossal clockwork mechanism that, once it was wound up and set ticking, would operate in a perfectly predictable fashion for all time. In other words, as Laplace famously noted, if you knew the position and velocities of all the particles in a Newtonian universe, as well as all the forces that act on those particles, then you could calculate the entire future and the past of everything in the universe . . . .

    Of course, a deterministic universe could produce an illusion of free will. But for this discussion I am not interested in illusory free will.

    Around the beginning of the 20th century, the development of quantum mechanics seemed to provide a way out of the Newtonian conundrum. The physical universe was found to be random rather than perfectly deterministic in its detailed behavior. Thus although the probability of the behavior of quantum particles could still be calculated deterministically, their actual behavior could not. Physicists as prominent as Sir Arthur Eddington argued that random quantum phenomena could provide mechanisms that would allow the operation of free will [4, p. 106]. Eddington’s ideas are not universally held today. A perfectly random, quantum universe no more allows free will than does a perfectly deterministic one. A “free will” whose decisions are determined by a random coin toss is just as illusory as one that may appear to exist in a deterministic universe.

  39. Who appealed to infinite resources?

  40. GV:

    By definition, a sim is not reality, though its ops may mimic it and give rise to correct outputs. Op amps are wonderful!

    As prof N long ago taught us, for implication-based systems, ex falso quodlibet. False premises do not guarantee true conclusions.

    Models have to be validated and cross checked against experienced reality. In effect, they are useful theories that may be empirically reliable in a domain or case, but do not fool yourself that they are reality.

    Reality is real, that is why we need direct empirical test if we can get it.

    Which is exactly where the key breakdowns are.

    GAs etc may model what happens within islands of function, as discussed above, but they don’t put us there from arbitrary initial configs, without a lot of intelligent input.

    GEM of TKI

  41. GAs are not necessarily simulations, kf. They are used to produce actual products.

    And yes, they often do start from “arbitrary initial configs”.

    For instance in the learning algorithms I write, I usually start off with the parameters set to give a result that is no better than guesswork, and it learns from there.

    When used to “simulate” something, the test of the model is a test of the output of the model against actual data. If the model output is a good match for the output of the process being modeled, we can conclude that the model is a good one.

  42. For record: Onlookers, kindly search for the phrase and context of: “Not entirely sure about “finite” . . . ” above. KF

  43. OK, how about the light bulb? Minimum components: enclosure, vacuum pump, filament, electrical supply. Add to that configuration information and construction order (not necessarily the same things). This is not necessarily an “irreducibly complex” recipe, but removing, say, the vacuum requirement shortens the life of the filament so much that, short of adding additional control systems on the power supply to prevent this by dropping the voltage, it’s questionable that you have a functional bulb.

    Note that configuration and construction are as important as the parts list. Given all the parts, a light bulb does not assemble itself.

  44. Thank you, Elizabeth. Once you can split an issue into parts, it’s much easier to talk about it by focussing on each in turn. I have ideas, counters, explanations (and often agreements) for many of your points, but this thread really isn’t the place for them. In fact each of those points, I’m sure, could be split into several threads much more productively. My main hope was to get over the “that’s not even relevant” objection!

  45. Finite and dynamic are two different concepts. The number of parameter combinations is a Cartesian product of n relevant sets of parameters. E.g. for X = {x1, x2} and Y = {y1, y2}, the Cartesian product X × Y = {(x1, y1), (x2, y1), (x1, y2), (x2, y2)}. Consequently, on finite domains, it is finite.
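The same enumeration can be reproduced with Python’s itertools (the iteration order differs from the listing above, but the set of combinations is identical):

```python
from itertools import product

X = ["x1", "x2"]
Y = ["y1", "y2"]

# Cartesian product: every combination of one element from each set.
combos = set(product(X, Y))
print(sorted(combos))
# [('x1', 'y1'), ('x1', 'y2'), ('x2', 'y1'), ('x2', 'y2')]

# On finite domains the product is finite: |X x Y| = |X| * |Y|.
assert len(combos) == len(X) * len(Y)
```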

  46. OK, please reference ONE Darwinist simulation that correctly simulates biological evolution. That is what Gil is referring to. Oh wait, you think they are bogus also, just as Gil does.

    Strange…

  47. Petrushka,

    The question is what drives search towards increased utility. Nature does not care about utility, does it? I think evolutionist models suffer from making evolution anthropomorphic. Without agency nothing will choose anything. Algorithms are formal. Physical reality is not. I doubt very much that NS or RV or their combination of any sort are capable of driving anything anywhere. Function, because it is a formal concept, testifies to choice contingency. There is a conceptual chasm between spontaneous redundant low-info regularity at the physical level and systems exhibiting control, formalism and meaning.

  48. Should of course read “the set of parameter combinations”.

  49. Hi Elizabeth, SCheesman and Petrushka,

    You might like to have a look at my latest post, at http://www.uncommondescent.com.....nt-design/

    as it overlaps in content with some of the points you’ve been discussing.

    Petrushka, you maintain that natural selection is more powerful than directed evolution, and you compare this insight to Adam Smith’s point about the robustness of market economies vs command economies. I’m dubious of the analogy between choices made by agents and supposedly random events (mutations in neo-Darwinian evolution).

    You might like to have a look at this earlier post of mine:

    http://www.uncommondescent.com.....evolution/

    In this post, Darwinian mathematician Gregory Chaitin acknowledges that as far as we know, Intelligent Design is much faster than Darwinian evolution (order N vs. order N squared), although the latter is far, far more rapid than exhaustive search (order 2 to the N). Of these three kinds of evolution, Intelligent Design is the only one guaranteed to get the job done on time.

    I also see no reason why the direction of Intelligent Design would need to be one-dimensional. Indeed, I would expect it to be multi-dimensional.

  50. Strictly speaking, O(N^2) was an upper bound estimate proposed by Chaitin. If it had been the lower bound, then yes, we would have waved good-bye to Darwinian evolution under the time constraints of 5 bln years. However, Chaitin acknowledges that O(N^2) is rather a lot. I don’t personally think him a Darwinian mathematician. In what I saw him write he just had nothing against it. On the other hand, I also saw pointers to materialist criticisms of some of his (for want of a better word) not-terribly-materialistic conclusions from his own algorithmic complexity theory. Anyway, it is my personal impression. But whatever his position, I hold him in high esteem as a true scientist.

  51. There we go. #8.2 by Dr Liddle. I responded to it earlier on in #8.2.4. Anyone can compare posts #8.2 and #11.1. Shocking.

  52. Ah. Thanks.

    But that was not “an appeal to infinite resources” kf. Did you not read the context?

    It was merely a comment that in a search algorithm, the solution space may not be static.

    Nothing “shocking” there, Eugene.

  53. Interesting. If you cannot simulate biology, how do you design?

    Can you point to any technology more advanced than pottery making that does not involve simulations and model building in the design process?

  54. Hello vjtorley

    Yes, I noticed your post covered some common ground. Lots of interesting facts and points.

  55. I doubt very much that NS or RV or their combination of any sort are capable of driving anything anywhere.

    Sorry to butt in, but against the intuition underlying this doubt I would offer the mechanism underlying both NS and drift. The essence is sampling. Each generation loses some of the variation in the prior population. This must be so because organisms produce variable numbers of offspring. Overrepresentation for some means underrepresentation for others. It is readily demonstrable mathematically. Successive generations iteratively lose more and more variation. The inevitable result of this is loss of ALL variation. So in that sense, NS and drift drive populations towards fixed points, at which points evolution would stop. But what acts against this tendency is the input of new variants through mutation. Mutations get fixed by exactly the same process as described. Any neutral sequence has probability 1/N (total population size) of being the sole sequence at its locus following fixation. Each fixation removes all trace of the ancestral allele. The picture is more complicated for a selective differential, and the existence of thousands of loci, but the underlying process is the same.

    With this mutation-fixation process in operation, at the background of all generations, populations will wander away from any point in the space, even if selection is acting on some loci to keep them at their current location. Selection acts differently at each locus, and is conditioned by what is going on at every other locus, as well as environmentally. If selection shifts or diminishes, an anchored locus will shift. Having wandered, a sequence is probabilistically almost certain not to return to that same place.

    The searchability of the space is a different matter, conditioned by the rules by which ‘well-formed’ strings are evaluated, but the role of the population sampling process is highly significant, and the result – a play-off between addition of and loss of variation – generates a memoryless random walk, effectively ‘driving’ the population everywhere but its current location.
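The iterative sampling loss described above can be sketched as a minimal neutral Wright-Fisher simulation (population size, generation cap, and seed are illustrative assumptions, not anything specified in the comment):

```python
import random

def wright_fisher(pop_size=100, max_gens=10_000, seed=1):
    """Neutral drift at one biallelic locus: each generation resamples
    alleles, with replacement, from the previous one. Variation is lost
    generation by generation until one allele fixes."""
    random.seed(seed)
    count = pop_size // 2            # allele 'A' starts at frequency 0.5
    gen = 0
    while 0 < count < pop_size and gen < max_gens:
        p = count / pop_size
        # each offspring independently inherits 'A' with probability p
        count = sum(random.random() < p for _ in range(pop_size))
        gen += 1
    return count, gen

final, gens = wright_fisher()
print(f"allele 'A' ended at {final}/100 after {gens} generations")
```

With no selection at all, the run ends at a count of 0 or 100: drift alone removes all variation, exactly as described, and mutation is what keeps re-supplying it.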

  56. It appears that the theme of my post has been misinterpreted by some. This is perhaps my fault for not being sufficiently explicit.

    It is not my assertion that genetic algorithms are worthless. In fact, they can be a very useful computational tool for finding approximate solutions to difficult problems that defy brute-force search (e.g., the traveling salesman problem).

    What I’m objecting to is the fact that a program like Avida can be approved for publication in a prestigious international science journal with the claim that it validates the creative power of the Darwinian mechanism (random errors filtered by natural selection) in the history of biological evolution — when such a claim has no empirical or even rational justification.

    There is no way such an unjustified extrapolation would be accepted or even taken seriously in the computer-simulation world in which I operate.
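The legitimate use conceded above (a GA as an approximate optimiser for problems like the traveling salesman) can be sketched in a few lines; every parameter below, from the 12 random cities to the operator choices, is an illustrative assumption:

```python
import math
import random

# Minimal GA sketch for a small traveling-salesman instance:
# truncation selection, order crossover, occasional swap mutation.

def tour_length(tour, cities):
    """Total length of the closed tour visiting every city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(a, b):
    """Copy a random slice from parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    return child + [city for city in b if city not in child]

def swap_mutation(tour, rate=0.2):
    tour = list(tour)
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def evolve(cities, pop_size=50, generations=200):
    n = len(cities)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, cities))
        elite = pop[:pop_size // 2]          # keep the better half
        pop = elite + [swap_mutation(order_crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda t: tour_length(t, cities))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]
best = evolve(cities)
print("best tour is a permutation:", sorted(best) == list(range(12)))
print("best tour length:", round(tour_length(best, cities), 3))
```

Nothing here bears on the biological claim; it only shows the uncontested engineering use of the algorithm.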

  57. Gil,

    I think your post was quite clear, and also quite clearly wrong, which is why commenters have lodged so many objections against its content.

    Perhaps you could address those, including the one I raised in the very first comment on this thread:

    Gil,

    A thorough understanding of FEA, PR, E, Ro, and EOS is a prerequisite for any computer-simulationist who hopes to have any confidence that his computer simulation will have any validity concerning the real world…

    Seriously? You think that someone modeling an optical system needs to worry about Young’s modulus? Do you take color into account when you’re simulating a mechanical system? Of course not.

  58. It is shocking, Dr Liddle. You conflated dynamic with finite, which I pointed to. I think I understood the context well enough. How else should I or anyone else interpret your posts 8.2 and 11.1 which go one after the other?

  59. In my opinion, an engineer must be aware of lots of things. And by the way, when modelling a real optical system, I do believe one has to have an understanding of the mechanical properties of lenses. Many mechanical systems must also be thermally insulated, therefore color considerations may also be relevant.

  60. I did not “conflate dynamic with finite”. I made a fairly simple and tangential point. I did not “appeal to infinite resources” as kf alleges.

    I have no idea how you and kf are interpreting my posts. We seem to be reading different languages.

    I’m at a loss.


  62. But it does have both empirical and rational justification, Gil.

    I’m not sure what you are not seeing here.

  63. Hi Elizabeth,

    I think I see the source of Eugene’s and KF’s confusion. When SCheesman mentioned a finite solution space, you commented:

    Not entirely sure about “finite”. In biology, and in some EAs, the evolving population becomes part of the environment, so solution-space itself is constantly changing (because so is the problem space).

    Your point was that the solution space could grow without bound as the population evolved, but KF and Eugene seem to think that you were claiming that the search could completely cover an infinite solution space, which would of course require infinite resources.

    Sloppy thinking on their part, since that would amount to an exhaustive search, when the whole point of using an EA is to avoid an exhaustive search.

  64. Good thinking, Champignon. Thanks.

    Does that make more sense now, kf and Eugene?

  65. Guys,

    Variable solution space and infinite solution space are two different things, which is especially important in this thread. As to sloppy thinking, I read what is written. If it is not clearly written, it is not my fault.

  66. But the universal creative power of the Darwinian mechanism in biology has neither empirical nor rational justification, Liz.

    I’m not sure what you are not seeing here.

  67. This is the closest I’ve seen on UD to “neener neener” or I’m like rubber you’re like glue.

    I’m still wondering what the connection between tensile strength and elastic and viscous moduli and Darwinian evolution is.

    Please, demonstrate how to treat Young’s modulus in the change of allele frequencies in a population.

  68. What I’m not seeing is your argument!

    Leaving aside Young’s modulus, which does seem to be a red herring, you say:

    What I’m objecting to is the fact that a program like Avida can be approved for publication in a prestigious international science journal with the claim that it validates the creative power of the Darwinian mechanism (random errors filtered by natural selection) in the history of biological evolution — when such a claim has no empirical or even rational justification.

    First of all, what AVIDA shows (or what that paper showed) was two important things:

    Firstly (already known) that Darwinian mechanisms can create novel solutions to problems (in this case, algorithms that perform logic functions).

    Secondly, the authors showed that even when functions were Irreducibly Complex, they still evolved.

    AVIDA is a model of Darwinian evolution (theoretical) that empirically (see the results) demonstrates the creative power of the Darwinian algorithm, and the fallacy of the Irreducible Complexity counter-argument to it.

    It doesn’t, obviously, show that life evolved by Darwinian mechanisms, but it does show that the IC argument that it can’t have done is fallacious.

Leave a Reply