
Evolutionist: Our Best Defense Against Anti-Science Obscurantism

Evolutionists say undirected, random events, such as mutations, accumulated to create the entire biological world. An analogy once used for this claim is that of a room full of monkeys pounding away at typewriters and producing Hamlet. Today the analogy needs to be updated from typewriters to computer keyboards, but otherwise remains apropos. When the letters are selected at random, a page (or screen) full of text is going to be meaningless. And the problem is no easier in the biological world. Whether English prose or molecular sequences, the problem is that there are relatively few meaningful sequences in an astronomically large volume of possibilities. Nor does selection help because the smallest sequence that could be selected—such as a small gene—is not very small. All of this is rather intuitive and for centuries evolutionists have been trying to solve the problem. Their latest solution is being called natural genetic engineering.


99 Responses to Evolutionist: Our Best Defense Against Anti-Science Obscurantism

  1. Whether English prose or molecular sequences, the problem is that there are relatively few meaningful sequences in an astronomically large volume of possibilities.

    Nor anywhere near the time nor resources that would be needed to search the realm of possibilities.

  2. It seems we have a case of Ayala’ing by the CreationWiki senior editor and founder.

    Creation Wiki editing debate @ http://knownquantity.wordpress.com/

    Let’s hope that, once confronted with the truth, he will restore the edits. I post this as a heads-up about the wiki, so you know that, based on the clear evidence presented, its editing policy does not seem consistent or fair.

  3. An analogy once used for this claim is that of a room full of monkeys pounding away at typewriters and producing Hamlet. Today the analogy needs to be updated from typewriters to computer keyboards, but otherwise remains apropos.

    What parts of that analogy represent inheritance and selection? As far as I can see the monkeys at typewriters analogy is just an example of a random number/symbol generator.

    Evolution has a number of non-random mechanisms at work, like reproduction with inheritance and variable reproduction rates in a competitive environment (reproduction and selection).

    Monkeys at typewriters are just monkeys at typewriters. I don’t know any biologists who would take this claim seriously, it just indicates a total failure to understand the basics of evolution.
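    The distinction drawn above, pure random typing versus random variation plus inheritance and selection, is the point of Dawkins’ well-known “weasel” toy program. A minimal sketch of that idea follows; the explicit target phrase and all names here are artificialities of the toy, not anyone’s published code or a claim about biology.

```python
import random

# A toy sketch (in the spirit of Dawkins' "weasel" program) of random
# variation PLUS inheritance and selection.  The explicit target phrase
# is an artificiality of the toy, not a claim about biology.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    """Count positions that match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng, rate=0.05):
    """Inheritance with occasional random copying errors."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def evolve(pop_size=100, max_gens=10_000, seed=0):
    """Return the generation at which the target is reached, else None."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for generation in range(1, max_gens + 1):
        children = [mutate(parent, rng) for _ in range(pop_size)]
        parent = max(children, key=score)   # selection: best child breeds
        if score(parent) == len(TARGET):
            return generation
    return None

print(evolve())  # reaches the target in a modest number of generations
```

    Remove the selection step (draw a fresh random string each generation instead of keeping the best child) and the same run gets nowhere, which is the commenter’s point: the monkeys at typewriters model only the mutation step.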

  4. Elizabeth Liddle

    Whether English prose or molecular sequences, the problem is that there are relatively few meaningful sequences in an astronomically large volume of possibilities. Nor does selection help because the smallest sequence that could be selected—such as a small gene—is not very small.

    A couple of additional points:

    I agree the proportion of “meaningful” sequences to “possible” sequences is important, but there is no good reason (and a fair bit of reason not to) to assume that the proportions are similar for English text and DNA.

    Secondly, selection can, and does, work well below the level of the gene, which is one of the reasons why alleles vary in frequency. Alleles with single nucleotide substitutions can result in selectable phenotypes.

  5. Author: “When the letters are selected at random, a page (or screen) full of text is going to be meaningless.”

    It is not only that. More importantly, there must also be an “agent” who decides the semantics, as is true in any other semantic information processing system. Without this, all is gibberish, even Hamlet.

  6. Elizabeth objects:

    ‘I agree the proportion of “meaningful” sequences to “possible” sequences, is important, but there is no good reason (and a fair bit of reason not) to assume that the proportions are similar for English text and DNA.’

    But does the objection have merit? After decades of research, the answer is a resounding no: the objection has no merit.

    Stephen Meyer – Functional Proteins And Information For Body Plans – video
    http://www.metacafe.com/watch/4050681/

  7. Dr BOT:

    Before you can get to the possibility of selection on differential functionality, you have to first get to an island of function. Or rather, the implied selections will all fail.

    This is critical.

    For, the numerical considerations tell us that with just 143 ASCII characters’ worth of text (one full-length tweet) the search resources of the observed cosmos would be hopelessly inadequate.

    On the evidence we have in hand, the first viable, metabolising and self-replicating body plans will require 100 to 1,000 kilobits of information, two to three orders of magnitude beyond the CSI limit.

    Novel body plans will require 10+ million bits of such information, dozens of times over.

    So, we are well warranted to raise the infinite monkeys, needle in the haystack challenge. It does not start with a jumbo jet being formed in a junkyard by a tornado; it starts at about the level of forming one of the instruments on its instrument panel, or, just try to get the monkeys to type out one paragraph from the operation manual.

    There is no easy step by step process from “See Spot run” to such a manual, and there is no credible, observed easy small incremental functional step path from a Hello World program to an operating system.

    All the evidence strongly points to deeply isolated islands of function.

    So, the analysis applies, despite the dismissive talking points that try to brush it aside.

    If you or anyone else wishes to overturn it, simply produce an observed case where chance and necessity without intelligent direction, produces FSCI.

    And, GA’s are all intelligently designed, targeted searches preloaded with active information crucial to their success and working within islands of function, so they are irrelevant. Indeed, they are examples of how it takes intelligence to generate FSCI.

    GEM of TKI

  8. Evolutionists say undirected, random events, such as mutations, accumulated to create the entire biological world.

    And ID says that not all mutations are undirected, random events.

  9. DrBot:

    Evolution has a number of non-random mechanisms at work like reproduction with inheritance and variable reproduction rates in a competitive environment (Reproduction and selection)

    What is the evidence that those are non-random?

  10. Elizabeth Liddle

    Depends on your definition of “random” (not an easy concept to define, in fact).

    Selection events are of course stochastic (whether you survive to breed can depend on many factors other than your inheritance, as does whether a lucky inheritance actually happens to come in handy for a given individual), but selection itself is “non-random” in the most common sense, by definition. A trait is called selected if it biases the probability that you will survive to breed.

    A dark peppered moth may well meet its end in a candle-flame, and its dark-colour may have nothing to do with its fate. Similarly, another dark peppered moth may breed magnificently, even though it lives on a silver-birch, because it just happened to get lucky.

    That doesn’t mean that statistically, the darker moths won’t have a better chance of breeding, if being spotted by a predator on a sooty tree is a major hazard for peppered moths.

    So while the probability of survival, for a moth, may be stochastic, the biasing effect of survival of camouflage remains an important factor in the prevalence of dark-moths in each generation, in other words in the (micro-)evolution of the population.

    That’s the sense in which selection is often described as “non-random”.

    In contrast, mutations are often described as “random”. But this is also misleading, in the opposite sense. Mutations also have a non-flat probability distribution, and it may well be that the kind of processes that tend to result in the kind of mutations that tend to be non-catastrophic have themselves been subject to selection (indeed there is evidence that this may be the case). In addition, not all mutations are going to be equiprobable, because of fundamental physical and chemical laws.

    Incidentally, I think the issue of “randomness” is one of the many that have “Darwinists” and “IDists” talking past each other, so it’s worth sorting it out, I think.

    Dawkins BTW often gets the issue extremely muddled!
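    The stochastic-but-biased point above can be made concrete with a toy simulation of the peppered-moth example. The survival probabilities below are invented for illustration, not measured values, and the breeding model is deliberately crude.

```python
import random

# A toy model of the peppered-moth example: each individual fate is
# stochastic (any one dark moth may die at a candle flame), yet a biased
# survival probability still shifts the population.  The probabilities
# are invented for illustration only.

def next_generation(n_dark, n_light, rng, p_dark=0.6, p_light=0.4):
    """One round of stochastic survival, then breeding back to size."""
    survive_dark = sum(rng.random() < p_dark for _ in range(n_dark))
    survive_light = sum(rng.random() < p_light for _ in range(n_light))
    total = survive_dark + survive_light
    if total == 0:
        return n_dark, n_light  # everyone died; nothing to breed from
    size = n_dark + n_light
    dark_frac = survive_dark / total
    # Offspring inherit a surviving parent's morph at random.
    new_dark = sum(rng.random() < dark_frac for _ in range(size))
    return new_dark, size - new_dark

rng = random.Random(1)
dark, light = 500, 500
for _ in range(30):
    dark, light = next_generation(dark, light, rng)
print(dark / (dark + light))  # the dark morph comes to dominate
```

    Any single run of any single moth is a coin flip, but the bias in the coin is what “non-random” means here.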

  11. Before you can get to the possibility of selection on differential functionality, you have to first get to an island of function. Or rather, the implied selections will all be fails.

    Cornelius stated:

    Evolutionists say undirected, random events, such as mutations, accumulated to create the entire biological world. An analogy once used for this claim is that of a room full of monkeys pounding away at typewriters and producing Hamlet.

    Cornelius is referring to evolution, not biogenesis, as was I. You are referring to biogenesis, which is not evolution, and not the topic of discussion.

    there is no credible, observed easy small incremental functional step path from a Hello World program to an operating system.

    Quite right.

    All the evidence strongly points to deeply isolated islands of function.

    In computer software, yes. How about electronics?

    And, GA’s are all intelligently designed, targetted searches preloaded with active information crucial tot heir success and working within islands of function, so they are irrelevant. Indeed, they are examples of how it takes intelligence to generate FSCI.

    1 -> By definition, if there is nothing one could call a target (either in the problem or solution domains) then it isn’t a search – how can you perform a search if there is nothing to find? More generally, a targeted search refers to a situation where a specific target has been specified in advance. Many GA’s use fitness heuristics that change during evolution, or where there exist many unknown possible solutions. In these cases there are no explicit targets, just a set of variable criteria for measuring the effectiveness of a solution. ‘Targeted search’ in the context of a GA has a quite specific meaning, but yes, you can ignore that specific terminology and the word targeted will apply to any search.

    2 -> GA’s are designed. Could biological evolution be the product of design?

    3 -> GA’s mine information from the environment they operate in. That is where ‘Active Information’ comes from. In biology the environment exists – does it have to be designed for an evolving system to extract information from it?

    4 -> Some GA’s are models of evolutionary mechanisms, they are not irrelevant to studying evolution. Weather simulations are models of weather, are they relevant to the study of weather?
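    Points 1 and 3 above can be illustrated with a minimal GA in which the algorithm never sees a target: it only sees fitness scores returned by a black-box environment. The environment function and every name below are invented for illustration, a sketch rather than anyone’s actual model.

```python
import random

# A minimal GA: the algorithm only sees fitness scores from a black-box
# "environment"; it is never told the optimum.  The environment function
# is invented for illustration -- swap in any scoring function.

def environment(genome):
    # Black box: rewards genomes whose bits, read as a binary fraction,
    # lie close to an optimum the GA is never shown.
    x = sum(bit / 2 ** (i + 1) for i, bit in enumerate(genome))
    return -(x - 0.371) ** 2

def genetic_algorithm(length=16, pop_size=50, gens=100, seed=2):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=environment, reverse=True)
        parents = scored[: pop_size // 2]          # selection
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(length)
            child = a[:cut] + b[cut:]              # crossover (inheritance)
            if rng.random() < 0.2:                 # mutation
                j = rng.randrange(length)
                child[j] ^= 1
            pop.append(child)
    return max(pop, key=environment)

best = genetic_algorithm()
print(environment(best))  # near 0: the GA closed in on the hidden optimum
```

    Whether the information the GA extracts this way is “mined from the environment” or “preloaded by the designer of the fitness function” is exactly the disagreement in this thread; the sketch only shows the mechanics.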

  12. DrBot:

    Cornelius is referring to evolution

    Cornelius is referring to mutations and other random events or processes.

    Evolutionists say undirected, random events, such as mutations, accumulated to create the entire biological world. An analogy once used for this claim is that of a room full of monkeys pounding away at typewriters and producing Hamlet.

  13. Elizabeth:

    I agree the proportion of “meaningful” sequences to “possible” sequences, is important, but there is no good reason (and a fair bit of reason not) to assume that the proportions are similar for English text and DNA.

    And that’s just a red herring.

    No one is even attempting to argue that they are proportional.

    The proportion of “meaningful” sequences to “possible” sequences is important.

    So a couple (hopefully simple) questions.

    Given say, the length of a single codon (3 bp), how many possible sequences do we have?

    Each time we add an additional codon (+3 bp), how many new possible sequences are added?

    At what sequence length (in terms of how many base pairs or codons) is it no longer reasonable to believe that there has been enough time since the beginning of the universe to test/try/find every possible sequence of nucleotide bases that could appear, given the length of that sequence?

    [Assume that we have a memory that knows not to try the same sequence more than once, even though we know of no such memory.]

    Once we actually sit down and do the math we can begin to get some idea of the true enormity of the problem.
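    The three questions above reduce to direct arithmetic, which a few lines can carry out. The trial budget below is a deliberately generous assumption made up for illustration: 10^17 seconds since the Big Bang times an assumed 10^40 parallel trials per second.

```python
# Direct arithmetic for the questions above.  The trial budget is a
# deliberately generous ASSUMPTION: 10^17 seconds since the Big Bang
# times an assumed 10^40 parallel trials per second.

CODON_BASES = 3
NUCLEOTIDES = 4

def possible_sequences(n_codons):
    """Number of distinct nucleotide sequences of n_codons codons."""
    return NUCLEOTIDES ** (CODON_BASES * n_codons)

# One codon: 4^3 = 64 sequences; each added codon multiplies by 64.
assert possible_sequences(1) == 64
assert possible_sequences(2) == 64 * possible_sequences(1)

TRIALS_AVAILABLE = 10 ** 17 * 10 ** 40  # assumed upper bound on trials

n = 1
while possible_sequences(n) <= TRIALS_AVAILABLE:
    n += 1
print(n)  # -> 32: a 32-codon (96 bp) space already exceeds the budget
```

    On these assumptions the sequence space of even a 32-codon (96 base pair) stretch outruns the trial budget; tightening or loosening the assumed budget moves that threshold only logarithmically.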

  14. Mung, did you understand my comment?

    Evolutionists say undirected, random events, such as mutations, accumulated …

    Where, in the monkey typewriter analogy, is the analogy to a mechanism for accumulation?

  15. Sure I noticed it. But don’t you think it’s first important to understand the actual argument?

    You claimed that Hunter was referring to evolution. I showed that he wasn’t.

    And now you want to go on pretending like he was. Go figure.

    Now if it was up to me I’d say that the accumulation is the reams of paper piling up on the floor full of meaningless sentences and covered in monkey feces.

    Gravity works just fine for a law-like mechanism in my book.

    Today the analogy needs to be updated from typewriters to computer keyboards, but otherwise remains apropos. When the letters are selected at random, a page (or screen) full of text is going to be meaningless. And the problem is no easier in the biological world.

    I’m not going to pursue the question of accumulation here because it’s a pointless exercise. It doesn’t matter.

    Whether English prose or molecular sequences, the problem is that there are relatively few meaningful sequences in an astronomically large volume of possibilities. Nor does selection help because the smallest sequence that could be selected—such as a small gene—is not very small.

    There is a plain statement of the problem, and the reason why selection doesn’t help.

    Where, in the monkey typewriter analogy, is the analogy to a mechanism for accumulation?

    That’s a red herring. It misses the point of the argument.

    That’s why I originally ignored it.

    And rightfully so.

  16. So, back to the OP.

    The problem is, finding things for selection to select.

    Why is Hunter wrong?

  17. Elizabeth Liddle

    Why put it that way?

    (namely: “The problem is, finding things for selection to select”)

    Why not: “The problem is finding things that result in differential reproduction”?

    It comes to the same thing, after all, but is much easier to solve :)

  18. You mean like genetic drift?

  19. Elizabeth Liddle

    No – it’s just another way of expressing “natural selection” (avoiding redundancies like “selection selects”).

    Variants that result in differential reproduction automatically means that selection has found things to select.

    So the more fundamental question is:

    Why do some variants reproduce better or worse than others?

    There are lots of answers to that, of course.

  20. EL:

    Why not: “The problem is finding things that result in differential reproduction”?

    To get to reproduction, you have to first have an embryologically viable body plan. And the problem is that novel body plans credibly require at least 10 to 100+ million new base pairs of bioinfo, starting with the sort of level we see at the Cambrian explosion.

    That is where Hunter’s observation that Mung cited bites home:

    Whether English prose or molecular sequences, the problem is that there are relatively few meaningful sequences in an astronomically large volume of possibilities. Nor does selection help because the smallest sequence that could be selected—such as a small gene—is not very small.

    In short, you are looking at deeply isolated islands of function in vast config spaces, the search of which is beyond the quantum state resources of the solar system or even the observed cosmos.

    A theory that starts with already reproducing populations is fine as a theory of adaptation of existing body forms to novel niches and conditions in the environment. But that only gets you to a theory of what has been called microevolution. Which is accepted by all, including modern Young Earth Creationists.

    The question being begged is how do you get to a viable body plan, given what we know about the information requisites of such complex, functionally specific organisation?

    Information demands high contingency, and that in turn has two credible sources: chance vs choice contingency. We know FSCI is routinely produced by choice, as posts in this thread demonstrate (and as Venter et al have given proof of concept for living cells). Can you show observed cases of chance doing so, especially in the biological world?

    Do you see why, on inference to best, empirically anchored explanation, we infer the best explanation for what we see in cell based life is design?

    GEM of TKI

  21. Elizabeth Liddle

    Darwinian theory certainly starts with already reproducing populations.

    So if ID is a critique of how that first reproducing population came into existence, then fair enough, but then it’s not a critique of Darwinian theory!

    But even with that caveat, Darwinian theory does explain both “microevolution”, if by that you mean incremental adaptation of a single population over time, and speciation, if by that you mean (and I do) the divergence of a single population into two populations that each follow a different adaptive trajectory.

    And I (and standard biological theory) would argue that over time, those incremental changes (either within a single population, longitudinally, or differences between two daughter populations) can amount to “macro” changes.

    So I would dispute your claim that Darwinian theory can’t result in “novel body plans”. I would agree with your claim that Darwinian theory can’t account for the original self-replicating population, but then it doesn’t attempt that.

  22. F/N: I FWD the following from the polarisation of debate thread where the points were tangential. I think they better fit in here and have suggested that onward discussion comes here:

    _____________

    KF, 133 >> Perhaps this animation will help clarify. Notice the way tRNA folds up its polymer string to give the position-arm device that carries the AA.

    (This one will help see how it is charged up. Notice how the common CCA tail “charging” end — in effect a universal connector-socket for tRNAs — bonds to the COOH end of the AA [AA's have a COOH end and an NH2 end], and the particular AA loaded is based on a key-lock fit to the charging enzyme, the relevant aminoacyl tRNA synthetase. The charged tRNA then adds the AA to the elongating protein based on key-lock fit to the codon triplet.)

    We can see the way information controls the actual step by step processing that makes a protein. >>

    134 >> Perhaps I should show the number of ways information rather than chemistry is controlling what is going on:

    1 –> DNA (and RNA) chains on a standard sugar-phosphate coupling, the 3′ to 5′ coupling, and chaining any to any. Wiki, testifying against interest, on DNA:

    “The main role of DNA molecules is the long-term storage of information. DNA is often compared to a set of blueprints, like a recipe or a code, since it contains the instructions needed to construct other components of cells, such as proteins and RNA molecules. The DNA segments that carry this genetic information are called genes, but other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information . . . . DNA consists of . . . polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds . . . Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA, in a process called transcription.”

    2 –> And, on Chaining:

    The sugars and phosphates in nucleic acids are connected to each other in an alternating chain (sugar-phosphate backbone) through phosphodiester linkages.[10] In conventional nomenclature, the carbons to which the phosphate groups attach are the 3′-end and the 5′-end carbons of the sugar. This gives nucleic acids directionality, and the ends of nucleic acid molecules are referred to as 5′-end and 3′-end. The nucleobases are joined to the sugars via an N-glycosidic linkage involving a nucleobase ring nitrogen (N-1 for pyrimidines and N-9 for purines) and the 1′ carbon of the pentose sugar ring.

    3 –> RNA is similar:

    Like DNA, RNA is made up of a long chain of components called nucleotides. Each nucleotide consists of a nucleobase (sometimes called a nitrogenous base), a ribose sugar, and a phosphate group. The sequence of nucleotides allows RNA to encode genetic information. For example, some viruses use RNA instead of DNA as their genetic material, and all organisms use messenger RNA (mRNA) to carry the genetic information that directs the synthesis of proteins . . . . The chemical structure of RNA is very similar to that of DNA, with two differences – (a) RNA contains the sugar ribose while DNA contains the slightly different sugar deoxyribose (a type of ribose that lacks one oxygen atom), and (b) RNA has the nucleobase uracil while DNA contains thymine (uracil and thymine have similar base-pairing properties).

    Unlike DNA, most RNA molecules are single-stranded. Single-stranded RNA molecules adopt very complex three-dimensional structures, since they are not restricted to the repetitive double-helical form of double-stranded DNA. RNA is made within living cells by RNA polymerases, enzymes that act to copy a DNA or RNA template into a new RNA strand through processes known as transcription or RNA replication, respectively.

    4 –> Proteins, of course, chain amino acids, which have a standard structure: H2N – CHR – COOH (where R is the side-branch functional group), and they are chained by the peptide bond, again the acids may chain any to any.

    5 –> In the case of tRNA, at the 3′ end, there is a CCA “universal coupler” that ties to the COOH of the AA. The correct AA is transferred by a key-lock fitted enzyme, but the coupling itself is standard (and implies that the amine end is the one that clicks on to the elongating AA chain).

    6 –> At each stage we see that we are dealing with standard couplers and chains, so that there is maximum flexibility. It is information imposed on the chemistry and expressed in key-lock shape patterns that controls the functional roles.

    7 –> DNA stores the genetic info in codes (a code, BTW, which is optimised for robustness against likely single point variations in AAs) based on a 64 state triple base system that translates to AAs and to start/stop procedures, elongation being an implied instruction once the new codon is in the chain.

    8 –> mRNA, ribosomes and tRNA, with support enzymes etc, are the keys to the protein assembly process. mRNA is transcribed (and may be edited) based on DNA, and is communicated to the ribosome, where it is used as a step by step discrete state controller.

    9 –> As a rule AUG is the first codon, implying both start and load methionine. tRNAs loaded with the AAs then add in sequence, per the codon control.

    10 –> The standard CCA -COOH coupler would mean that the chemistry of the bond to the AA does not control which AA is loaded. That is done by the corresponding aminoacyl tRNA synthetase (an enzyme that fits the tRNA and transfers the right AA based on the specific tertiary config, i.e. the tertiary bent arm folded form of the nucleic acid chain).

    11 –> The tertiary functional form results from folding the chain into a cloverleaf secondary form, then further folding into the L-arm.

    12 –> The anticodon is at one end, and the coupled AA at the other, so, again, this is informationally controlled, not physically controlled.

    13 –> The correct tRNA, with its elongation support molecule, loads to the next available codon, with key-lock fitting based on codon-anticodon complementarity, controlling the match.

    14 –> This is a digital code control point.

    _________

    In short, we can see how the protein assembly process is informationally controlled based on the code in the DNA. To function, the protein is folded and sent to the use site, based on further information (often coded into an end of the AA chain).

    Protein assembly is in effect carried out through an automated nanofactory based on discrete state asynchronous control. >>

    137 >> That there are indeed dialects of the genetic code (and the shift from DNA to RNA forms is the first such!), with variant forms and even ways to stick in extra amino acids beyond the usual twenty, tells us that the code is not driven by laws of mechanical necessity.

    That should have been evident, from the high contingency involved in a code; after all, a linguistic entity. (And yes, I am saying that language plainly was antecedent to C-chemistry, cell based life as we observe it on earth.)

    There are two and only two known causes of highly contingent outcomes: chance contingency or choice contingency.

    We have here a code of symbols and rules, with dialects, universal connectors and sockets and various clever tricks (like the wobble base pairing in the anticodon). It is used in an algorithmic context to control the automated assembly line production of proteins — proteins which are BTW also intimately involved in the operation of the factory (making for a chicken-egg situation).

    That brings us right back to the key issue: what is the empirically known, best supported cause of codes, algorithms and assembly lines? [ANS: Obvious, intelligence.]

    Is there any direct observational evidence of such things coming to be (including the chicken-egg causal loop we just identified) by chance contingency? [ANS: No, and we know on thermodynamic reasoning [config spaces], that such functionally specific, complex, organised and information rich systems will be maximally isolated in configuration space.]

    These questions point, strongly to the most credible causal explanation, to the point of obviousness. Obviousness, save to those who are so committed to a priori materialism (or are fellow travellers) that Lewontin’s observation applies:

    It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [From “Billions and Billions of Demons,” NYRB, January 9, 1997.]

    I dare to say: this is mind-closing ideology, not science. At least if science is to be understood as having the integrity of being committed to an open-minded observational evidence-led assessment of possibilities, towards learning and warranting the truth about our cosmos, including its origins.

    Perhaps it is time we all paused and looked at this video. >>

    _____________

    I trust this is helpful.

    GEM of TKI
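    The step-by-step informational control described in the forwarded points 7 to 14 can be mimicked in a few lines: translation driven by a lookup table (the code) rather than by any chemistry of the symbols themselves. Only a handful of codons are included, and the function names are mine; this is a sketch of the idea, not of the cell.

```python
# A minimal mimic of the informational control described above: protein
# assembly driven by a lookup table (the code), not by any chemistry of
# the symbols.  Only a few codons of the standard code are included.

CODON_TABLE = {
    "AUG": "Met",  # start codon, loads methionine (point 9 above)
    "UUU": "Phe", "GGC": "Gly", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read codons in frame from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("GGAUGUUUGGCUGGUAA"))  # -> ['Met', 'Phe', 'Gly', 'Trp']
```

    Swapping entries in the table changes the output without touching the “chemistry” of the strings, which is the sense in which the assembly is said to be informationally rather than physically controlled.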

  23. Dr Liddle:

    Absent the root, the Darwinian tree of life has no place to begin.

    And, the issue of getting to first life is an example of the same problem: the need for a viable self-replicating entity with whatever body plan. To get to body plans, we need upwards of 10 mn bases of new info. That exhausts the search capacity of the cosmos.

    The problem, in short, is to get to the shores of islands of function, not how to climb hills in such islands. And since, until you are on such a shoreline (the tree analogy is grossly misleading, as it suggests easy continuity of variation), you do not survive embryologically etc., you cannot reproduce, so there is no reproduction to play off.

    That’s why the infinite monkeys analysis is so telling. (And BTW, it was formerly used by those who wanted to pretend that even very improbable things would eventually happen by chance.)

    Until you climb the shores of an island of function, having got there by random walk through the vast seas of non-functional configs, you have no base for hill climbing. No reproduction to select.

    So, you cannot dodge this issue by locking off getting to the first body plan and saying our theory does not look at that. For the self-same problem is compounding again and again as you try to get to new body plans.

    Let me clip Meyer, in PBSW, on the key challenge that the establishment does not want us to be talking about:

    the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6

    GEM of TKI

  24. Elizabeth Liddle

    kairosfocus @ #23

    And, the issue of getting to first life is an example of the same problem: the need for a viable self-replicating entity with whatever body plan. To get to body plans, we need upwards of 10 mn bases of new info. That exhausts the search capacity of the cosmos.

A couple of comments:

    It is not evident to me that you need "upwards of 10 mn bases" to get a "body plan" (especially if that body plan includes unicellular organisms), so I'd be interested in the source of that figure.

    Secondly, I'm not sure how you are computing "search capacity" but evolutionary processes are not of course blind search.

    Once you have a winner, however slight, the search space collapses hugely.

  25. Lizzie:

    Secondly, I’m not sure how you are computing “search capacity” but evolutionary processes are not of course blind search.

    Of course it is. Natural selection is blind and mindless and mutations are said to be accidents/ mistakes/ errors.

    Once you have a winner, however slight, the search space collapses hugely.

    Possibly but there isn’t any evidence that stochastic processes can find a winner.

  26. EL:

    So if ID is a critique of how that first reproducing population came into existence, then fair enough, but then its not a critique of Darwinian theory!

Actually if Darwinian theory cannot tell us how the first living organism(s) came to be then it cannot say all subsequent diversity is due solely to accumulations of genetic accidents, as the origins and subsequent diversity are directly linked. Designed life would also be designed to evolve.

  27. 27
    Elizabeth Liddle

    Joseph:

    Lizzie:

    Secondly, I’m not sure how you are computing “search capacity” but evolutionary processes are not of course blind search.

    Of course it is. Natural selection is blind and mindless and mutations are said to be accidents/ mistakes/ errors.

    Once you have a winner, however slight, the search space collapses hugely.

    Possibly but there isn’t any evidence that stochastic processes can find a winner.

    Well, again we have problems with an ambiguous word. Let me rephrase:

    Natural selection does not involve random sampling of every possible “solution” to the problem of successful self-replication.

    Every time a variant happens to self-replicate better than its peers, that variant will increase in prevalence (by definition).

    So however rarely that slightly-better-variant occurs, once it has occurred, it then, by virtue of its superior self-replication capacity, generates large numbers of opportunities to build on that “find”.

    In other words, every very slightly successful “solution” hugely reduces the search space. In effect, only solutions that build on that first step are “searched”.

    That has interesting consequences of course, as it means that if a solution is found in one lineage (e.g. flow-through lungs in birds) that solution is only easily accessible to populations that follow that same lineage.

And some lineages (our own, for instance) have already travelled far enough down a different, but less promising, path to make the search for a better lung (one as good as a bird lung) even less probable than the original bird lung was.

It’s a bit like the game of hangman (do kids play hangman in the US?).

    You start by guessing letters at random, but once you’ve got a couple – a vowel, say, in a particular spot, then the search space starts to collapse. Eventually the number of possible solutions reduces to perhaps one or two words.

    As for your last point – well the game of hangman is a case in point. So are GAs. Stochastic processes can be very good at finding winners when part of an evolutionary algorithm.
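The GA point above can be sketched as a toy cumulative-selection program, a variant of Dawkins’s “weasel” demonstration. The target phrase, alphabet, population size and mutation rate below are arbitrary illustrative choices, and the fixed target is a simplification both sides acknowledge differs from biology; the sketch shows only how retaining the best replicator each generation collapses the search space:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # illustrative fixed target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # Number of positions that match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(pop_size=100, mut_rate=0.05, seed=1):
    random.seed(seed)
    parent = [random.choice(ALPHABET) for _ in TARGET]
    generation = 0
    while score(parent) < len(TARGET):
        # Each offspring copies the parent with per-letter mutation ...
        offspring = [
            [random.choice(ALPHABET) if random.random() < mut_rate else c
             for c in parent]
            for _ in range(pop_size)
        ]
        # ... and the best replicator founds the next generation.
        parent = max(offspring, key=score)
        generation += 1
    return generation

# Blind sampling of the 27^28 (~1.2e40) possible strings is hopeless;
# cumulative selection reaches the target in a modest number of generations.
print(evolve())
```

Whether this models anything in biology is exactly what the thread disputes; the code only demonstrates the mathematical claim that selection on partial matches searches a tiny fraction of the raw configuration space.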

  28. 28
    Elizabeth Liddle

    joseph @ #26

Actually if Darwinian theory cannot tell us how the first living organism(s) came to be then it cannot say all subsequent diversity is due solely to accumulations of genetic accidents, as the origins and subsequent diversity are directly linked. Designed life would also be designed to evolve.

    There is nothing in Darwin’s theory that says that a minimal “seed” organism wasn’t intelligently designed, and designed in such a way that subsequent diversification would inevitably follow by Darwinian principles.

    So yes, Life could have been Designed to evolve. But that is compatible with Darwin’s theory.

    His theory was on the Origin of Species, not the Origin of Life. He specifically says so at the very end of the book.

  29. Dr Liddle:

    Pardon, but you just saw the calculations, as sourced, and as backed by an observation of an arthropod, with an order of magnitude taken off to give wiggle room, for novel body plans.

    A realistic independent unicellular form takes more like 1 mn bases than 100 k. That is enough to set an insurmountable barrier [by 2 - 3 ords of mag . . . and each bit extra doubles the config space], even at the low end — parasitical micro organisms.

    Going beyond, the strong evidence is that meaningful codes [cf 22 above on how that plays out with tRNA] are highly specific, confined to isolated zones in the space of possible configurations. You don’t move from say Hello World to an operating system by one step letter changes, or by duplicating strips of text and varying at random. Nor can you convert “See Spot run” into even a blog post the same way.

    It is those who argue otherwise who by now need to show empirical cases; as is so with those who argue for perpetual motion machines. And, BTW, that repeatable demonstration would probably be a Nobel Prize, as the 2nd law of thermodynamics (statistical form) rests on pretty much the same grounds.

    The fundamental biological challenge posed by the design view is that we have good reason to see that for the functionally specific complex organisation and associated information [FSCO/I] in C-chemistry, cell based life, we are dealing with incredibly large config spaces for viable body plans, well beyond the reach of random walks and trial and error on the gamut of the observed cosmos.

    Remember, just 500 base pairs or 1,000 bits or 125 bytes or 143 ASCII characters [~ 20 typical English words] worth of info storage capacity is 1.07*10^301 possible configs. The ~ 10^80 atoms of our cosmos, for its thermodynamically credible lifespan will not have more than about 10^150 Planck-time Quantum states, where the fastest Chemical interactions take about 10^30 P-times, and the fastest strong force ones about 10^20. (Cf Abel here for a general discussion on this.)

    At the same time, for control of a serious process, 125 bytes of info is tiny.
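The arithmetic behind the 1.07*10^301 figure can be checked in a few lines; this only verifies the raw count of configurations, not any further claim about how functional states are distributed in that space:

```python
from math import log10

# 500 DNA base pairs at 4 states each carry 500 * 2 = 1,000 bits,
# i.e. 2^1000 possible configurations. Work in log10 form to avoid
# printing a 302-digit number.
bits = 500 * 2
exponent = bits * log10(2)                 # log10(2^1000) ~ 301.03
mantissa = 10 ** (exponent - int(exponent))
print(f"{mantissa:.2f}e{int(exponent)}")   # 1.07e301
```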

    That is why in the log reduced form of the Chi metric, I am very comfortable with

    Chi_500 = Ik – 500, bits beyond the solar system threshold [48 ords of mag of possibilities in hand over the available number of P-time states]

    or if you want to go for the cosmos, we can use

    Chi_1,000 = Ik – 1,000, bits beyond the observed cosmos threshold [150 ords of mag in hand]

    A typical protein takes up 300 AA’s, or 900 4-state bases, or 1,800 bits of info carrying capacity. If you want to go to an adjustment based on the way AAs are distributed in functional proteins as observed, we are still coming up beyond the threshold.
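As a worked instance of the reduced metric, using the figures in the comment above (whether Ik should be raw storage capacity or an adjusted functional-information estimate is a separate question the comment itself flags):

```python
def chi_500(ik_bits):
    # Log-reduced metric as quoted above: bits beyond a 500-bit threshold.
    return ik_bits - 500

# A typical 300-AA protein coded by 900 4-state bases: 900 * 2 = 1,800 bits.
ik = 900 * 2
print(chi_500(ik))  # 1300 bits beyond the threshold
```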

Let’s do a fresh calc. 50 new tissue types to make up the organs for a new body plan would take up probably 10 proteins [including enzymes etc] per type, i.e. we are looking at 500 proteins as a conservative estimate — VERY conservative. 500 * 300 = 1.5 *10^5 codons, or 4.5 *10^5 bases, plus regulatory, let’s say about 10% more, 1/2 mn bases.

    But this is way too low:

    Arabidopsis thaliana [flowering plant] 115,409,949 bases

    Anopheles gambiae [mosquito] 278,244,063

    Sea urchin 8.14 x 10^8

    Amphibians 10^9–10^11

    Tetraodon nigroviridis (a pufferfish) 3.42 x 10^8

    In short, 10 – 100 mn is reasonable, even generous. And in any case the config space of 500 k bases is: 9.9 *10^301,029 possibilities.
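The “fresh calc” above, step by step; the per-type protein counts are the comment’s own conservative assumptions, and the configuration-space figure is reported in log10 form because the raw number is too large to print:

```python
from math import log10

tissue_types = 50        # assumed new cell/tissue types
proteins_per_type = 10   # assumed proteins (incl. enzymes) per type
aa_per_protein = 300     # typical protein length in amino acids

proteins = tissue_types * proteins_per_type   # 500 proteins
codons = proteins * aa_per_protein            # 150,000 codons
bases = codons * 3                            # 450,000 bases
with_regulatory = round(bases * 1.1)          # +10% regulatory ~ 495,000

# Configuration space of ~500k 4-state bases (the comment's rounded 1/2 mn):
space_exp = 500_000 * log10(4)
print(with_regulatory, round(space_exp))  # 495000 301030
```

This confirms the quoted 9.9*10^301,029 order of magnitude (10^301,029.99...); the contested question is not the count but what it implies.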

In earlier discussions we saw where spaces of order 10^50 are demonstrably searchable within available resources, but of course those of order 10^150 are beyond the solar system threshold and beyond 10^300, the resources of the observed cosmos.

Once you are within an island of function, you can indeed hill climb, as small changes will far more likely still be functional and may even be improved — not really a well observed thing, but plausible. No one seriously objects to that. That’s called adaptive variation or micro evo.

    Getting to the islands of function is a very different kettle of fish. First, catch your fish . . .

    GEM of TKI

  30. Dr Liddle:

    Joseph is right. We already have good reason to believe that highly contingent outcomes will trace to chance or choice.

    If we already see several good reasons to infer to choice contingency for the design of first life, that already points to the best candidate for onward major departures in life: design.

In that context, the adaptability of living forms — micro evo — would be best explained as a part of the design; i.e. a certain degree of front loading. Such plainly makes for a more robust design.

Then when we back up a bit and see that the cosmos we inhabit has a cosmology that is exquisitely fine tuned to fit it to C-chemistry, cell based intelligent life, that further adds to the picture. For that too points to design as best explanation.

    A designed cosmos fitted for life, designed cell based life [integrating metabolic machines and a von Neumann information driven self replicator] and designed major body plans with adaptability as a built-in feature.

    That adds up to a pretty coherent view of a design pattern.

    GEM of TKI

  31. 31
    Elizabeth Liddle

    I see a reference to Koonin (2000) which is presumably this paper:

    http://complex.upf.es/~andreea.....oncept.pdf

    From that paper:

    Is it possible to combine comparative genomics with biochemical and molecular-genetic data to determine the minimal number of genes required to make a modern-type cell?

    my bold

It’s a theoretical paper, and its conclusions may well be correct. But having a lower bound on the minimal gene-set for a modern type cell tells you nothing at all about the minimal gene-set, or even the minimal pre-gene-set for an archaic cell, or proto cell, or protobiont.

    To claim it does seems to be completely circular! No-one is claiming that the common ancestor of living things was a “modern-type cell”. What that paper does is to estimate what the minimal gene-set might have been for the first “modern-type cells”, which would certainly have marked an important milestone in the descent of modern living things.

    But there is no reason to suppose it was the beginning.

  32. 32
    Elizabeth Liddle

    Kairosfocus:

    Could you explain this sentence:

    We already have good reason to believe that highly contingent outcomes will trace to chance or choice.

    I am not clear what you mean.

    Thanks

    Lizzie

  33. Dr Liddle:

    Have you seen any other sort of life based on cells than that which is based on DNA, RNA, proteins, enzymes, ribosomes, etc, and which is jointly metabolising and self-replicating?

    Indeed, there was a hope that they had found such in the cut down tiny genome forms recently so much discussed, e.g. Mycoplasma pneumoniae. I will not spoil your surprise by clipping from here on what turned out to be the case for such a “stripped down” organism.

Let’s just say that Dawkins’ “replicator” molecule is a paper molecule, not a real world observation. Nor would it, if found, address the origin of the sort of genetics based functional organisation found in cell based life. THAT is the key challenge of origin of life.

    When it comes to the issue of low vs high contingency outcomes and necessity, choice and chance, I am essentially saying this:

    1: We often see situations where under similar initial conditions, we strongly tend to see similar outcomes, i.e low contingency, e.g. a dropped heavy object near earth’s surface strongly tends to fall at 9.8 m/s^2. We explain this by laws of mechanical necessity, e.g gravity.

    2: By contrast, we also see where under similar initial conditions, we find quite diverse outcomes, i.e. high contingency. If the dropped heavy object is a fair die, it will tumble and settle to read from 1 to 6 with more or less equal frequencies, or odds of 1 in 6. We ascribe such statistically distributed contingency to chance, tracing to all sorts of roots.

    3: but also, we can see cases where the outcome is similarly highly contingent, but not merely chance. For instance, as the houses in Las Vegas know, dice can be loaded. This is choice contingency, or design.

    4: The three causal patterns often appear together, but we routinely analyse the aspects of a situation to ascribe causes across the factors, e.g. in a control and treatment blocks experiment design. There, we ascribe some variation to chance and some to treatments. And of course we are interested in underlying laws that may be manifesting themselves in the patterns.

    5: Similarly, with say a pendulum experiment, we look at that pattern of behaviour which is due to law, that to chance scatter, that which is due to personal equations or biases introduced by experiment design etc.

    So, we explain highly contingent outcomes on chance and choice. And we tend to trace natural regularities to laws of necessity like the Newtonian F = G Mm/r^2, which famously united the heavens and the earth, and explained the orbits of planets and comets alike, with perturbation analysis allowing the prediction of then undiscovered planets. (BTW, couldn’t they have grandfathered in Pluto recently?)

    GEM of TKI

  34. PS: The results of studying parasitical vs independently living organisms suggests strongly that the genome for the latter is more like 500 – 1,000 k bases, not 100 – 300 k bases.

  35. PPS: And in any case 100 k bases is well beyond the reach of the observed cosmos, as can be seen. (Worse, I am leaving out of the reckoning the results over the past 60 years that point strongly to the epigenetic contribution. In effect the egg cell provides the actual machinery to carry out genetic instructions, and that seems to dominate embryonic development. We have genetics and metabolic machinery, jointly acting in the cell.)

  36. F/N: JM discusses some of that here.

  37. 37
    Elizabeth Liddle

    No, kairosfocus, I haven’t seen such an alternative form of life.

    But I assume I’m several billion years too late :)

    As for self-replicating pre-biotic systems, there’s a nice recent paper here:

    http://www.ncbi.nlm.nih.gov/pubmed/20811777

  38. 38
    Elizabeth Liddle

Again, kairosfocus, you seem to be assuming a blind search, not a search in which even a marginal improvement to reproductive success by definition multiplies the opportunities for finding an improvement that builds on it, i.e. collapses the search space.

    Evolution isn’t looking for a needle in a haystack, it’s climbing, step by step, up Mount Improbable, in a search in which at every step, the opportunities for the next are, by definition, multiplied.

  39. This is a question that I have been recently trying to answer and wanted to hear people’s opinions.

    Is natural selection falsifiable?

  40. 40
    Elizabeth Liddle

    Depends what you mean.

    I happen to be of the view that falsification isn’t, in general, how science proceeds, except in the sense of “falsifying the null”, and even there, it is probabilistic falsification.

    So we can falsify the null hypothesis that “natural selection doesn’t result in evolution” by demonstrating that it does (and this has of course been done).

    But in order to falsify a positive hypothesis, the hypothesis has to be very specific.

    So the hypothesis that “natural selection results in evolution” can’t be falsified – all we can do is “retain” the null.

    However we can falsify the specific hypothesis: “natural selection is the sole determinant of allele frequency change in a population”.

And that has been falsified. We can also falsify the hypothesis: “all living things inherited their genome from a common ancestor” and again this has been falsified by horizontal gene transfer.

So I’d say that very few scientific hypotheses are actually falsifiable, in the Popperian sense, and even then, only probabilistically. What science does instead is fit models to data, and select the models that best fit the data, and reject those that fit worse. We can also compare the fits of alternative models directly, and reject the less-well-fitting model. This is probably as close as science regularly gets to falsification.

    So that’s my take. Hope that helps :)

  41. Thank you Elizabeth for your thoughtful response.

“So we can falsify the null hypothesis that ‘natural selection doesn’t result in evolution’ by demonstrating that it does (and this has of course been done). But in order to falsify a positive hypothesis, the hypothesis has to be very specific. So the hypothesis that ‘natural selection results in evolution’ can’t be falsified – all we can do is ‘retain’ the null.”

    That is precisely what my question was. If treated as the null, is natural selection falsifiable? The answer is obviously no, which means that it is a heuristic and not a scientific term.

    What bothers me from a philosophical stand-point about natural selection is that in this sense it can be morphed to describe anything. There is also an inherent circularity in the definition that compounds the problem even more. Finally, if you consider the numerous different definitions of ‘nature’, natural selection can become even more vague.

I too believe that science does not progress via falsification alone. I think that scientific theories are very contextual and even incommensurable, as my thinking about science is somewhat influenced by Kuhn, Lakatos and Feyerabend. I also think that the scientific community is ridden with a lot of politics, which makes evaluation of theories even more difficult.

    If you don’t mind me asking, are you a biologist?

  42. As I type, 5 of the 6 entries under “Recent Posts” are by or about my friend Elizabeth. That makes even KF and BA77 look like pikers.

    KF–thanks for the encouragement. Next time save the manure for the shy and retiring plants.

  43. Dr Liddle:

    I repeat, the form of biological life to be explained is the only one we observe.

Secondly, until you have something capable of reproduction — until you are on an island of function, selection on differential reproductive success is simply off the table. Yes, within such an island, adaptation can happen, but that is irrelevant to the real challenge.

Until and unless you can show and warrant that you are able to get to those islands of function based on body plans including the first, with empirical observational support, what you have is just so stories, not true science. WE KNOW THAT FSCO/I IS ROUTINELY PRODUCED BY INTELLIGENCE. And, analytically we know that functionally specific organisation and information will be deeply isolated in the space of possible configs. Very easily, beyond the reach of chance based random walks backed up by trial and error success/failure.

    THAT IS THE CONTRAST ON INFERENCE TO BEST, EMPIRICALLY ANCHORED EXPLANATION THAT HAS NOT BEEN MET. What we see is a priori materialism imposed by the backdoor route of so-called methodological naturalism, and then defended by power tactics. In some cases, pretty dirty power tactics.

    I will comment on your paper later.

    GEM of TKI

    PS: A, please fix your tone.

  44. EL:

    There is nothing in Darwin’s theory that says that a minimal “seed” organism wasn’t intelligently designed, and designed in such a way that subsequent diversification would inevitably follow by Darwinian principles.

    So yes, Life could have been Designed to evolve. But that is compatible with Darwin’s theory.

    His theory was on the Origin of Species, not the Origin of Life. He specifically says so at the very end of the book.

    KF:

    I repeat, the form of biological life to be explained is the only one we observe.

Secondly, until you have something capable of reproduction — until you are on an island of function, selection on differential reproductive success is simply off the table. Yes, within such an island, adaptation can happen, but that is irrelevant to the real challenge.

So just to be clear, you are not discussing evolution, you are discussing biogenesis, correct?

    Until and unless you can show and warrant that you are able to get to those islands of function based on body plans including the first, with empirical observational support, what you have is just so stories, not true science.

Only if you are discussing biogenesis, not if you are discussing evolution, because as EL has already said, evolution relies on the existence of self replicators (something that has been empirically observed to exist) and evolution is entirely compatible with intelligently designed proto cells.

  45. “evolution is entirely compatible with intelligently designed proto cells.”

    Evolution is likely entirely compatible with occasional intelligent intervention.

  46. likely = likewise

  47. 47
    Elizabeth Liddle

    kairosfocus, @ #43:

    I repeat, the form of biological life to be explained is the only one we observe.

Yes indeed, but absence of evidence, especially when the absence of evidence is to be expected, is not evidence of absence. We may never know for sure what the forms of life (if any) were like that preceded the forms with which we are familiar, but we cannot conclude that they did not exist, and we may well find viable pathways that make their existence perfectly plausible.

Secondly, until you have something capable of reproduction — until you are on an island of function, selection on differential reproductive success is simply off the table. Yes, within such an island, adaptation can happen, but that is irrelevant to the real challenge.

    Well, I don’t think it is “off the table”. In Darwin’s words:

    There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone circling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.

    Darwin’s theory is about what happened once life got started, not how it got started. Just because it is not a theory of something else does not mean it is not a perfectly valid and testable theory for what it is actually about.

    And, (IMO) it explains very well how variety could have evolved from “a few forms or one”, whether or not they had had life breathed into them by a Creator.

    If ID is correct, one theory that is perfectly compatible with Darwinian evolution is the scenario that Darwin himself evokes – that of an Intelligent creator who first breathes life into a few forms, and then leaves them to evolve.

Although the interesting thing to me, now that we know so much more about heritability than Darwin could have dreamed, is that his principle also applies to things that we would hesitate to call “alive”, including crystals, algorithms, and postulated proto-bionts.

    Until and unless you can show and warrant that you are able to get to those islands of function based on body plans including the first, with empirical observational support, what you have is just so stories, not true science.

    No, I don’t think we have “just so stories”. We may well have as yet unsupported hypotheses, but there are lots of those in science. If there weren’t, science would stop!

    And in fact we have a lot of evidence as to how genetic variation produces different “body plans”. So I wouldn’t even agree that the hypothesis is unsupported. We know what genes specify bilateral symmetry, for instance, as opposed to, say, radial symmetry, and what specifies numbers of segments, numbers of limbs, numbers of digits, etc.

    So there is plenty of empirical science going on, as we speak, to support specific evolutionary hypotheses, including hypotheses concerning the genomic changes that resulted in divergent body plans.

  48. 48
    Elizabeth Liddle

    Mike1962:

    Evolution is likely entirely compatible with occasional intelligent intervention.

    Yes indeed, as is shown by the viable results of genetic engineering.

  49. mike1962

    “evolution is entirely compatible with intelligently designed proto cells.”

    Evolution is likely entirely compatible with occasional intelligent intervention.

Yes, it’s called selective breeding: humans use their brains to influence the evolution of other animals.

  50. Elizabeth:

“Although the interesting thing to me, now that we know so much more about heritability than Darwin could have dreamed, is that his principle also applies to things that we would hesitate to call ‘alive’, including crystals, algorithms, and postulated proto-bionts.”

    Which principle? Can you be a little specific?

  51. 51
    Elizabeth Liddle

The interesting thing, of course, is that to some extent, intelligent interference with Darwinian processes in the form of genetic engineering leaves a fingerprint in the form of non-nested genetic hierarchies.

    But that unfortunately is not the unique fingerprint of intelligent interference – there are other natural (in the sense of non-intelligent) mechanisms of horizontal gene transfer.

  52. oops, my slash key seems to be misbehaving

  53. 53
    Elizabeth Liddle

    above:

The principle that if things replicate with variance, and some variants replicate better than others, the better replicators will come to dominate the population.

  54. How does that apply to algorithms? Algorithms replicate? Or crystals?

  55. above,

    algorithms describe a process, which can include replication, for example a genetic algorithm. Replication only happens when the algorithm is implemented in some form, for example as a piece of software.

  56. 56
    Elizabeth Liddle

    Well, some algorithms self-replicate with variance, which is what a GA is.

    And crystals often form repetitive patterns from a random seed.

  57. 57
    Elizabeth Liddle

    DrBot gave a better answer, above :)

  58. EL:

    Variants that result in differential reproduction automatically means that selection has found things to select.

    So if you have 3 children, and your neighbor had only two, selection chose you? Why did selection choose you?

    Perhaps your neighbor’s husband died and she chose not to remarry.

  59. 59
    Elizabeth Liddle

    Selection doesn’t “choose” anything, as I’m sure you are aware :)

You can’t (as I’m sure I’ve said elsewhere today, maybe not on this thread) tell other than statistically whether the reason a variant propagates is because of luck or real benefit.

    Some variants will propagate by sheer luck, and others, though beneficial, won’t, again, through sheer luck.

    But if the effects in question are small we won’t know which is which unless we run carefully controlled experiments.

    With larger effects, then yes, it’s possible to figure out why an allele is resulting in greater fecundity. Often it isn’t.

    But a sample size of 2 won’t be enough :)

  60. 60

    The dumbfounding lack of curiosity, along with the level of sheer blind faith, is simply amazing.

  61. 61
    Elizabeth Liddle

    What are you referring to, Upright BiPed?

  62. 62

    Nothing Liz, feel free to ignore me.

    Ignore me, in the same way in which you ignored the onset of recorded information in the thread yesterday.

    The real details get messy… and they’re harder to scrub clean with that Darwinian Dishsoap you’re selling.

  63. F/N: The paper, abstract:

    __________

    >> Abstract

    The paper presents a model of coevolution of short peptides (P) and short oligonucleotides (N) at an early stage of chemical evolution leading to the origin of life.

    a –> At outset, a speculative model, but in a context of presumed abiogenesis on blind chance plus mechanical necessity

    The model describes polymerization of both P and N types of molecules on mineral surfaces in aqueous solution at moderate temperatures.

    b –> More or less the old clay bed model, but skips over the need to inform the polymer sequence relative to function in life.

c –> a 300-monomer protein has 20^300 possibilities, and a comparable 900 monomer R/DNA would have 4^900 possibilities, i.e. functional states are going to be maximally isolated in the possibilities space

de –> there are also some serious challenges to get the monomers in concentrations, and there are issues on chirality to be addressed [the two chiralities are energetically the same as a rule, on Enthalpy of formation, so the strong tendency is to form racemic 50-50 mixes].

    d –> Also, there are interfering cross-reactions from other substances likely to be present; in life forms, the chemistry is specifically constrained, e.g. in the ribosome, or basically no proteins would form.

    It is assumed that amino acid and nucleotide monomers were available in a prebiotic milieu, that periodic variation in environmental conditions between dry/warm and wet/cool took place and that energy sources were available for the polymerization.

    e –> Each of which is seriously questionable, and we have the problem of cross reactions etc as well.

    An artificial chemistry approach in combination with agent-based modeling was used to explore chemical evolution from an initially random mixture of monomers.

    f –> I.e intelligently designed

    It was assumed that the oligonucleotides could serve as templates for self-replication and for translation of peptide compositional sequences, and that certain peptides could serve as weak catalysts.

    g –> more of same

    Important features of the model are the short lengths of the peptide and oligonucleotide molecules that prevent an error catastrophe caused by copying errors and a finite diffusion rate of the molecules on a mineral surface that prevents excessive development of parasitism.

    h –> More questionable assumptions and constraints; NB: real world proteins and D/RNA are as a rule LONG chain . . . but a short chain gets you out of the problem of combinatorial explosion.

    i –> So, you have begged the key question, the exercise is fallacious

    The result of the simulation

    j –> This is chemistry on the computer, not in the real world

    was the emergence of self-replicating molecular systems consisting of peptide catalysts and oligonucleotide templates.

    k –> Doubtless, as a result of all the fine tuning and setting up for success above: Intelligent design works

    In addition, a smaller but significant number of molecules with alternative compositions also survived due to imprecise reproduction and translation of templates providing variability for further evolution.

    l –> All on a model that has begged the key questions

In a more general context, the model describes not only peptide-oligonucleotide molecular systems, but any molecular system containing two types of polymer molecules: one of which serves as templates and the other as catalysts. The presented coevolutionary system suggests a possible direction towards finding the origin of molecular functionality in a prebiotic environment.

    m –> By begging the question. >>
    ___________

    See the problem?
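The combinatorial figures in note (c) above can be checked quickly in log10 form (the raw numbers are too large to print directly); note the two spaces are of different, not identical, sizes:

```python
from math import log10

# Note (c): a 300-monomer protein (20 amino-acid states per position)
# and a 900-monomer nucleic-acid chain (4 states per position).
protein_exp = 300 * log10(20)   # log10(20^300)
nucleic_exp = 900 * log10(4)    # log10(4^900)
print(round(protein_exp), round(nucleic_exp))  # 390 542
```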

    I suggest instead a comparison of the materials here.

    GEM of TKI

  64. Dr Liddle:

By the time you are on an island of function, you are already looking at micro evo. Starting there with a functioning body plan begs the exact central questions of macroevo.

    I do not think we will agree on this matter, as I have stated it over and over, but you keep on wanting to start with “assume a can opener,” when that is precisely what you cannot assume, to open the can on the desert island.

    We can simply note the deadlock, and note the reasons why I point to islands of function in beyond astronomically large config spaces as the critical unanswered issue faced by macro evo theories, starting with the first body plan, and going on to the multicellular body plans across the various kingdoms of life.

    I simply note for record that it is established that intelligence is empirically known to be capable of making objects exhibiting FSCO/I and it is the only such known entity. Similarly, Venter et al have given proof of concept that engineered life forms are possible, specifically.

So, I can rest comfortably on the conclusion that the design inference is the superior explanation for the FSCO/I we see in life forms.

Onlookers can see for themselves that the crucial issue is that one has to get to an island of function before there can be hill-climbing by small variations within the island. They can also see that there is simply no empirically well warranted evolutionary materialist account of the origin of such body plans based on being on those islands of function for the implied information. In addition, they can see that the problem is basically assumed away.

    That is enough for the onlooker to see what is the true balance on the merits.

    And that after 150 years of trying, and billions in expenditure all around the world.

So, the matter is not deadlocked, nor is there a want of warrant for a reasonable conclusion that design is the best scientific explanation on the table for the origin of life and major body plans. (Of course, that is in a context where a molecular nanotech lab several generations beyond Venter would be a sufficient causal force for what we see. This is not the same as an inference to a designer within or beyond the cosmos as we observe it. However, once the finetuning of the observed cosmos for C-chemistry cell based life is also in the stakes, it tends to support the worldview level conclusion that a designer beyond the cosmos is a reasonable position to hold.)

    GEM of TKI

  65. KF

    See the problem

    Not really, unless you were expecting this paper to provide a complete explanation of everything. It is, like most good science, an interesting piece of work that opens up many new avenues for investigation and begs many questions, not least: are their assumptions valid?

    j –> This is chemistry on the computer, not in the real world

    If they simulated monkeys at typewriters, and the simulation failed to produce Shakespeare, you would cite it as evidence in support of your own position.

Simulation can be a very powerful tool but the fact that you can simulate physics, and make a simulated ball bounce in a realistic way, does not imply that the ball's behavior can only be explained in terms of an intelligent agent.

The whole point of a model of reality is that it is designed to accurately model reality. Pointing out that the model was designed is either missing the point, or illustrating a failure to understand the methods being employed.

    Simulations of weather are not evidence that God caused storms.

    k –> Doubtless, as a result of all the fine tuning and setting up for success above: Intelligent design works

OR, that their model was a good model and they found something interesting that may be possible in real chemistry. If you believe they cheated, that they got their result by fiddling the numbers, then don’t just make the accusation – provide some evidence!

The authors of the paper made several assumptions, as you pointed out. It’s hard not to when doing exploratory research like this. Maybe their assumptions were wrong, but simply pointing out that they made assumptions contributes nothing unless you can give some solid reasons why their assumptions are invalid.

    This is a route to demonstrating that their work is not a good account: Demonstrate why their assumptions are invalid, don’t just assume, demonstrate why they are wrong.

    This is one of the ways in which science advances.

  66. but you keep on wanting to start with “assume a can opener,” when that is precisely what you cannot assume, to open the can on the desert island.

    So you believe that starting with the assumption ‘self replicating life exists’ is unwarranted!

They can also see that there is simply no empirically well warranted evolutionary materialist account of the origin of such body plans based on being on those islands of function for the implied information.

You are assuming that different body plans exist on isolated islands of function. The problem is that, as EL has indicated, this may not be a valid assumption.

67.

Dr Bot, what general areas of missing knowledge remain in the pathway to demonstrating abiogenesis? And, has this pathway already been sufficiently explored so as to be able to confidently publish books and papers, as well as producing endless public media accounts promoting the idea? Has the space become well enough understood that belief/non-belief in the paradigm can be held as ransom for anyone who might wish to pursue a doctoral degree in the field?

    Would you care to enumerate for us the demonstrations of evidence that lead to such confidence?

    - – - – - –

In other words, it’s a freakin joke to tell KF to stop making assumptions and to get down to demonstrations instead. His point is that the materialists haven’t demonstrated squat – except an incredible capacity to delude themselves past the details.

  68. Would you care to enumerate for us the demonstrations of evidence that lead to such confidence?

Nope, and as I already said many times, I’m an abiogenesis skeptic, but I’m also a rational empiricist who doesn’t have any ideological objection to a method of creation where life emerges as a result of the universe’s design – I’m waiting with interest to see if the OOL research ever bears fruit.

    materialists haven’t demonstrated squat

    Apart from all the published research, but I’m pretty certain you consider that irrelevant :)

69.

    Hardly Bot, I think the pursuit of a purely material explanation of Life is a valid scientific endeavor, but what that endeavor cannot do (as it most surely has done) is act as if it has a record of success so overwhelming that any other paradigms should be either forgotten or impugned as a matter of professional discipline.

    - – - – - –

    KF has every right to point out the unsupported assumptions in this paper, and every other paper like it.

  70. Dr Bot:

    Please, stop projecting assumptions onto me — can you show the step by step continuity between say an amoeba and a lobster?

    Similarly, between say a “Hello World” program and an operating system?

    Between a sentence like “See Spot run” and a computing science textbook?

We can fairly easily show that in the space of configs for any reasonably complex cluster of digital entities [and here note DNA is digital] the vast majority of the space will be taken by nonsense configs, and the meaningful ones will be deeply isolated. Protein fold domains – proteins being essentially 20-state-per-element systems – are also deeply isolated.
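A quick back-of-envelope calculation makes the scale concrete (the chain length and the search budget below are illustrative assumptions, not figures from this thread):

```python
# Back-of-envelope size of a config space for a 20-state-per-element chain.
# LENGTH and the search budget are illustrative assumptions for this sketch.
from math import log10

STATES = 20
LENGTH = 150
space = STATES ** LENGTH                 # number of possible sequences

# Even a very generous blind-search budget samples a vanishing fraction.
tries = 10 ** 45                         # illustrative number of trials
fraction = tries / space
print(f"space ~ 10^{log10(space):.0f}, fraction searched ~ 10^{log10(fraction):.0f}")
```

With these numbers the space is of order 10^195, so even 10^45 trials sample only about one part in 10^150 of it.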

    And, just for capping off, we can look at the observation of Gould on the nature of fossils as collected:

. . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [The Structure of Evolutionary Theory (2002), p. 752.]

. . . . The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants.” [p. 753.]

. . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [p. 773.]

    In short, islands of function are what we observe.

The smoothly shaded off tree of life – with conjectured in-fills between the observed discrete points – is what is conjectural.

    Precisely the opposite to the impression commonly communicated.

    GEM of TKI

  71. Well, some algorithms self-replicate with variance, which is what a GA is.

    Um, no. Sheesh.

    I recently re-posted a number of links to introductory free online material on GA’s.

    HERE

  72. Elizabeth Liddle @27:

    It’s a bit like the game of hangman (do kids play hangman in the US)?

    You start by guessing letters at random, but once you’ve got a couple – a vowel, say, in a particular spot, then the search space starts to collapse. Eventually the number of possible solutions reduces to perhaps one or two words.

    As for your last point – well the game of hangman is a case in point. So are GAs. Stochastic processes can be very good at finding winners when part of an evolutionary algorithm.

    We call it WEASEL. As in, how many generations does it take our program to find “Methinks it is like a weasel.”

And that may be how GA’s work, or WEASEL programs, or hangman, but it’s not how evolution works.

    You should know better. Shame.

  73. Elizabeth Liddle @38:
    Evolution isn’t looking for a needle in a haystack…

    So true. It’s looking for numerous sort-of-needles in numerous sort-of-haystacks. Or rather, it doesn’t really quite know what it is looking for, or where to look for it. Or even better, it’s not really looking for anything at all.

    So why people think it’s a search, or can be modeled as a search, is beyond me.

    Good point Elizabeth.

  74. DrBot @44:

    evolution relies on the existence of self replicators (something that has been empirically observed to exist)

    Actually, it is not the case that self-replicators have been observed to exist.

    When a woman gives birth she is not giving birth to another self of herself. Nor is she giving birth to another self of her husband. Neither she nor her husband “self-replicated.”

    She is giving birth to a new and distinct self.

  75. Please, stop projecting assumptions onto me —

    You are making assumptions, there is nothing being projected – although I appreciate that you sincerely believe that your assumptions are actually facts.

    can you show the step by step continuity between say an amoeba and a lobster?

    Why do I have to do this?

Similarly, between say a “Hello World” program and an operating system?

    WHAT?

    Between a sentence like “See Spot run” and a computing science textbook?

    You want me to show you how things that don’t evolve, evolve … KF, sorry to be blunt but when you start demanding that people show how one can get from ‘hello world’ to an OS as some kind of proof of evolution all it illustrates is your lack of understanding of the topic – despite the numerous correctives I’ve supplied on these matters.

Not all search spaces are searchable by genetic algorithm. Many systems, like computer software, are brittle – they break unless you tinker with them in a highly structured way – and in this way they are strikingly different from biology. You can’t evolve everything, so selecting examples of things that are not easily evolvable does not disprove evolution.

    In short, islands of function are what we observe.

    Hmm, I wonder what the full quotes from Gould in their context look like ;)

    What you are referring to is the fossil record – a sparse record collected over millions of years. If you looked back at the history of computing with the same granularity you would see that we went from the abacus to the desktop PC in a single step!

  76. Mung:

Generally speaking, cells self-replicate or may specialise by a regulatory process [methylation and all that]. Organisms reproduce, and in some cases may be cloned.

Getting to the capacity to replicate is a challenge, and getting an embryologically sound body development plan/algorithm is a challenge.

Until these challenges are properly faced and acknowledged, there will be no progress.

And, we need to note that Weasel is targeted search, based on Hamming distance and reward of increments in proximity to target (the Hamming Oracle). GA’s that are based on convenient genomes that specify nice trendy fitness functions, with slopes amenable to hill climbing, are again products of intelligent design, and assume in effect that one is already on an island of function. In the vast seas of non-functional configs, there will be no nice trend.

    I find that there are a lot of just so stories and a lot of seriously begged questions on evolutionary materialist models of origin of life and of body plans.

Finally, as a sampler of what adaptation can do, let us think about the dog-wolf kind. Consider: wolves and dogs of all varieties are now recognised as a single species. And, let us realise just how arbitrary the line “species” can be, e.g. US-style Elks and Red deer interbreed freely in New Zealand where both were stocked. And we must remember the circumpolar gull complex where there is smooth variation around the pole – all within an obvious island of function and with a given body plan. I gather the ring is sufficiently broad that the two gulls in W Europe do not (normally?) interbreed, though the gradients in the population are said to be smooth. And of course in the Galapagos, the bird varieties apparently can breed across species lines, quite successfully. But, what we see – as opposed to what may be speculated – are limits, generally held to be about the level of the family.

    GEM of TKI

  77. Actually, it is not the case that self-replicators have been observed to exist.

Good point! Perfect self replicators aren’t often observed – though I’m not sure they are entirely unobserved – but imperfect self replicators are all around us, and are exactly what is needed for evolution – perfect self replication doesn’t generate variety, so nothing evolves ;)

  78. Dr BOT:

    Your label and dismiss tactics do not impress me, nor do they impress the astute onlooker.

    Observe your key admission:

Many systems, like computer software, are brittle – they break unless you tinker with them in a highly structured way – and in this way they are strikingly different from biology. You can’t evolve everything, so selecting examples of things that are not easily evolvable does not disprove evolution.

The highlighted shows your own question-begging assumption, underlined by the way you dismissed my challenge to show the link between a unicellular organism and an arthropod. It is held that the one evolved into the other, and similarly that several dozen top tier body plans evolved from the unicellular world. The actual observed evidence – which, as Gould, Patterson and others have admitted, is DOMINANT in the fossil record – is of sudden appearance, stasis and disappearance and/or continuation into the modern era. This is the actual observed pattern, hence terms like the Cambrian life revolution.

    (BTW, I am not amused at your suggestion of incompetent or dishonest quotation out of context. Do you not realise that Gould et al developed their alternative model, punctuated equilibria, because they wanted a theory that better fitted that dominant pattern? That is a matter of well known history of biology. I suggest you read the review in the just linked, shortly after the clip.)

Let me do a bit more quoting from Gould to see that the above is in the context of his wider work and expresses a point noted all the way back to Darwin, who recognised that the actual pattern of the fossil record was not supportive of his theory. He hoped this would change with further explorations, but in fact the further work has shown that the dominant pattern evident from the beginning is real [a natural lawlike regularity is often evident from the earliest observations and then persists in the face of onward investigations . . . ]:

“The absence of fossil evidence for intermediary stages between major transitions in organic design, indeed our inability, even in our imagination, to construct functional intermediates in many cases, has been a persistent and nagging problem for gradualistic accounts of evolution.” [Stephen Jay Gould (Professor of Geology and Paleontology, Harvard University), 'Is a new and general theory of evolution emerging?' Paleobiology, vol. 6(1), January 1980, p. 127.]

“All paleontologists know that the fossil record contains precious little in the way of intermediate forms; transitions between the major groups are characteristically abrupt.” [Stephen Jay Gould, 'The return of hopeful monsters'. Natural History, vol. LXXXVI(6), June-July 1977, p. 24.]

    “The extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology. The evolutionary trees that adorn our textbooks have data only at the tips and nodes of their branches; the rest is inference, however reasonable, not the evidence of fossils. Yet Darwin was so wedded to gradualism that he wagered his entire theory on a denial of this literal record:

The geological record is extremely imperfect and this fact will to a large extent explain why we do not find intermediate varieties, connecting together all the extinct and existing forms of life by the finest graduated steps [ . . . ] He who rejects these views on the nature of the geological record will rightly reject my whole theory. [Cf. Origin, Ch 10, "Summary of the preceding and present Chapters," also see similar remarks in Chs 6 and 9.]

    Darwin’s argument still persists as the favored escape of most paleontologists from the embarrassment of a record that seems to show so little of evolution. In exposing its cultural and methodological roots, I wish in no way to impugn the potential validity of gradualism (for all general views have similar roots). I wish only to point out that it was never “seen” in the rocks.

Paleontologists have paid an exorbitant price for Darwin’s argument. We fancy ourselves as the only true students of life’s history, yet to preserve our favored account of evolution by natural selection we view our data as so bad that we never see the very process we profess to study.” [Stephen Jay Gould, 'Evolution's erratic pace'. Natural History, vol. LXXXVI(5), May 1977, p. 14.] [HT: Answers.com]

    The evidence is that micro-variations and adaptations within an island of function are real. Beyond that level, all fades into just so stories and speculations.

    The root reason for this is the point of contact between biology and information systems. Namely, that in the heart of the cell — embryogenesis develops the full organism from a single living cell [lobster zygote to lobster . . . ] — there is an information system.

The brittleness of info systems against too much random variation is precisely the in-common reason why biological systems and software show islands of function. So, yes, we see adaptation and variation, but we also see boundaries; to cross those, an intelligently directed input is needed to traverse a sea of non-functional configurations.

The answer to the challenge to go from an amoeba-like organism to a lobster or the like is to observe how it happens every day around us: embryogenesis is an algorithmic unfolding that is based on duplication, controlled specialisation and formation of a tightly integrated structure based on specialised cells, tissues, organs and systems forming a coherent body plan. The level of information involved to do that is plainly of the order of 10 – 100+ million bases. We observe this process in action routinely, day by day.

Now, to move from the hypothetical universal common unicellular ancestor to a lobster or the equivalent in the Cambrian era would require the origin of that program. That would more or less have to be by a process of duplication and random variation, incrementally rewarded by differential success. This implies a smoothly varying continental structure to the config space for the related information systems, one traversable by a tree of life pattern.

    There is precisely zero observational evidence for such a structure to the config space of genomes, and every evidence that even short mutations are overwhelmingly likely to be deleterious, and even the overwhelming majority of “successful” adaptations work by breaking an existing genetic capacity, not by spontaneously creating one out of chance variation and success.

    As outlined above, islands of function are a commonplace observation for functional configs in large spaces of possibilities. And indeed the deep isolation of protein fold domains are a capital example of this. Such proteins need to take up very particular AA sequences, with some room for variation, but not a lot, to fold and function properly. If you look at how tRNA needs to fold into the cloverleaf then the L-arm, you will see again just how constrained variations will be for function to emerge [the arms have to match for the fold to work]. The tRNA is of course a key component of protein manufacture, which is again a step by step algorithmic process, in a nanotech factory in the cell.

    We know that intelligence can get us to islands of function, per massive observation. In addition the needle in the haystack analysis warns us that within very short order, random walks are not going to be credibly able to do the same, within the probability resources of our solar system or the observed cosmos as a whole.

    There is plenty of reason to infer to design as the best explanation of what we see in life forms, absent a priori imposition of materialism or what is tantamount to materialism. But also, we have ample evidence that such imposition is a material issue.

    I suggest you work through the IOSE units on OOL and OO Body Plans, to see the force of this set of issues.

    GEM of TKI

79. PS: there are now something like 1/4 million plus fossil species identified, on millions of fossils collected and billions observed. The body forms of something like 1/2 or more of all currently living known forms can be found in the fossils collected. In short, we have good reason to infer to a good cross sectional sample. The dominant pattern of that cross section is sudden appearance without reasonable antecedents, stasis of form, and disappearance and/or continuity into the current era. We OBSERVE islands of function but assume a branching tree of life with conjectural ancestral species, the notoriously and all too evidently persistently missing links.

80. Elizabeth Liddle

    Nothing Liz, feel free to ignore me.

    Ignore me, in the same way in which you ignored the onset of recorded information in the thread yesterday.

    The real details get messy… and they’re harder to scrub clean with that Darwinian Dishsoap you’re selling.

    I have no wish to ignore you, Upright BiPed, and if I inadvertently missed a previous post, I apologise. I often follow a link from the “latest post” list on the front page, and fail to notice that there have been a number of intervening posts.

    I’d be grateful if you could link, or give a post reference to the posts in question.

    Cheers

    Lizzie (not Liz, usually, I’ve never felt up to being a Liz, probably because of Liz Taylor, whom I don’t resemble much :))

  81. F/N A: It seems we need to point to Orgel’s key contrast again:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

The vNSR based self replication of metabolising, living forms is utterly different from the process of forming a crystal based on molecular structure and forces of interaction. And the order of a crystal is utterly distinct from the organisation of a living form, and again from the randomness of the crystals in a granite counter top or in a tar.

    F/N B: I need to underscore why the root of the tree of life is so crucial.

    On the evo mat frame, this is claimed to be the result of chance circumstances and blind mechanical forces in some warm pond or undersea volcanic vent or a comet’s dirty snowball, etc.

But, this is incredible, and as the abstract marked up already shows, there is an utter failure – per needles in haystacks – to credibly account, on chance circumstances [the other main source of high contingency], for the complex functionally specific organisation and information involved.

    By direct contrast, we know that FSCO/I is routinely produced by intelligence. And Venter has provided proof of concept.

So, it is highly reasonable – indeed, the inference to the best empirically supported explanation – to conclude that life’s origin is best explained on design. Choice contingency. (Just as such design best explains the way the observed cosmos is so set to a fine-tuned operating point that supports C-chemistry, cell based life.)

    Design is on the table.

    And when we turn to origin of body plans (which, recall, unfold from a single cell by an algorithmic integrated process of development), we see that we again run into: the origin of FSCO/I. A third inference to design is warranted.

    Design is a coherent explanation and one that accounts for the empirical fact of FSCO/I that is not accounted for on any other grounds.

    And, the same issue of functionally specific and complex contingent organisation is the thread running through the three contexts.

    GEM of TKI

82. Elizabeth Liddle

    Mung @ 72

    Elizabeth Liddle @27:

    It’s a bit like the game of hangman (do kids play hangman in the US)?

    You start by guessing letters at random, but once you’ve got a couple – a vowel, say, in a particular spot, then the search space starts to collapse. Eventually the number of possible solutions reduces to perhaps one or two words.

    As for your last point – well the game of hangman is a case in point. So are GAs. Stochastic processes can be very good at finding winners when part of an evolutionary algorithm.

    We call it WEASEL. As in, how many generations does it take our program to find “Methinks it is like a weasel.”

And that may be how GA’s work, or WEASEL programs, or hangman, but it’s not how evolution works.

    You should know better. Shame.

    Yes, I should have cited WEASEL.

No; evolution, and indeed GAs, differ from WEASEL programs, but only in the (important, however) sense that neither GAs nor evolution is “searching” for a single solution, whereas WEASEL (and hangman) is.

Indeed, in WEASEL, the problem is stated in terms of the solution: “find the closest match to the phrase: methinks it is like a weasel”.

    So it’s completely convergent search.
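A minimal WEASEL-style loop makes that convergence concrete (a sketch, not Dawkins’ original program; population size and mutation rate are illustrative assumptions):

```python
# WEASEL-style convergent search (illustrative sketch): a fixed target
# phrase, random mutation, and selection of the child closest to it.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Hamming-style closeness: count positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    brood = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(brood, key=score)   # the "Hamming oracle" rewards proximity
print(generation, parent)
```

Because every generation is scored against the known target, the run homes in on that one phrase; this is exactly the single-solution convergence being contrasted with GAs.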

    In the case of GAs, the search is not, typically convergent at all (if it were, we wouldn’t bother with a GA). The problem may be: find the antenna configuration that produces the highest SNR.

And my point is that the GA, as in evolution, does not have to search every single possible configuration to find every possible solution (and there may be many that it does not find). Instead, it searches hierarchically, and when any part-solution to the problem is found, the search space is then reduced to solutions that build on that part-solution, and so on.

    That was my point. I hope I have made it a little more clearly now.

    Mung @73

    Elizabeth Liddle @38:
    Evolution isn’t looking for a needle in a haystack…

    So true. It’s looking for numerous sort-of-needles in numerous sort-of-haystacks. Or rather, it doesn’t really quite know what it is looking for, or where to look for it. Or even better, it’s not really looking for anything at all.

    So why people think it’s a search, or can be modeled as a search, is beyond me.

    Good point Elizabeth.

Well, “search” is of course a metaphor, and of course you are correct (as I endorse above) that in evolution (but also in GAs) there is no single solution. However, I think the metaphor works reasonably well, as long as we think in terms of looking for a solution to a problem, rather than looking for a single hidden object.

    The important point, though, is that the number of possible combinations is irrelevant to the issue of whether GAs or evolutionary algorithms can produce solutions to the problem of “persistence” in a given environment, because only an increasingly small series of subsets of “promising” solutions is explored.

  83. Dr Liddle:

    Pardon a repeated emphasis: the problem is not to move around within the body plan island of function. It is to arrive there.

It is not that the fittest survive [by definition, verging on circularity], but that they arrive, that has to be explained – when that “fitness” means an embryologically feasible, metabolising, vNSR info based, self replicating, cell based reproducing entity.

    GEM of TKI

84. Elizabeth Liddle

    First: yes “the fittest survive” doesn’t just border on circularity, it is circular. That’s why I think it’s an unfortunate formulation, because it casts as a hypothesis something that is self-evident.

    Secondly: I’m still working on a response to your other thread. I’ll try to get it up today.

  85. Lizzie:
    There is nothing in Darwin’s theory that says that a minimal “seed” organism wasn’t intelligently designed, and designed in such a way that subsequent diversification would inevitably follow by Darwinian principles.

    So yes, Life could have been Designed to evolve. But that is compatible with Darwin’s theory.

    His theory was on the Origin of Species, not the Origin of Life. He specifically says so at the very end of the book.

That is flat out wrong – teleology is not allowed in Darwin’s theory nor in the current theory of evolution.

Read Darwin’s “On the Origin of Species…” and see how many times he cites “chance” and how many times he cites a designing agency.

    Geez his whole point about natural selection was it is a designer mimic.

Also, as Dawkins and others have said, a designed or created biology means we are looking at a totally different type of biology.

86. Elizabeth Liddle

Well, I haven’t been clear: I didn’t say that Darwin’s theory was teleological – it wasn’t. But Darwin himself did not rule out a Creator-breathed seed, as is clear from the book’s final paragraph.

    His point was (in essence) that from there on, natural selection was a designer-mimic.

    So the idea that the starting point was designed (with the intention that it would then evolve into many complex and diverse lifeforms) is not anti-Darwin.

    I don’t think it’s correct, though – I think Darwin’s principles can be applied all the way back to self-replicating units that we would hesitate to call “alive”.

Others differ. But the application of Darwinian principles from a given point onwards doesn’t depend on where you think that initial point came from. That’s what we do with GAs, after all – we “design” a system in which “critters” will evolve to find a solution to a problem we want to solve, saving us the trouble of designing a solution ourselves.

  87. Dr Liddle:

    I think, rather, the point is that GA’s are all very carefully designed by knowledgeable and intelligent designers, to search strictly delimited domains of scope suited to search, using algorithms that reward progress towards an implicit goal on some figure of merit [hill climbing].

    Let us clip Wiki, testifying against interest, to make the discussion a bit more specific:

    In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached . . . .

    A typical genetic algorithm requires:

    a genetic representation of the solution domain,
    a fitness function to evaluate the solution domain.

    A standard representation of the solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming.

    The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent . . . . Once we have the genetic representation and the fitness function defined, GA proceeds to initialize a population of solutions randomly, then improve it through repetitive application of mutation, crossover, inversion and selection operators.

    In short, GAs are optimisation procedures, which makes them inherently goal-directed. They depend on a mapping between a variable string or similar structure and a solution domain that has a fitness function giving values to points in the domain. Optimisation is by incremental hill climbing on casting out rings of random samples, i.e. it depends on having trends that lead you uphill to “good” solutions.
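    The generic loop in the Wikipedia passage quoted above can be sketched as follows. This is a minimal illustration only: the toy “count the 1 bits” fitness function and every parameter value are made up for the example, not taken from any GA in the discussion.

```python
# A minimal sketch of the generic GA loop quoted above, assuming a toy
# "count the 1 bits" fitness function; every parameter value here is
# illustrative, not taken from any particular GA in the discussion.
import random

random.seed(1)  # for reproducibility only

GENOME_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(genome):
    # "Problem dependent", as the quote says; here, the number of 1 bits.
    return sum(genome)

def select(population):
    # Stochastic, fitness-weighted choice of one parent (+1 avoids zero weights).
    weights = [fitness(g) + 1 for g in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(a, b):
    # Simple one-point crossover on fixed-length bit strings.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

# Initialise a population of random candidate solutions, then iterate.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    if max(fitness(g) for g in population) == GENOME_LEN:
        break  # "satisfactory fitness level" reached
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))
```

    Note that every step here operates on candidates that already score on the fitness function, which is the “island of function” point at issue in the thread.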

    In short, they are ALWAYS within islands of function [start near the goal in a zone where feedback to random sampling of points with fitness values will tell you where to go], and are thus irrelevant to how we get to shorelines of such islands in the midst of vast seas of non-function.

    This is the key issue. An inherently goal directed search, within a set up zone of interest.

    But is the implicit assumption that getting to an island of function is an easy problem correct?

    Not at all.

    For, for first life, credibly we are talking about 100 – 1,000+ kbits of functional information.

    At just 100 k bits, we would be dealing with a config space of 9.99 * 10^30,102, to try to find islands of function in it.

    To be credible, we have to have enough scope of search that it is reasonable that we would hit on an island. (Recall my response to Dr BOT above on the assertion that I am just assuming that functional configs come in islands.)

    Our observed universe has about 10^80 atoms, and these will go through about 10^150 Planck time quantum states. There is no way that any search of the possibility space for relevant molecules of life could sample a fraction of the space appreciably different from zero. That is, an observed cosmos scope search rounds down to zero.
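    The arithmetic behind these figures can be checked directly. The atom count and the number of Planck-time quantum states are the commenter’s own estimates, reproduced as given, not independently established values:

```python
# Checking the arithmetic above; the atom count and the number of
# Planck-time quantum states are the commenter's own figures, reproduced
# here as given, not independently established values.
from math import log10

bits = 100_000                   # "100 k bits" of functional information
log_configs = bits * log10(2)    # log10 of 2**100000
print(f"config space ~ 10^{log_configs:.0f}")   # ~ 10^30103, i.e. 9.99 * 10^30102

atoms = 10**80                   # atoms in the observed universe (as stated)
states = 10**150                 # Planck-time quantum states (as stated)
log_searchable = log10(atoms) + log10(states)   # upper bound on samples: 10^230
log_fraction = log_searchable - log_configs
print(f"searchable fraction ~ 10^{log_fraction:.0f}")
```

    The searchable fraction works out to roughly 10^-29873, which is the sense in which the comment says an observed-cosmos-scope search “rounds down to zero”.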

    That is, the key challenge is to get first to shores of function.

    A similar challenge holds for novel body plans, which have to innovate on the order of 10 – 100 Mbits of fresh, integrated functional information.

    In that context to focus on how — having arrived at an island of function by intelligent design — a GA is able to hill climb through modest casting out of rings of fresh samples and moving uphill, is to beg the question.

    But then, Johnson — replying to Lewontin — aptly observed:

    For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    So, to really see what is going on, we have to look at the implicit assumptions. In this case, we are being invited to not notice the strings and curtains, paying attention only to the puppets onstage. Yes, GA’s come on stage on an island of function and are able to move towards the uphill direction that is built in as the way to make progress.

    But all of that is set up by someone offstage. Just as, when we hear a song on an MP3 player, it is not coming from the machine, but from the person who recorded and loaded the song in an instrument designed to play it.

    GEM of TKI

  88. Elizabeth Liddle

    Yes, human-designed GAs are usually designed to solve a very specific problem (so the fitness function is what is designed, chiefly) for a human purpose, usually the solution to a specific problem, usually in a fairly low-dimensional fitness landscape.

    That doesn’t mean we can’t extrapolate to systems where the problem is intrinsic to the environment, and might be written as “how to persist in this environment”, and the fitness landscape is intrinsically high-dimensioned.

    Clearly, in that general condition, the solution space is much larger, and my point is that evolutionary processes are successful because they do not search the entire combinatorial space, but the solution space only.

  89. kairosfocus,

    Thanks so much for your post @87. So timely.

    Once again MathGrrl’s claims about ev are shown to be utterly without merit.

    A typical genetic algorithm requires:

    a genetic representation of the solution domain,

    a fitness function to evaluate the solution domain.

    Also worthy of being repeated, from your @78:

    There is precisely zero observational evidence for such a structure to the config space of genomes, and every evidence that even short mutations are overwhelmingly likely to be deleterious, and even the overwhelming majority of “successful” adaptations work by breaking an existing genetic capacity, not by spontaneously creating one out of chance variation and success.

  90. Clearly, in that general condition, the solution space is much larger, and my point is that evolutionary processes are successful because they do not search the entire combinatorial space, but the solution space only.

    A claim with no basis in fact.

  91. Elizabeth Liddle

    It’s not supposed to be based in fact, it’s based in logic.

    But supported by fact.

    I’m not really sure why there is an issue here – most people seem to accept, for instance, that “microevolution” occurs (peppered moths, guppies, beaks of Galapagos finches etc), right?

    Or perhaps you don’t?

    In those instances, the population doesn’t have to “search” every possible combination of every allele, or every possible new allele, in order to adapt quickly to new environmental conditions (a change in bark colour; a change in predator prevalence or stream-bed properties; a change in prevalent seed sizes), because once embarked on a minimally advantageous “solution”, only subsets of that “solution space” are “searched”.

    I didn’t even expect this to be controversial. Perhaps you thought I meant something I didn’t.

  92. F/N: Re Dr BOT at 3:

    Monkeys at typewriters are just monkeys at typewriters. I don’t know any biologists who would take this claim seriously, it just indicates a total failure to understand the basics of evolution.

    Actually, the commonest use of this illustration in recent decades was to promote the idea that even very unlikely events were “inevitable” once there were enough time and resources to throw at them. And, this was often argued by — you guessed it — promoters of evolutionary materialism.

    It is precisely because of the success of the rebuttal on the actual implications that we see this backing away and denial. (Sort of like the pretence nowadays that biologists did not speak about junk DNA, or don’t call themselves Darwinists, or don’t use terms like macro-evolution.)

    Of course, the root problem is that the challenge begins to bite very fast: 125 bytes of info is very short for any meaningful control situation. And yet the resources of the observable cosmos are wholly inadequate to search a space of possibilities to any extent significantly different from zero, if we have 1,000 bits worth of info.

    And of course first life is looking at about 100 – 1,000 k bits worth of info, and novel body plans are looking at 10 – 100 mn bits.

    Nor are you in a position to show empirically that the general pattern of functional information being in isolated islands, is broken in this case. Indeed, we know that protein fold domains — sequences that fold correctly to work — are deeply isolated in AA sequence space. (And more, cf above.)

    GEM of TKI

  93. I’m not really sure why there is an issue here – most people seem to accept, for instance, that “microevolution” occurs (peppered moths, guppies, beaks of Galapagos finches etc), right?

    Or perhaps you don’t?

    1. I define micro-evolution as changes in the frequency of an allele in a population. Is that an acceptable definition?

    2. What do changing frequencies of alleles in populations have to tell us about how those alleles arose in the first place?

    3. IOW, before something can affect the frequency of an allele in a population, the allele must first exist.

    Call it a search for new alleles.

    I’m not really sure why there is an issue here

    Perhaps because you are talking about one thing, and the rest of us are talking about something completely different.

    The search problem continues to exist, and selection can’t help.

  94. Elizabeth:

    Clearly, in that general condition, the solution space is much larger, and my point is that evolutionary processes are successful because they do not search the entire combinatorial space, but the solution space only.

    Mung:

    A claim with no basis in fact.

    Elizabeth:

    It’s not supposed to be based in fact, it’s based in logic. But supported by fact.

    So let’s start from scratch, if you will.

    By combinatorial space you mean…

    By solution space you mean…

    Your basis for claiming that the solution space is much larger is…

    Is it safe to say that the solution space is a subset of the combinatorial space?

    What can we call the space that is within the combinatorial space but which is not within the solution space?

    How do we know what is within the solution space and what is not?

    How do we know the size of the solution space?

    How is it that evolution “knows” not to step outside the solution space and into the non-solution space?

    If evolution can’t tell the two apart, how is it that it manages to stay within a space that has boundaries of which it is completely unaware?

    Take your time.

  95. Dr Liddle:

    I must first say that I appreciate your straightforwardness in your discussion. That makes for real dialogue instead of having to try to rebut cleverly distractive or dismissive talking points or outright abuse.

    In that context, I will pause and look at your remark in 88, inserting my markup on points:

    ______________

    >> Yes, human-designed GAs are usually designed to solve a very specific problem (so the fitness function is what is designed, chiefly) for a human purpose, usually the solution to a specific problem, usually in a fairly low-dimensional fitness landscape.

    a –> Thank you for this frank admission

    That doesn’t mean we can’t extrapolate

    b –> This is of course our old friend, the argument by analogy. Nature is not writing code for a GA or running it on a computer.

    c –> Now, oddly enough, I have a lot of respect for well-developed analogies; not least as analogous thinking is key to inductive reasoning, especially the sort of argument on a case by case basis where one reasons by material family resemblance.

    d –> My concern in this case is that the analogies are all too apt in one key sense, and the dis-analogy is precisely tied to the search-space implications of scaling up the size of the haystack, in light of the config space of just 1,000 bits or 125 bytes or 143 ASCII characters.

    e –> So, scaling up by extrapolation may in this case have a qualitative effect, as Abel pointed out in his recent paper on a universal plausibility bound; as has been linked already.

    to systems where the problem is intrinsic to the environment, and might be written as “how to persist in this environment”,

    f –> The problem I have here is how to arrive in the environment, rather than how to persist in it once arrived.

    and the fitness landscape is intrinsically high-dimensioned.

    g –> If you mean that the possibility space is very large, and the regions of interest are very small by comparison, that is key.

    Clearly, in that general condition, the solution space is much larger,

    h –> The space of possibilities from which a solution is to be found may indeed be very large, but the problem is that that then makes solution sets — islands of complex and specific function — very isolated indeed in large seas of non-functional configurations.

    and my point is that evolutionary processes are successful because they do not search the entire combinatorial space, but the solution space only.

    i –> You are here implying that the “solution space” is a continent of function, and that one need not concern oneself about the wider set of possibilities.

    j –> But this assumes a start-point within the island of function, and implies so large a connected region of solutions that fit on a nice trend pattern that you dismiss the issue of getting to the island’s shoreline.

    k –> That is precisely what there is no right to assume; indeed, it boils down to an implicit acknowledgement that the searches you are looking at are micro-evolutionary, within an island of existing function.

    l –> The real problem on the table is how to get to such islands of function, not how to hill climb within them.

    m –> I have already pointed out above some of the reason why we have good warrant for understanding that complex function of information based systems, technological, linguistic and biological, will show the pattern of relatively isolated islands of function in large configuration spaces that make the topology more or less like a Pacific ocean with isolated islands.

    n –> For instance, to get from a Hello World to an operating system, one cannot step in small, functional increments, but must transform one’s whole design of the software system, and must create whole algorithms, data structures and coding patterns. There are principles in common — why Hello World is a typical “first program” — but there are entire courses of study between a Hello World and designing one’s own operating system.

    o –> Similarly, “See Spot run” is not going to grow, in small steps under the spontaneous control of random-walk-based trial and error rooted in duplicate-and-vary until function emerges and somehow integrates into a coherent narrative, into something as long as this blog post; not within the resources of the observed cosmos.

    p –> Similarly, in an organism, the code for a typical protein of 300 AA is already beyond the search resources of the cosmos, especially if one has to cross fold domains to get a new function.

    q –> Add in the regulatory requirements to govern when across the lifespan or in response to what external stress the protein must be expressed and transported, and the complexity has exploded.

    r –> Going on, complex organisms must be embryologically feasible, growing from a cell to an integrated organism with distinct cell types, tissues in their proper places, organs, and integrated systems. Otherwise it cannot function, whether in the embryo or as a living and reproducing organism.

    s –> And to transform a single celled organism to this, is a huge leap in information requisites well beyond the search capacity of the cosmos again. With no observed sign of an easy small-step continuity from the one to the other. All has been inferred based on the requisites of Darwinism, in a context controlled by Lewontin’s a priori materialism held to be the defining essence of “science.”

    t –> And, this assumes we have the first cluster of unicellular organisms. But to cross the bridge from a complex, racemic mix of chemicals in a pond or a volcano vent or the like, to an organised living cell is just as complex and the intermediate steps are just as missing from the world of observation.

    u –> Mix in that our observed cosmos is fine-tuned to sit at an operating point that makes abundant water, abundant carbon, etc. possible, gives these the sort of properties they have, and contains environments conducive to C-chemistry, cell-based intelligent life.

    v –> This puts design on the table as a very reasonable explanation for the observed cosmos, one anchored in scientific observations. A designer capable of building a very special type of cosmos, in effect.

    w –> Similarly, there is but one observed class of sources for the sort of complex functional integration and associated information systems we observe in life: intelligence. We have not as yet invented a full-blown kinematic vNSR [von Neumann self-replicator; we have some of the key components in hand now] such that our products would be self-replicating, but Venter has given us proof of concept.

    x –> So, it is inherently reasonable based on observed cause-effect patterns, to conclude that on best explanation, we live in a designed cosmos, that life is designed, and that complex multicellular organisms are also designed.

    y –> This does not force all to accept this, but it does mean that the discussion should not be censored or poisoned. Both of which are concerns. >>
    _______________

    I also endorse Mung’s slate of questions.

    GEM of TKI

  96. Elizabeth Liddle

    @ Mung and Kairosfocus:

    Thanks for your detailed responses and questions. Again, I will need a bit of time to do them justice, so if I don’t respond (or only partially) in the next couple of days, don’t think I’ve run away!

    And I also appreciate the opportunity for a real dialogue.

    Thanks!

  97. Elizabeth Liddle:

    …so if I don’t respond (or only partially) in the next couple of days, don’t think I’ve run away!

    Perhaps you’ve bitten off more than you can chew ;)

    Too easy to forget on these blogs just how many irons one has in the fire.

    Some more for you to chew on. Chew slowly. Don’t choke! And don’t wash it down.

    Elizabeth Liddle @82:

    No, evolution, and indeed, GAs differ from WEASEL programs, but only in the (important however) sense that neither GAs nor evolution is “searching” for a single solution, whereas WEASEL (and Hangman) is.

    I think you’re wrong. SURPRISE!

    I say that evolution differs from both Weasel and GA’s and that Weasel is a GA.

    Are you saying that a GA cannot search for a single solution?

    Why not? Why does looking for one solution disqualify an algorithm from being a GA?

    How many solutions must a GA look for, minimum, to qualify as a GA?

    Indeed, in WEASEL, the problem is stated in terms of the solution “find the closest match to the word phrase: methinks it is like a weasel”.

    I have no idea what that means. It sounds like gobbledygook. The problem is stated: find the closest match to the phrase xxx.

    The GA encodes potential solutions to the problem in the form of candidate strings.

    The GA then employs mutation and selection to create the candidate solutions for the next generation.

    So it’s completely convergent search.

    So what. Aren’t searches in general convergent? What’s the point of having a search that doesn’t converge?

    In the case of GAs, the search is not, typically convergent at all (if it were, we wouldn’t bother with a GA). The problem may be: find the antenna configuration that produces the highest SNR.

    OK, you have a mistaken conception of GAs. Hopefully you’ll study and catch up. Or I may post more material. We use GAs precisely because they are convergent.

    Even your example is indicative of that fact and appears to me to be just as convergent as WEASEL.

    And my point is that the GA, as in evolution, does not have to search every single possible configuration to find every possible solution (and there may be many that it does not find). Instead, it searches hierarchically, and when any part-solution to the problem is found, the search space is then reduced to solutions that build on that part-solution, and so on.

    That sounds to me exactly how WEASEL works. Oh well.

    WEASEL doesn’t search every possible string of the same length of the target phrase.

    The search space, you say, gets reduced. But how does that happen?
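    For concreteness, the WEASEL-style procedure both sides are describing can be sketched in a few lines. This is only a sketch: the mutation rate and the number of copies per generation are illustrative guesses, not Dawkins’s original parameters.

```python
# A minimal WEASEL-style GA of the kind being debated: mutation plus
# cumulative selection toward a fixed target phrase. The mutation rate and
# copies-per-generation are illustrative guesses, not Dawkins's originals.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
COPIES = 100          # mutated offspring produced per generation
MUTATION_RATE = 0.05  # per-character mutation probability

def score(candidate):
    # Fitness: number of characters matching the target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # Each character independently has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else c for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while score(parent) < len(TARGET):
    generations += 1
    # Keep the fittest mutated copy: cumulative, convergent selection.
    parent = max((mutate(parent) for _ in range(COPIES)), key=score)

print(parent, generations)
```

    On a typical run this converges in a few hundred generations without examining more than a vanishing fraction of the 27^28 possible strings; whether that convergence tells us anything about undirected searches is, of course, exactly what the thread is arguing about.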

  98. Well, “search” is of course a metaphor…

    Not in GA’s it isn’t.

    And if in evolution search is only a metaphor then to say that the search space gets reduced is meaningless.

    But then your claim about how evolution finds potential solutions breaks down. It no longer has a basis.

  99. Clearly, in that general condition, the solution space is much larger, and my point is that evolutionary processes are successful because they do not search the entire combinatorial space, but the solution space only.

    First, let’s clear up a possible misunderstanding. As we’ve seen from the quote from Wikipedia, in a GA, potential solutions are encoded in a genome, and this genome is then mutated and subjected to selection.

    So in that sense every possible solution is part of the solution space. Are we agreed so far?

    Now let’s assume for the sake of argument that the sequences of the genomes in the population are chosen at random, as in WEASEL and ev.

    How have we reduced the combinatorial space?

Leave a Reply