
ID Predictions: Foundational principles underlying the predictions proposed by Jonathan M. and others.

PART I: BASIC PREMISES

Many predictions of ID flow from two underlying hypotheses, both of which are open to scientific investigation and refutation. If you miss these, however, other ID predictions may not make sense, since many arise from them in an important way. It is my belief that much of the puzzlement regarding ID predictions results from not being familiar with these two often unspoken premises.

I consider the first of these to be a basic hypothesis of ID, which is so obvious to ID researchers that they often forget to make explicit mention of it. It is,

1. Creating* integrated, highly functional machines is a difficult task.

This statement seems obvious to many engineers and others who construct complex systems for a living. As an informal statement, it is fairly straightforward. Yet as stated, it is not mathematically precise, since I have not defined “integrated, highly functional machines” nor “difficult task.” ID researchers have long conducted research to define precisely what separates an integrated, highly functional system from one that is not, attempting to define these terms in a quantifiable way. It is a worthwhile research project and their efforts should be applauded, even if these same efforts have met with mixed success. I won’t argue here whether or not they have succeeded in their task, since most of us would agree that if anything represents an integrated, highly functional machine then surely biological forms do. We can also agree that simple things such as rocks and crystals do not. Where exactly the cutoff point lies on that continuum is open to investigation, but hopefully it is reasonable to agree that humans, nematodes, bacteria and butterflies belong to the group labeled “integrated and highly functional machines.”

Now, this hypothesis is contingent, not necessary, since it might have been the case that life (functional machines) could arise easily. Imagine a world where frogs form from mud, unassisted, and where cells coalesce from simple mixtures of amino acids and lipids. We are unaware of anything that logically prevents the laws of nature from being such that they would produce integrated, functional machines quickly, without intelligent guidance. Creating life could have been a simple task. We can imagine laws of nature that allow for life forms to quickly and abundantly self-organize before our eyes, in minutes, and can imagine every planet in the solar system being filled with an abundance of life forms, much like sci-fi novels of decades past envisioned, arising from natural laws acting on matter.

Yet, the universe is not like this. Creating life forms (or writing computer programs, for that matter) is a difficult task. The ratio of functional configurations to non-functional ones is minuscule. Before the advent of molecular biology, it was believed that life was simple in its constitution, cells being seen as homogeneous blobs of jelly-like protoplasm. If we believe that life forms are simple, then it becomes plausible that a series of random accidents could have stumbled upon life.

We have since learned that life forms are not simple, and creating them (or repairing them) is no simple task. Therefore, where unguided materialistic theories might have received great confirmation, they have instead run afoul of reality.

This basic hypothesis, confirmed by empirical science in the 20th century, underlies much ID thought. I cannot think of a single ID theorist who would disagree with it. If creating life forms were a simple enough task, it would be reasonable to expect unintelligent mechanisms to produce them given cosmic timescales. Conversely, the more difficult the task, the less plausible unguided, mechanistic speculations on the origin of life become. The difficulty of the task is precisely what places it beyond the reach of unintelligent mechanisms, and leaves intelligent mechanisms as the only remaining possibility, since “intelligent” and “unintelligent” are mutually exclusive and exhaustive, much like “red” and “not red” encompass all possibilities of color.

The second hypothesis, much like the first, is so obvious that it often fails to be mentioned explicitly. It is,

2. Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales.

Who would argue with such a statement? Even ardent materialists, who view all intelligent agents as mechanical devices, are forced to admit that some configurations of matter can do things that other configurations cannot, such as write novels and create spacecraft, at least when we limit the time and probabilistic resources involved. If this were not the case, then why pay humans to perform certain tasks? Why not simply let nature run its course and perform the same work? Intelligent agents are at very least catalysts, allowing some tasks to be performed much more rapidly than is possible in their absence.

If we hold the second of these premises, namely that intelligent agents can accomplish some difficult tasks that unintelligent mechanisms cannot, and also hold that creating complex machines is a difficult task, then it follows that “creating life” may just be one of the tasks demonstrating a difference in causal powers between intelligent and unintelligent mechanisms. Notice the word “may”; the ID community would still need to demonstrate that intelligent agents, such as humans or Turing-test capable AI, can in fact construct life forms (or machines of comparable complexity and function), and would also need to demonstrate that unintelligent mechanisms are incapable of performing such tasks, even on cosmic timescales. This is where both positive ID work and necessary anti-evolution work arise, as they are required components of such an investigation.

ID theorists place the task of creating integrated, functionally complex machines in the group of tasks that are within the reach of intelligent agents yet outside the reach of unintelligent mechanisms, on the timescale of earth history. We can call this the third basic premise of ID, as currently modeled. It can be restated as,

3. Unintelligent causes are incapable of creating machines with certain levels of integration and function given 4.5 billion years, but intelligent causes are capable.

While we may dispute the truth of this statement, we cannot argue that it isn’t at least hinted at by the other two basic ID hypotheses. Given the first two premises, it naturally presents itself as a conjecture to be investigated.

 

PART II: ID PREDICTIONS

In light of these three premises, Jonathan M’s list of ID predictions appears much less ad hoc. Why should we expect functional protein folds to be rare in configuration space? Because if they were abundant, the task of creating novel proteins would be much easier, and by extension, so would be the higher-level task of creating functional molecular machines. Given the first premise, one expects at least some steps of the design process to present difficulties. True, this may not be the actual step that presents the difficulties, but given what we know about the curse of dimensionality, it is the natural place to begin investigation. And in light of what we now know concerning protein configuration space and the rarity of functional folds, such direction is not misleading.
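To make the scale of the configuration-space problem concrete, here is a back-of-envelope sketch. The chain length, the alphabet size, and the sampling budget are all illustrative assumptions on my part, not measured values:

```python
import math

# Illustrative assumptions, not measured values: a 150-residue protein
# chain drawn from the 20 standard amino acids.
residues = 150
alphabet = 20

sequence_space = alphabet ** residues      # total number of distinct sequences
bits = residues * math.log2(alphabet)      # size of the space in bits

# Suppose, purely for illustration, a generous sampling budget:
# a trillion trials per second, sustained for 4.5 billion years.
seconds_per_year = 3.15e7
trials = 1e12 * seconds_per_year * 4.5e9

print(f"sequence space ~ 10^{math.log10(sequence_space):.0f}")   # ~ 10^195
print(f"space in bits  ~ {bits:.0f}")                            # ~ 648 bits
print(f"total trials   ~ 10^{math.log10(trials):.0f}")           # ~ 10^29
```

Under these toy numbers, even an absurdly fast blind search visits on the order of 10^29 sequences out of roughly 10^195, which is why the rarity of functional folds within that space is the decisive empirical question.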

In a similar vein, the prediction of “delicate optimisation and fine-tuning with respect to many features associated with biological systems” follows from the first basic premise of the model. Assume the contrary, for the sake of contradiction. If most configurations of matter and parameter settings resulted in integrated, highly functional self-replicating machines, then the problem of finding such configurations would cease to be difficult, by definition. Therefore, there must be a degree of specificity involved in life, such that the vast majority of configurations are incapable of functioning as life forms. ID’s basic premises require that this specificity be, at a minimum, great enough to place the task of finding living configurations outside the reach of unintelligent mechanisms given the probabilistic resources provided by a planetary or cosmic timescale. Therefore, a baseline level of fine-tuning is expected, and given the resources provided by a cosmic timescale, this baseline is predicted to be high. Once more, it could have been the case that most combinations of parameters and states would produce living organisms, making the problem of creating life easier, but this state of affairs would have helped falsify a widely held basic premise of ID.

Evidence for these ID predictions would help confirm the basic ID hypotheses, and evidence to the contrary would weaken or falsify them. Therefore, it would seem fair to categorize them as predictions based on an ID framework. Without knowledge of the basic premises of ID as currently modeled, it is easy to see how confusion can arise when discussing what conclusions follow or do not follow from ID. If we don’t know the premises, how can we know what follows from them? It is my hope that this explicit spelling-out of foundational ID principles will aid in the discussion.

 

PART III: POSITIVE PREDICTIONS AND COMPOSITE MODELS

Lastly, I would be remiss not to mention the argument from analogy to human design, and how it relates to what is presented here. According to the argument, even if both intelligent and unintelligent mechanisms were capable of producing the effect in question (functional machines), intelligent agency might still serve as the best explanation, based on the similarities between engineered systems and living systems. If we’re fair, we are forced to acknowledge this line of argumentation is possible. Positive ID work could result from such an approach, since knowledge of how intelligent agents design things may shed light on how nature functions, or what to expect in terms of biological system construction. (See, for example, Casey Luskin’s discussion in “A Positive, Testable Case for Intelligent Design” where he describes how knowledge of human designed systems suggests predictions for biological systems.)

The strongest case for ID includes both types of evidence: positive evidence in favor of design, such as similarities to engineered systems and the use of design patterns within biological information systems, as well as evidence of the causal insufficiency of unintelligent mechanisms. If the third basic premise were falsified and intelligent agency were only one of multiple viable explanations, much more positive evidence would be required to make ID the most likely explanation, since we know that nature was operating when life formed, supplying the opportunity. Although ID could survive the falsification of the third basic premise, the case for ID would be severely weakened as a result, and the underlying model would be forced to change significantly, thereby modifying what is predicted by the model.

Some predictions would remain valid, such as those built on positive similarity to human design processes, but many predictions would not, including several presented by Jonathan M. ID as the sole viable hypothesis for the origin of integrated, functional machines differs from ID as one of multiple viable hypotheses, but positive evidence for design is certainly compatible with both models. The ID community currently holds to a model that both includes positive evidence for design and affirms all three basic ID premises outlined above. Therefore, both sets of predictions follow from the model: predictions based on the positive knowledge of human design activity and predictions implied by the causal insufficiency of unintelligent mechanisms.


* Note: The original post used the word “building” instead of “creating”, which caused some confusion among readers, since it was mistakenly taken to mean “the step-by-step assembly of machines” rather than the intended “design and creation of machines.” I use “create” in the sense of engineering, meaning to design and construct, to select from a realm of possible configurations. Thus, we say that engineers create new machine designs and software engineers create new software systems. The assembly process itself may be easy, but this is different from the task of discovering or creating the assembly instructions, gathering the required components, and setting any sensitive parameter values.


55 Responses to ID Predictions: Foundational principles underlying the predictions proposed by Jonathan M. and others.

  1. I have long thought that Intelligent Design Theorists ought to spend far more time than they currently do (or at any rate seem to) in looking at design as it currently exists and is practiced.

    Could you imagine if we found parallels in nature?

    For example, in software engineering you have something known as design patterns. What if design patterns were discovered in nature?

    The more rational nature appears to be the more it appears to be the result of a rational mind.

  2. OT:

    The Springer journal Genetic Programming and Evolvable Machines is celebrating its first 10 years with a special anniversary issue of articles reviewing the state of GP and considering some of its possible futures. For the month of July (which ends in two days!) the entire issue is available for free download.

    http://www.springerlink.com/content/h46r77k291rn/

  3. 3. Unintelligent causes are incapable of building machines with certain levels of integration and function given 4.5 billion years, but intelligent causes are capable.

    to wit- another $64,000 question:

    All this talk about information- specified and not- has me searching for an answer to the question:

    How many nucleotides can necessity and chance string together? That is given a test tube, flask or vat of nucleotides, plus some UV, heat, cold, lightning, etc., what can come of that?

    Has anyone tried to do such a thing?

    After Lincoln and Joyce published the paper on the sustained replication- Self-Sustained Replication of an RNA Enzyme, there was an article in Scientific American with one of them (Joyce?)- seems to think that 35 is highly unlikely.

    And for another $64,000:

    How many amino acids can necessity and chance string together?

  4. NM:

    Good to see another exercise in open notebook science here at UD, joining several recent threads.

    A good trend.

    I would suggest a slight modification to your second principle:

    2. Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales [and scopes].

    That is, I am underscoring the infinite monkeys point. An infinity of resources would — if it were achievable — swamp all search scope challenges.

    But, we are not looking at an infinite scope, hence the significance of the various calculations on the needle in the haystack challenge.

    In turn, this leads to a modification of your third point:

    3. Unintelligent causes are incapable of building machines with certain levels of integration and function given 4.5 billion years [a gamut of ~ 10^80 atoms, and ~ 10^25 s or the like plausible universal bound on temporal and material resources], but intelligent causes are capable.

    I note that the recent work of Venter and others shows that it is plausible that life as we observe it could be created in a molecular nanotech lab some generations of tech beyond where we currently are.

    I’d say Venter’s start-up bacterium is a basic demonstration of feasibility.

    GEM of TKI

  5. F/N: Applying a modified Chi-metric:

    I nominate a modded, log-reduced Chi metric for plausible thresholds of inferring sufficient complexity AND specificity for inferring to design as best explanation on a relevant gamut:

    (a) Chi'_500 = Ip*S - 500, bits beyond the solar system threshold

    (b) Chi'_1000 = Ip*S - 1,000, bits beyond the observed cosmos threshold

    . . . where Ip is a measure of explicitly or implicitly stored information in the entity and S is a dummy variable taking 1/0 according as [functional] specificity is plausibly inferred on relevant data. [This blends in the trick used in the simplistic, brute force X-metric mentioned in the just linked.]

    500 and 1,000 bits are swamping thresholds for solar system and cosmological scales. For the latter, we are looking at the number of Planck time quantum states of the observed cosmos being 1 in 10^150 of the implied config space of 1,000 bits.

    For a solar system with ours as a yardstick, 10^102 Q-states would be an upper limit, and 10^150 or so possibilities for 500 bits would swamp it by 48 orders of magnitude. (Remember, the fastest chemical interactions take about 10^30 Planck time states and organic reactions tend to be much, much slower than that.)

    So, the reduced Dembski metric can be further modified to incorporate the judgement of specificity, and non-specificity would lock out being able to surpass the threshold of complex specificity.

I submit that a code-based function beyond 1,000 bits, where codes are reasonably specific, would qualify. Protein functional fold-ability constraints would qualify on the sort of evidence often seen. Functionality based on Wicken wiring diagram organised parts that would be vulnerable to perturbation would also qualify, once the description list of nodes, arcs and interfaces would exceed the relevant thresholds. [In short, I am here alluding to how we reduce and represent a circuit or system drawing or process logic flowchart in a set of suitably structured strings.]

    So, some quantification is perhaps not so far away as might at first be thought.
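The construction in (a) and (b) above is simple enough to sketch directly. This is a toy sketch; the function name is my own, and the specificity judgement S is supplied by hand, exactly as the dummy-variable description requires:

```python
def chi_metric(ip_bits: float, s: int, threshold: int = 500) -> float:
    """Log-reduced Chi metric sketch: Chi' = Ip*S - threshold.

    ip_bits   -- explicitly or implicitly stored information, in bits
    s         -- dummy variable: 1 if functional specificity is judged
                 present on the relevant data, else 0
    threshold -- 500 (solar system) or 1000 (observed cosmos)
    """
    return ip_bits * s - threshold

# A specific 700-bit configuration passes the solar-system threshold...
print(chi_metric(700, 1))       # 200 -> design inferred on this gamut
# ...but a non-specific configuration of any size is locked out by S = 0.
print(chi_metric(10_000, 0))    # -500
```

Note how the S = 0 case makes non-specificity lock out the inference regardless of raw complexity, which is the intent of the dummy-variable trick described above.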

    Your thoughts?

    GEM of TKI

  6. Hi Joseph:

    How many nucleotides can necessity and chance string together? That is given a test tube, flask or vat of nucleotides, plus some UV, heat, cold, lightning, etc., what can come of that?

    Has anyone tried to do such a thing?

    I’m sure they have.

    I’d be very surprised if Stephen Meyer doesn’t cover this topic in Signature in the Cell.

    c.f.

    http://onlinelibrary.wiley.com.....1/abstract

    Essential Cell Biology

  7. Mung, I think you are absolutely right. I think ID will accomplish more simply by viewing biology as designed and working from that premise than through things like CSI and IC.

    Also, I am no man. (Witch King gets pwned.)

  8. TM:

    The point of the CSI and IC concepts is to objectively empirically test, and as a result, ground the claim that biology is designed.

    As opposed to arguing in a circle on an assumption.

    That’s why MG et al worked so hard to try to kick sand up in our eyes on their validity.

And, the strength of the inferences is why in the end she and her circle failed. Turns out, CSI is sound and Schneider’s ev — which she plainly is championing — is not. (The thread documents this in detail; I am not just spreading ill-founded but persuasive talking points.)

    Notice how MG has now quietly tip-toed away rather than address issues with her arguments, the log reduced form of the Chi CSI metric, and the questions on ev.

    Took some doing to get to that point, but that we are now there is highly significant.

    GEM of TKI

  9. kairosfocus in 5,

    I like the metrics you have been developing and refining over the course of time, even if the growing number of them can be a little overwhelming to an outsider. I would, however, like to see experimental results showing how well the log-reduced Chi metric, or any other metric, work in actually distinguishing designed from non-designed items.

    If I have an idea for a good binary classifier, I begin by going over the math and logic underlying why it should work. Once convinced that the features I am considering are capable of separating items of type A from items of type B, I then gather a dataset (or several) consisting of both types of items, and apply my classifier to measure its accuracy and precision.

I don’t see why this couldn’t be done with the metric you propose. You could assemble a dataset of blog posts or Wikipedia entries, plus an equal number of random entries generated both by simple single-letter sampling and by bigram and trigram approximations of English text (following Shannon’s work). For the specification delta (0/1), you could have a volunteer read each entry and judge whether it makes sense (is functional) or not. I think the result would be an interesting test of your ideas.
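That experiment can be sketched in miniature. Everything here is an illustrative assumption of mine: the toy texts, the crude per-symbol bit estimate, and the hand-supplied specification value standing in for a human judge:

```python
import math
import random
import string

random.seed(0)

def random_text(n: int) -> str:
    """Zero-order approximation of English: uniform single-letter sampling."""
    alphabet = string.ascii_lowercase + " "
    return "".join(random.choice(alphabet) for _ in range(n))

def naive_bits(text: str) -> float:
    """Crude upper bound on stored information: log2(27) bits per symbol."""
    return len(text) * math.log2(27)

# Tiny dataset: one 'designed' entry (meaningful English) and one chance entry.
designed = "the quick brown fox jumps over the lazy dog " * 5
chance = random_text(len(designed))

# S is judged by a human reader: 1 for the meaningful entry, 0 for gibberish.
for label, text, s in [("designed", designed, 1), ("chance", chance, 0)]:
    chi = naive_bits(text) * s - 500   # solar-system threshold, in bits
    verdict = "design" if chi > 0 else "no design inference"
    print(label, round(chi, 1), verdict)
```

A real test would of course need a proper corpus, bigram and trigram generators, and blind human judges, but the classifier structure (a measured bit count gated by a judged 0/1 specification) is the one under discussion.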

  10. 2. Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales.

    Is this necessary for there to be intelligent design? Is there any reason why designers didn’t just design and make simple stuff? I’m curious to know what makes this assumption so central.

  11. NM:

    The basic Chi metric, reduced form, sets up an easy way to look at various thresholds of complexity. I have highlighted the solar system and the observed cosmos.

    The modification to explicitly include the judgement of specificity per our notorious semiotic, judging observer, is the same sort of dummy variable as is sometimes put into econometric results to account for circumstances of a war or the like as opposed to more normal times. It makes explicit the issue of judging specificity, with complexity being evaluated on passing a threshold.

    While it is a commonplace that items with FSCI beyond 500 bits of complexity of known cause are uniformly known to be designed [they are after all beyond the solar system threshold], as witness say the body of text on the Internet, we actually have some results on infinite monkey random text generation experiments and tests over the past decade or so.

    Wiki reports:

    One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed,

    “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t”

    The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    From the citations, I suspect the comparative data sets were collections of Google books [or at least plays from these books] or the like.

Such results of up to 24 ASCII characters through random text generation are consistent with the decades-old analysis of Borel [IIRC], who highlighted 1 in 10^50 as in effect an observability threshold. 128^24 = 3.74 * 10^50. A space of about 170 bits’ worth of configs is searchable within our scope of resources.
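The arithmetic in the paragraph above can be checked directly:

```python
import math

# 24 matched ASCII characters over a 128-symbol alphabet:
space = 128 ** 24
print(f"128^24 = {space:.2e}")         # 3.74e+50
print(f"bits   = {math.log2(space)}")  # 168.0, i.e. about 170 bits

# Borel's 1-in-10^50 observability threshold sits right at this scale:
print(round(space / 1e50, 2))          # 3.74
```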

    Further tests are possible, much as you suggest. And across time they should be done as well.

    However, the underlying analysis on the thresholds as offered by Dembski [cf. UPB and his take on Seth Lloyd's work] and by Abel, as well as others going back decades, will also be relevant.

    We have both analytical and empirical reason to infer that functionally specific information rich things beyond either 500 bits or 1,000 bits, will be comfortably beyond the reach of blind chance and mechanical necessity.

    GEM of TKI

  12. PS: Please note, too, the work by Dr Torley and that by Dr Giem which set the context for my own log reduction exercise.

  13. Heinrich asks (concerning the second basic premise, “Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales.”):

    Is this necessary for there to be intelligent design? Is there any reason why designers didn’t just design and make simple stuff? I’m curious to know what makes this assumption so central.

If intelligent causes were only capable of producing the types of artifacts that unintelligent causes were, such as simple things like paperweights, then neither type of mechanism would be able to explain the origin of life forms. What makes things interesting is the ID claim that although unintelligent mechanisms cannot produce integrated, highly functional machinery, intelligent causes can. So the fact that intelligent agents could produce simpler effects if they desired to doesn’t mean that unintelligent mechanisms can produce the more functional and integrated machinery we find in biology. There is an asymmetry in the capabilities of the two types of mechanisms, at least according to the current model of ID.

    But asymmetry doesn’t necessarily imply an empty intersection of the two sets.

  14. If intelligent causes were only capable of producing the types of artifacts that unintelligent causes were, such as simple things like paperweights, then neither type of mechanism would be able to explain the origin of life forms.

    Unless life forms are simple enough that both intelligent and unintelligent processes can create them. Is there any a priori reason to expect this?

    Doesn’t this also mean that ID is being forced to make the assumption that some things are complicated?

  15. Heinrich,

    I am afraid I don’t understand what you’re arguing in the first paragraph. Are you arguing that life forms are simple (which we know they are not – at least combinatorially speaking), or are you arguing with the premise itself (premise 3), that unintelligent mechanisms are incapable of producing life forms?

    If the first, I agree that life forms don’t have to be complicated by logical necessity (as far as I can tell)…they just happen to be. We know this empirically, so it isn’t really an assumption on my part. Based on this fact, the first premise states that building integrated, functional machines (such as life forms) is a difficult task, due to their non-simplicity. The ID community is open to being shown otherwise.

If you are arguing the second point, then that is fine. This is a point that clearly separates the ID model from other models of origins, so I would expect it to be a point of contention. Yet because it is a point of contention, it allows ID to make predictions that other origins models do not. Hence my post.

  16. Are you arguing that life forms are simple (which we know they are not – at least combinatorially speaking),

    I was arguing this, or at least that they were simple enough that they’re not that difficult to build. They can’t be that difficult to build – lots of them are built every day, week, month and year without any intelligent agent involved.

    I also don’t think they’re that difficult to evolve, given time, and I haven’t seen any convincing arguments that show that selection isn’t up to the job.

  17. 1. Building integrated, highly functional machines is a difficult task.

    How do you get from this to “life forms are highly complicated machines”?

    Based on this fact, the first premise states that building integrated, functional machines (such as life forms) is a difficult task, due to their non-simplicity.

    And does this mean that if God was the creator, that creating life was difficult for Him?

  18. Mung asks,

    1. Building integrated, highly functional machines is a difficult task.

    How do you get from this to “life forms are highly complicated machines”?

    I don’t. Here, “complicated” serves as loose shorthand for “integrated, highly functional,” since integrated and highly functional machines (of sufficient functional capability and integration) happen to be complicated. I’m happy to not discuss “complicated machinery”, and instead focus on the more salient points of function and integration; however, Heinrich brought up “simple” vs “complex”, so I was trying to address the question using his/her own terminology.

    You also ask:

    And does this mean that if God was the creator, that creating life was difficult for Him?

    Why bring up discussion of god at this point? Different people have different understandings of god, so your answer would depend on your conception of god. I, however, have no desire to enter theological discussions on this thread.

  19. Heinrich wrote

    I was arguing this, or at least that they were simple enough that they’re not that difficult to build. They can’t be that difficult to build – lots of them are built every day, week, month and year without any intelligent agent involved.

    First, you’re committing an error in logic by assuming something is “easy” simply because it happens often. The question then becomes, what enables biology to repeatedly accomplish such a difficult task autonomously? The answer, of course, is the sophisticated information storage and processing machinery contained in every cell, guiding reproduction and development. The task is accomplished due to the vast quantities of digital and spatial information stored in a cell that directs the developmental outcome. You might as well argue that having digital pictures displayed on a screen is “easy” because you simply have to press a single button on the computer to make it happen.

Second, the mathematics disagree with you. If you take a single functional protein of average length, you can see the combinatorial possibilities are enormous. Not all combinations produce function. In fact, research suggests functional sequences are quite rare. As we move up along the organismal hierarchy, and look at cells, tissues, organs, systems, and body plans, we find that each of these is an organized configuration of the objects from the level below. So the organization and detail of a life form is staggering. Building life-sized self-replicating, autonomous robots is a hard task, yet biology has solved this problem with a level of sophistication that dwarfs our own.

If building life forms is an easy task, why does life only come from other life in our experience? Why don’t silicon-based electronic life forms spontaneously emerge around us? Why is carbon-based life the only type we see? Why aren’t there new origins of life (coming from non-life) every day? No, we only see life being produced by sophisticated machinery — other life forms. There is a reason for this.

    You also wrote

    I also don’t think they’re that difficult to evolve, given time, and I haven’t seen any convincing arguments that show that selection isn’t up to the job.

    Again, it is good that you disagree on this point, which makes it an objective statement that separates ID from an unguided evolutionary perspective. Since both sides disagree on this point, it is useful for generating predictions that flow from one model and not from the other.

  20. You might as well argue that having digital pictures displayed on a screen is “easy” because you simply have to press a single button on the computer to make it happen.

    That is exactly what I would argue. Just as I would argue it’s easy for me to get into my truck and drive to work, something I do on almost a daily basis.

    Second, the mathematics disagree with you. If you take a single functional protein of average length, you can see the combinatorial possibilities are enormous. Not all combinations produce function. In fact, research suggests functional sequences are quite rare.

    Think of the combinatorial possibilities of life ever existing on this particular planet. Did God have a hard time finding the earth when He was searching for a place suitable for life?

    I, however, have no desire to enter theological discussions on this thread.

    Theology has nothing to do with it.

    Building integrated, highly functional machines is a difficult task.

    For whom? For people who find building integrated highly functional machines is a difficult task?

    It must be an immensely difficult task for a blind man to find his way to the grocery store, wouldn’t you think?

  21. Mung wrote

    You might as well argue that having digital pictures displayed on a screen is “easy” because you simply have to press a single button on the computer to make it happen.

    That is exactly what I would argue. Just as I would argue it’s easy for me to get into my truck and drive to work, something I do on almost a daily basis.

    Just because somebody else already solved a difficult problem (i.e. how to get TFT LCD matrix to display patterns corresponding to digitally encoded color and luminance pixel information) does not make the underlying problem less difficult mathematically; it simply makes it easier for you, since you don’t have to solve it, as you’re given the “answer” to one particular instance. The underlying problem difficulty, from a mathematical perspective, is unchanged. Having one solution to the Traveling Salesman Problem doesn’t make TSP a less difficult mathematical problem; it doesn’t suddenly lose its NP-Completeness. I’m talking about problem difficulty in general here. Just because someone or something solved the problem of building integrated, highly functional biological machinery (using a solution that itself requires integrated, highly functional machinery in the form of a cell) does not make the underlying mathematical problem of building self-replicating, autonomous robots any easier. It just means that someone has done the hard work, not that there was no work to be done.
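    The asymmetry being described here can be seen in miniature: checking a tour someone hands you is linear in the number of cities, while finding the best tour from scratch by brute force is factorial. A toy sketch (the city coordinates are made up for the example):

    ```python
    import itertools
    import math

    # Five made-up city coordinates.
    cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

    def tour_length(order):
        """Total length of a closed tour visiting cities in this order."""
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Checking one handed-to-you tour: n distance computations, nothing more.
    given = (0, 2, 1, 3, 4)  # a valid but poor tour
    print(round(tour_length(given), 2))  # 18.47

    # Finding the best tour yourself: (n-1)! candidate orderings to search.
    best = min(itertools.permutations(range(len(cities))), key=tour_length)
    print(round(tour_length(best), 2))  # 14.47
    ```

    Being handed `given` (or even `best`) does nothing to shrink the factorial search space for the next instance; it just means someone already did that work.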

    Do you now see the difference between what you’re saying and what I’m talking about?

    If you still disagree, I’d like you to answer a few questions. Let’s assume, for the sake of contradiction, that the problem of building life forms is easy and life forms can be built from whatever materials are available, in many situations, by the blind interaction of natural forces on matter. Then we must ask:

    1. How often do we expect new life forms to emerge from non-life? (This doesn’t include reproduction from the solutions (life forms) we already have; I’m talking about abiogenesis.)

    2. Why don’t we see new life spontaneously forming around us, such as in sterile environments?

    3. Why is carbon-based life the only type of life on earth? If building life is easy, surely other combinations of compounds could work. We can imagine robotic life that uses electric circuitry and silicon as a basis. Is natural selection somehow not operating on ceramic crystals and other compounds that replicate with occasional errors?

    4. Why is our solar system so barren of life? If robust self-replicating machines are so easy to stumble upon, why has nature so miserably failed on all other planets in our solar system? Why aren’t they teeming with life and the universe teeming with these “easy-to-build” machines? If you were correct and building life was a trivial task, we would expect life to be found anywhere unintelligent forces were acting on matter over long periods of time. Instead we find barren rock.

    All the facts would seem to argue against the position you presented.

  22. I’m reading a book on complexity and I came across the following quote:

    Making a pizza is complicated, but not complex. The same holds for filling out your tax return, or mending a bicycle puncture. Just follow the instructions step by step, and you will eventually be able to go from start to finish without too much trouble.

    Building a machine may be complicated, but does that make it complex?

    Is life just following step-by-step instructions?

    As soon as you start comparing life to machines this is the question you come up against.

    You’re trying to come up with foundational principles for ID.

    Complicated is not complex. Life is not mechanics. Simple is as simple does. For some people math is simple, for me, it’s complicated.

    I’m still trying to figure out what your questions have to do with your OP, but I’m thinking on it.

  23. Mung,

    Thank you for your comment, as it makes clear where our miscommunication is coming from.

    In my post I refer to “building” life forms, which you took to mean the construction process of putting together the components of a life form, given a recipe and materials, much like “making a pizza.” When I used the word build, I actually meant something closer to create or design, as it is used by computer scientists who “build” software systems by creating them from scratch. I can see how that can be confusing.

    All my points defend the premise that creating life forms is a difficult mathematical problem, and all yours defend the premise that given a recipe to follow and materials, perhaps the mechanical construction process is not difficult. These two points are not contradictory.

    Is this a fair assessment?

    Either way, I will edit my opening post to use the word create in place of build, as I think it makes my intended meaning clearer.

  24. no-man,

    I am an ID supporter. I’d love to be able to point people to something that would have logic that they would find compelling.

    Trust me, it’s easier for me to nit-pick than it is for you to come up with something like you have and I’d like you to know that I am aware of this.

    But nit-picking is precisely what the critics are going to do, lol. Fact of life.

    Better we do it to ourselves first, imo.

    When I used the word build, I actually meant something closer to create or design, as it is used by computer scientists who “build” software systems by creating them from scratch. I can see how that can be confusing.

    Software design used to be a top-down practice, but more and more it’s being done bottom up.

    Either way, the goal is to decompose the problem into smaller and smaller chunks to the point where the problem you’re trying to solve becomes simple.

    Modern practices could even be seen as building small components and connecting them until the larger system emerges. There is no step-by-step recipe specified to be followed to get from the start to the finish.

  25. Mung,

    You wrote

    Software design used to be a top-down practice, but more and more it’s being done bottom up.

    Even though a large portion of software development is concerned with stringing together pre-existing software libraries, not all is. For example, a lot of research in CS and AI requires novel algorithms that must be designed, since they do not exist. Sometimes the problem is amenable to a divide-and-conquer or dynamic programming approach, but often it is not.

    Modern practices could even be seen as building small components and connecting them until the larger system emerges.

    Which is still an act of creation, at least on the meta-level. It is like creating a structure from lego blocks versus creating one from iron ore. In one case you additionally build the sub-components, but both still require a plan of integration, unless the structure is sufficiently simple.

    There is no step-by-step recipe specified to be followed to get from the start to the finish.

    This is true in some cases, but the structures that arise from such an emergent process are more like randomized networks of parts with homogeneous interface connections (think the WWW), than like hierarchically arranged machines. I’ve yet to see an example of bottom-up, emergent creation that results in anything like an integrated, highly functional machine, where each part is distinct and plays a specific role in operation, and where distinct functional modules are built into higher level functional systems, as biological life forms are. Machines of that type require coordination to design, since one part affects others in highly-constrained ways. I’m open to seeing a counter-example.

    As soon as sub-components are distinct and perform different roles, you have an additional combinatorial problem, since each way of arranging / connecting them is different, and results in different function or non-function. This requires information or large probabilistic resources, both of which are quantifiable. In the case of homogeneous sub-components, the ordering of sub-components is irrelevant. This makes design problems of that type simpler to solve, but it is a different problem than the one we’re addressing here.

    Life forms are not simply large collections of homogeneous units, since proteins and cell types differ, and they are hierarchically arranged into tissue types that differ, organs that differ and systems that differ. Furthermore, the way you connect the various organs and systems together definitely affects their function.
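    The combinatorial gap between homogeneous and distinct parts is easy to quantify. A minimal sketch (the part counts are arbitrary illustration, not a model of any biological system):

    ```python
    from math import factorial

    # Arranging n parts in a linear chain where order matters.
    # If the parts are all identical, every ordering is the same design: 1.
    # If the parts are all distinct, each ordering is a different design: n!.
    for n in (5, 10, 20):
        print(n, 1, factorial(n))
    ```

    At just 20 distinct parts the orderings already exceed 2 x 10^18, which is why heterogeneous, role-specific components pose a qualitatively different design problem than piles of interchangeable units.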

  26. Machines of that type require coordination to design, since one part affects others in highly-constrained ways. I’m open to seeing a counter-example.

    I think counter-examples abound. I would argue that the vast majority of machines we are familiar with today did not arise de novo out of some top-down design session, but are rather the result of accretion over time. (Perhaps the space shuttle is an exception.)

    Think of modern automobiles compared to earlier conveyances, for example, and modern ships and aircraft. The nuclear submarine.

    They are all modifications of simpler pre-existing designs. No doubt the list could go on and on.

  27. First, you’re committing an error in logic by assuming something is “easy” simply because it happens often.

    Doesn’t ID commit the same error, by assuming that because something is complicated, it can’t happen very often?

    Second, the mathematics disagree with you. If you take a single functional protein of average length, you can see the combinatorial possibilities are enormous. Not all combinations produce function. In fact, research suggests functional sequences are quite rare.

    But natural selection is a great way of finding these functional sequences, by traversing the fitness landscape.

  28. Mung wrote

    They are all modifications of simpler pre-existing designs. No doubt the list could go on and on.

    Yes, but these are not emergent designs (in the technical sense of emergence, not the common definition), created by an undirected process. Those were the examples I was looking for.

    Heinrich wrote

    Doesn’t ID commit the same error, by assuming that because something is complicated, it can’t happen very often?

    Tell me, how often does life arise from non-life? Very often?

    As for your claim that NS can find functional sequences by traversing a fitness landscape, it actually depends greatly on the landscape topology. Not every landscape will be amenable to GA search.
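    The dependence on landscape topology can be demonstrated in a few lines. A minimal sketch, using a one-bit-flip hill climber as a crude stand-in for selection; the two landscapes are textbook extremes (fully correlated vs. fully uncorrelated), not models of any real protein space:

    ```python
    import random

    random.seed(0)
    L = 20  # genotype length in bits

    def hill_climb(fitness, steps=2000):
        """One-bit-flip hill climber standing in (crudely) for selection."""
        g = [random.randint(0, 1) for _ in range(L)]
        f = fitness(g)
        for _ in range(steps):
            i = random.randrange(L)
            g[i] ^= 1               # try a mutation
            nf = fitness(g)
            if nf >= f:
                f = nf              # keep it
            else:
                g[i] ^= 1           # revert it
        return f

    # Smooth landscape: fitness = number of 1s (one broad hill).
    def smooth(g):
        return sum(g)

    # Rugged landscape: each genotype gets an arbitrary fitness,
    # uncorrelated with its neighbors'.
    table = {}
    def rugged(g):
        return table.setdefault(tuple(g), random.random())

    print(hill_climb(smooth))  # reliably reaches the global optimum, 20
    print(hill_climb(rugged))  # typically stalls on a mediocre local peak
    ```

    The same search procedure succeeds or fails depending entirely on how fitness is distributed over the space, which is the point at issue: the mechanism’s adequacy is a property of the landscape, not of the search alone.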

  29. Doesn’t ID commit the same error, by assuming that because something is complicated, it can’t happen very often?

    Tell me, how often does life arise from non-life? Very often?

    Is that a “yes” or a “no”? Or a refusal to answer the question?

    As for your claim that NS can find functional sequences by traversing a fitness landscape, it actually depends greatly on the landscape topology.

    True. Has anyone measured fitness on a topology, and shown that NS can’t traverse it? (and no, claims of irreducible complexity don’t count)

  30. Heinrich wrote:

    Is that a “yes” or a “no”? Or a refusal to answer the question?

    I’ll spell it out for you, since you missed the subtlety of my response. When you ask “Doesn’t ID commit the same error, by assuming that because something is complicated, it can’t happen very often?” you are mistaken in treating ID’s position (that the effect, life arising from non-life, does not happen often) as an assumption. It is simply an empirical observation. Hence my question, since you apparently have some data that I lack.

    We have not seen life arise from non-life in the lab or in the field, so we know it doesn’t happen that often, at least not while we’re looking. You implicitly claim that this is a false impression, open to argument. So I ask, how often does life arise from non-life? Does this happen “very often”?

    If you say yes, please provide empirical evidence and estimate how often it happens. You seem to know. Once every 4.5 billion years is not often, btw. If no, then we can agree on the observation.

    You then ask

    Has anyone measured fitness on a topology, and shown that NS can’t traverse it? (and no, claims of irreducible complexity don’t count)

    It is not my job to prove your model doesn’t work, it is your job to prove it does.

    It is a fact that not all landscapes are amenable to GA search. You claim that NS works on the landscape of nature and is causally adequate because of this, so you need to do the work to demonstrate that nature does in fact provide the correct type of landscape (for protein space, let’s say).

    You are making a big claim. I’m skeptical of this, because you don’t know what the landscape of nature looks like, so you can’t demonstrate if it is a landscape a GA will work on or not. You can’t claim NS is the solution until you show both:

    A) NS can work, given a proper landscape (which I can accept)

    and

    B) Nature has landscapes of the right kind (and can find/create them without intelligent guidance)

    You haven’t shown B, so you still haven’t demonstrated that NS is a capable mechanism to produce the effect we see. You’ve claimed it, but haven’t demonstrated it.

    I’ll provide evidence for the models I propose, you provide evidence for those you propose.

    Heinrich, quantum mechanics has shown that consciousness (more precisely, information-theoretic consciousness) is foundational to reality, not material particles. And since life is overwhelmingly characterized by functional information, and even overwhelmingly characterized by ‘quantum information’, and yet no one has ever witnessed purely material processes generating any non-trivial functional information over and above what was already present, but has only witnessed non-trivial information being generated by conscious beings (humans), why in the world would you hold out hope that in some twisted topology this insurmountable problem of purely material processes generating functional information would be overcome??? I would definitely say that the burden of proof is on you to develop your proposed topology and prove that it can do so. Moreover, you would have to prove that this twisted topology, which gets you massive amounts of functional information on the cheap, is ubiquitous to the entire planet, since extremely complex life, which is overflowing to the brim with functional information, is hypothesized by neo-Darwinists to have evolved from lower complexity, and is hypothesized to be presently evolving to higher complexity, over the entire surface (topology) of the planet (though no one can ever seem to present a concrete example of this happening in the present)!

    no-man – as I understand it, ID is about more than the origins of life, so your question is a distraction (FWIW, I suspect that life hasn’t arisen from non-life very often. But I also suspect the earliest forms of “life” were simple. At least relatively simple).

    Now, would you care to answer the question I asked – “Doesn’t ID commit the same error, by assuming that because something is complicated, it can’t happen very often?”

    B) Nature has landscapes of the right kind (and can find/create them without intelligent guidance)

    You haven’t shown B, so you still haven’t demonstrated that NS is a capable mechanism to produce the effect we see. You’ve claimed it, but haven’t demonstrated it.

    Keep up, Bond.

    Now, if you want to argue that they can’t, you have to show us the evidence that they have a topology that makes it impossible to traverse.

  33. I am so lost. What distinguishes non-quantum information from quantum information?

    …life is overwhelming characterized by functional information…
    quantum or non-quantum?

    …no one has ever witnessed purely material processes generating any non-trivial functional information over and above what was already present…

    Doesn’t Dembski’s Conservation of Information preclude this anyways?

    IOW, whatever information there is had to be present from the beginning.

    IOW, “purely material processes generating any non-trivial functional information over and above what was already present” is impossible in principle.

  34. Mung you ask,

    ‘What distinguishes non-quantum information from quantum information?’

    Quantum no-hiding theorem experimentally confirmed for first time – March 2011
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed.
    http://www.physorg.com/news/20.....tally.html

  35. further note mung;

    Quantum information/entanglement is not limited by any constraints of time and space;

    The Failure Of Local Realism – Materialism – Alain Aspect – video
    http://www.metacafe.com/w/4744145

    This following study adds to Alain Aspect’s work in Quantum Mechanics and solidly refutes the ‘hidden variable’ argument that has been used by materialists to try to get around the Theistic implications of the instantaneous ‘spooky action at a distance’ found in quantum mechanics.

    Quantum Measurements: Common Sense Is Not Enough, Physicists Show – July 2009
    Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.
    http://www.sciencedaily.com/re.....142824.htm

    ,,,and quantum entanglement/information is found on a massive scale in molecular biology

    Quantum entanglement holds together life’s blueprint – 2010
    Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
    http://neshealthblog.wordpress.....blueprint/

    Quantum Information/Entanglement In DNA & Protein Folding – short video
    http://www.metacafe.com/watch/5936605/

  36. Heinrich, since I’m a non-stamp collector, perhaps you can actually demonstrate the generation of functional information over and above what is already present by passing the fitness test?

    Is Antibiotic Resistance evidence for evolution? – ‘The Fitness Test’ – video
    http://www.metacafe.com/watch/3995248

    ,, or is that ‘topology’ not twisted enough for you???

  37. What distinguishes non-quantum information from quantum information?

    Quantum no-hiding theorem experimentally confirmed for first time – March 2011
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed.

    Fantastic! So you agree with my conclusion.

    …the conservation of quantum information means that information cannot be created nor destroyed…

    Isn’t that what I said?

    So, the answer to my original question:

    What distinguishes non-quantum information from quantum information?

    Is. NOTHING. There is no difference. Information is information. Calling it “quantum information” is just spurious hand-waving B period S period.

    YES/NO TRUE/FALSE doesn’t change just because we shift it to some “quantum” level.

    Have you ever heard of Stanley Jaki?

    since I’m a non-stamp collector, perhaps you can actually demonstrate the generation of functional information over and above what is already present by passing the fitness test?

    There is no such thing as a “generation of functional information.”

    Information cannot be “generated.”

    Conservation of information prohibits the generation of information.

  38. Mung, you created classical information when you wrote the post.

  39. And if I delete that post?

  40. Heinrich,

    In your attempt to be cute, did you read the review article you linked to, or simply read the title and figure it provided the type of evidence you’re looking for?

    The article discusses experiments on four or so enzymes/proteins that were mutated slightly; the researchers then found that the proteins could be optimized again by some path, after testing all paths. Although this is interesting in its own right and is good research, it simply tells us that the fitness landscape has small adaptive paths (small hills) if you start close enough to an existing protein. So we now know (as we have known) that optimization of a given protein is possible.

    Your claim is much bigger than that.

    In claiming that NS solves the functional machinery problem, you claim not only that unintelligent forces can locate islands of function without intelligent guidance, but also that they can traverse the space between functional groups.

    However, research by Gauger and Axe gives a different picture of the large scale distribution of adaptive paths in protein space. These results do not contradict the Nature review paper; they simply show that different local portions of the landscape have different properties, and that the larger questions your study didn’t address (rarity of functional folds, isolation of protein families) may not have the answers you’d like.

    We obviously expect some adaptive pathways to be present, since both ID theorists and evolutionary biologists know that NS can optimize structures, up to a point. The evidence you’ve given is of the kind we’ve already had, but it doesn’t answer the more interesting questions Axe et al. are asking.

    Surely you’d agree that creation of proteins is different from optimization of proteins? What evidence have you presented that NS can create functional proteins, beginning with random sequences? Instead, we’re shown that existing enzymes, when degraded, can be optimized back to existing function. I’m not denigrating the work done, which is a step forward toward answering the questions we ask. I’m just questioning the extrapolation you make from that research to your much larger claims.

  41. Heinrich also wrote

    Now, would you care to answer the question I asked – “Doesn’t ID commit the same error, by assuming that because something is complicated, it can’t happen very often?”

    In short, no, ID doesn’t, because it isn’t making the assumption you claim it does.

    I’ll spell it out even more explicitly, since I seem incapable of getting you to understand this simple point.

    ID does not argue that “because [the creation of functional machinery like life forms] is complicated, it can’t happen very often.”

    Instead, ID theorists empirically observe that the creation of life from non-life doesn’t happen very often, so they infer that the task must be difficult.

    Do you understand the difference between these two?

  42. Instead, ID theorists empirically observe that the creation of life from non-life doesn’t happen very often, so they infer that the task must be difficult.

    We have never observed planet formation.

    For all you know, life arising from non-life may be happening every year; the problem is that the universe is a big place, and we are stuck in one tiny part of it, unable to observe the overwhelming majority of its contents in anything but the most rudimentary detail.

    There is no way for ID theorists or anyone else to make reliable observations like this – you infer that the task may be difficult (and you may be right) but the inference has no basis in observation – observations of the kind required to make this inference are impossible.

    What we see is one planet full of life – and which would most likely stomp all over any new examples of protolife popping up here, obliterating any evidence that it happened.

    As I said – scientists empirically observe that planets don’t form very often – in fact we have never observed it happening at all. Perhaps we should conclude that an intelligence was required?

    If you want to correct your statement, then a better position would be to point to the weakness in abiogenesis theories, despite the decades of research, as an indication that the formation of life is complicated (more complicated than planet formation).

  43. DrBot,

    And why aren’t the other planets teeming with life? Are the forces of physics or NS not operating on those? Why are they barren if forming life is an easy task that “happens all the time”?

    Why do sterile environments remain sterile?

    I’m going based on what we know and observe, not on what we don’t.

    You then ask about planet formation:

    As I said – scientists empirically observe that planets don’t form very often – in fact we have never observed it happening at all. Perhaps we should conclude that an intelligence was required?

    We observe planets don’t form very often, and we then infer that it is a difficult task to create a planet. Do you think it is an easy task?

    You bring intelligence into it, but that is a down the road question. The premise that Heinrich argued with was simply that creating life forms is a difficult task.

  44. The article discusses experiments of four or so enzymes/proteins that were mutated slightly, then found that they could be optimized again by some path, after testing all paths. Although this is interesting in its own right and is good research, it simply tells us that the fitness landscape has small adaptive paths (small hills) if you start close enough to an existing protein.

    Yep, and that’s what you were asking for in 30:

    B) Nature has landscapes of the right kind (and can find/create them without intelligent guidance)

    You haven’t shown B, so you still haven’t demonstrated that NS is a capable mechanism to produce the effect we see. You’ve claimed it, but haven’t demonstrated it.

    With no mention of the size of the fitness surface to be crossed. I provided the evidence, you moved the goalposts.

    And you do it again:

    Surely you’d agree that creation of proteins is different than optimization of proteins? What evidence have you presented that NS can create functional proteins, beginning with random sequences?

    This, from the Evil Panda’s Thumb (also see the previous post linked to).

    Instead, ID theorists empirically observe that the creation of life from non-life doesn’t happen very often, so they infer that the task must be difficult.

    But it only needs to have been sufficiently uncommon that it happened once: obviously there’s observer bias after that (well, OK, we actually need abiogenesis leading to intelligent life).

    What observations do you use to say that “the creation of life from non-life doesn’t happen very often”? The only data I know of has a sample size of 1, and may well be a censored observation (because once life has evolved once, it is likely to have out-competed any later proto-life).

  45. And why aren’t the other planets teeming with life? Are the forces of physics or NS not operating on those? Why are they barren if forming life is an easy task that “happens all the time”?

    How did you determine that the other planets aren’t teeming with life?

    How many planets are there in the universe anyway and what percentage of them have we observed in enough detail to see traces of life?

    I’m going based on what we know and observe, not on what we don’t.

    If you observe that there are no tigers in your house do you infer that there are no tigers on earth?

    Again – what percentage of planets in the universe have we observed in sufficient detail to determine if they harbor life?

    I am going on what we don’t know – which is a tad more than 99.99999999% of the universe.

    You can’t determine if life occurs frequently in the universe when you can’t observe the overwhelming majority of the universe in sufficient detail.

  46. DrBot wrote

    How did you determine that the other planets aren’t teeming with life?

    Mars teeming with life? How about Jupiter? Mercury? Uranus? If any planet in our solar system other than earth is teeming with life, I’ll withdraw my point.

    If you observe that there are no tigers in your house do you infer that there are no tigers on earth?

    No, but you infer that tigers don’t form spontaneously in your house within the span of months (or however long you’ve had a tiger-free house). How long have the planets in our solar system been around again?

    We’re not saying life arising from non-life is impossible; we’re trying to estimate the frequency of it occurring. We’ve had a few billion years worth of trials on other planets (unless the laws of physics don’t allow for OOL or NS on those?), and they haven’t produced life (based on our best current knowledge). Therefore, the obvious inference is that the task of creating/finding functional living configurations is a difficult one.

    Do you have a better explanation for the barrenness of the other planets we’ve observed? (If you say special conditions are needed for life to form, I’d agree wholeheartedly. My list of necessary conditions would even be slightly larger.)

  47. Heinrich,

    Far be it from me to move goalposts. Asking for evidence that the large scale topology of the protein landscape is amenable to GA search is different than asking for evidence that a small portion of it is. I already had evidence of the latter. I’m asking about evidence concerning the former, which you’d have to show if your claim is valid.

    If you think the existence of a few adaptive paths is strong evidence that the landscape as a whole is amenable to GA search, let me remind you that even completely random (“spiky”, rugged) landscapes have adaptive paths. For example, in a landscape containing 32 elements where fitness values are assigned at random, I’ve found that there exist an average of 5 adaptive paths (where each intermediate confers higher fitness than its neighbor). Yet a GA wouldn’t perform well on such a landscape.

    You have any evidence that the landscape as a whole is similar to the small part the review mentions?

  48. PS, those adaptive paths consist of five intermediates, as in the study you linked to. So what they found occurs even in landscapes that GAs perform poorly on.
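    That average of 5 can be checked by simulation. A sketch, assuming my reading of the setup described in the thread: 5 mutational sites (2^5 = 32 genotypes), i.i.d. random fitness values, with the wildtype forced to be least fit and the quintuple mutant most fit, counting how many of the 5! = 120 mutational orderings increase fitness at every step:

    ```python
    import itertools
    import random

    N = 5  # mutational sites; 2**N = 32 genotypes, 5! = 120 orderings

    def random_landscape():
        """Random fitness on all 32 genotypes, with the wildtype forced to
        be least fit and the quintuple mutant most fit (matching a walk
        with fixed start and end points)."""
        fit = {g: random.random() for g in itertools.product((0, 1), repeat=N)}
        lo, hi = min(fit.values()), max(fit.values())
        fit[(0,) * N] = lo - 1.0
        fit[(1,) * N] = hi + 1.0
        return fit

    def accessible_paths(fit):
        """Count mutational orderings where fitness rises at every step."""
        count = 0
        for order in itertools.permutations(range(N)):
            g, f, ok = [0] * N, fit[(0,) * N], True
            for site in order:
                g[site] = 1
                nf = fit[tuple(g)]
                if nf <= f:
                    ok = False
                    break
                f = nf
            count += ok
        return count

    random.seed(1)
    trials = 2000
    mean = sum(accessible_paths(random_landscape()) for _ in range(trials)) / trials
    print(round(mean, 2))  # about 5: each path's 4 intermediates are in
                           # random order, so E = 120 / 4! = 5
    ```

    So a handful of accessible paths is exactly what a maximally rugged landscape produces, which is why their mere existence says little about GA-friendliness.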

  49. Mars teeming with life? How about Jupiter? Mercury? Uranus? If any planet in our solar system other than earth is teeming with life, I’ll withdraw my point.

    You avoided addressing my point.

    What you imply from the above is that our solar system is a representative sample of other solar systems. If true, then we observe one planet in the sample teeming with life; therefore, if we accept this as a representative sample, we ought – based on observation – to expect one planet per solar system teeming with life in the universe.

    How many solar systems are there in the universe?

    Let’s take the above and be conservative – 1 life-giving planet per 1,000,000 solar systems.

    How many solar systems in the universe again? There are an estimated 10^11 galaxies containing tens of billions of stars.

    Is one sample – our sun and solar system – representative?
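    DrBot’s arithmetic can be sketched out explicitly. The figures are the rough ones from the comment (10^11 galaxies; “tens of billions of stars” taken here as 10^10 per galaxy, an assumption for the sketch), combined with his conservative one-in-a-million rate:

    ```python
    # Back-of-the-envelope: even a very low per-system rate implies an
    # enormous absolute number of life-bearing systems.
    galaxies = 10**11
    stars_per_galaxy = 10**10   # "tens of billions" taken as 10^10 (assumed)
    per_million = 10**6         # 1 life-bearing system per million

    planets = galaxies * stars_per_galaxy // per_million
    print(f"{planets:.0e}")  # 1e+15
    ```

    The rhetorical point is that a sample of one solar system cannot distinguish a rate of 1 from a rate of 10^-6 per system, yet the two imply wildly different universes.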

  50. DrBot,

    I see your point, but you fail to see mine.

    I claim that creating life is a difficult task. You disagree. I make the simple point that if the creation of life were an easy task (meaning it doesn’t take billions of years on average, given planetary resources, to produce it), we would see more of it. I point out that unintelligent forces have been working on the other planets in our solar system for billions of years, yet haven’t produced life. If the task were not difficult, surely unintelligent mechanisms could produce the effect given that much time.

    As far as we can tell, only carbon-based life has arisen within that time period. So yes, the relative absence of life in our solar system is strong evidence that the creation of life is a difficult problem in general. You may disagree with premise three (which I’d expect you to), but to disagree with premise one doesn’t make much sense.

    You ask:

    Is one sample – our sun and solar system – representative?

    If you mean the observation that if we allow unguided mechanisms to act on all the matter and energy of several planets for billions of years we should expect functional self-replicating machinery to arise in at most one trial, then yeah, that sounds pretty accurate. Given those odds, I’d call creating life a very difficult problem; at the very least, I don’t like those odds.

    Now you answer my question: If creating life from non-life is an easy task, why are the other planets in our solar system not teeming with life?

    Please don’t dodge my question again. I’m getting tired of asking it.

  51. Dr Bot:

    Your challenge remains, i.e. explaining why FSCI as seen in cell-based life is credibly well beyond the capacity of chance plus necessity on the gamut of our observed cosmos. And indeed, it turns Dawkins’ attempt to use the claimed powers of Darwinian evolution to overturn the inference to design on its head.

    GEM of TKI

  52. Asking for evidence that the large scale topology of the protein landscape is amenable to GA search is different than asking for evidence that a small portion of it is.

    Is it? Do you have evidence for this?

    If you think the existence of a few adaptive paths is strong evidence that the landscape as a whole is amenable to GA search, let me remind you that even completely random (“spiky”, rugged) landscapes have adaptive paths. For example, in a landscape containing 32 elements where fitness values are assigned at random, I’ve found that there exist on average 5 adaptive paths (where each intermediate confers higher fitness than its neighbor). Yet a GA wouldn’t perform well on such a landscape.

    Why wouldn’t a GA perform well? And what evidence do you have that a random landscape is a good model for a real fitness landscape?

    You have any evidence that the landscape as a whole is similar to the small part the review mentions?

    No I don’t (this isn’t my area of research). Do you have any evidence that it isn’t? Axe’s paper certainly doesn’t count – I can’t see any relevant calculations in there. Gauger’s paper is better, in that it provides some evidence about part of the adaptive landscape, but they don’t give any argument that this is what happened in evolutionary time. If Kbl didn’t evolve from BioF, the paper is largely irrelevant.

  53. Heinrich,

    Local structure can differ from global structure in a fitness landscape. The obvious example is a random landscape with a small patch that is smooth and unimodal.

    Second, run some GA experiments or read up on them. It is common knowledge that on completely randomized landscapes (which you can visualize as simply a very rugged series of spikes), GAs are all but useless. But don’t take my word for it, simply test it yourself.

    Third, I never claimed natural landscapes were random. Please pay more attention. I claimed that the presence of a limited number of adaptive paths within a landscape is not evidence that the landscape as a whole is conducive to GA search. I used random landscapes as a counterexample. So finding adaptive paths locally, in a limited area of the landscape, is not evidence that the landscape as a whole has the topology you desire.

    Lastly, you wrote

    Do you have any evidence that it isn’t?

    Again, my job isn’t to prove that your model doesn’t work, your job is to prove that it does. I don’t have to prove the landscape isn’t of the right type; you have to prove that it is. If you claim NS solves every problem, then your burden of proof will obviously be larger. I only expect you to provide evidence for all cases you claim GA search solves.

    Now that you’ve been called out on the difference between local structure and global, you quickly return to your “prove me wrong” position. It isn’t my job.

    If someone claims “I have a machine that converts sunlight into oil”, I would say “Great, show me how it works.” If they then reply “You can’t prove it DOESN’T work”, I’d smile and politely walk away. So please stop asking me to prove your model doesn’t work.
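    [Editor’s note: the “test it yourself” suggestion above is easy to reproduce. Below is a minimal, hypothetical sketch, not anything from the thread: a greedy hill climber – the simplest stand-in for selection – run on a smooth “onemax” landscape (fitness = number of 1-bits, no local optima) and on a completely random landscape over the same 10-bit genotype space. On the smooth landscape it reaches the global optimum every time; on the random one it usually gets stuck on a local peak.]

```python
import random

def hill_climb(fitness, n_bits, rng):
    """Greedy hill climber: from a random start, repeatedly move to the
    best strictly-fitter one-bit neighbour until none exists."""
    x = rng.randrange(2 ** n_bits)
    while True:
        neighbours = [x ^ (1 << i) for i in range(n_bits)]
        best = max(neighbours, key=fitness)
        if fitness(best) > fitness(x):
            x = best
        else:
            return x  # local (possibly global) optimum reached

def success_rate(fitness, n_bits, trials, rng):
    """Fraction of runs that end at the global optimum."""
    opt = max(range(2 ** n_bits), key=fitness)
    return sum(hill_climb(fitness, n_bits, rng) == opt
               for _ in range(trials)) / trials

rng = random.Random(0)
n_bits = 10

# Smooth landscape: fitness = number of 1-bits (single peak, no traps).
smooth = lambda x: bin(x).count("1")

# Rugged landscape: an independent random fitness for each genotype.
values = [rng.random() for _ in range(2 ** n_bits)]
rugged = lambda x: values[x]

print(success_rate(smooth, n_bits, 200, rng))  # reaches the optimum every run
print(success_rate(rugged, n_bits, 200, rng))  # rarely finds the optimum
```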

  54. Second, run some GA experiments or read up on them. It is common knowledge that on completely randomized landscapes (which you can visualize as simply a very rugged series of spikes), GAs are all but useless. But don’t take my word for it, simply test it yourself.

    What does this have to do with real fitness landscapes?

    Third, I never claimed natural landscapes were random. Please pay more attention. I claimed that the presence of a limited number of adaptive paths within a landscape is not evidence that the landscape as a whole is conducive to GA search.

    Oh come on. You’ve just backed that argument up by reference to random landscapes. Either you’re suggesting that fitness landscapes are random, or you’ve just used an irrelevant argument.

    Again, my job isn’t to prove that your model doesn’t work, your job is to prove that it does.

    It is your job to substantiate your arguments. If you want to claim that GAs can’t work on real fitness landscapes, then provide some evidence. I’ve provided evidence that evolution can traverse a fitness landscape. Your counter, that it’s only a small part of the landscape, has some validity, but please back up your argument that this makes a difference – that with real fitness landscapes the topology is sufficiently different that a GA wouldn’t work on it.

no-man, you said:

    I claim that creating life is a difficult task. You disagree

    I said:

    you infer that the task may be difficult (and you may be right) …

    I claimed your argument from observation is flawed and gave my reasons. I did not claim that creating life is easy.

    I point that unintelligent forces have been working on the other planets in our solar system for billions of years, yet haven’t produced life. If the task were not difficult, surely unintelligent mechanisms could produce the effect given that much time.

    Why would you assume all those things?

    Perhaps the answer is – as many scientists believe – liquid water. We live in an area often termed the ‘habitable zone’ because it allows liquid water to exist on the surface of the planet. We also live on a moderately sized planet – not a gas giant or a rock too small to hold an atmosphere.

    Now of course we don’t know much about the other planets – what is in the ice covered ocean on Europa for example – so we don’t know if any of the other planets also harbour life but they certainly don’t appear to be teeming with life.

    So back to your question –

    If creating life from non-life is an easy task, why are the other planets in our solar system not teeming with life?

    Why isn’t the sun teeming with life?

    As already stated above, life may require conditions like liquid water – conditions that exist on this planet but not on others in our solar system. If we want to determine if life is ‘common’ we need to determine how many life-compatible planets there are in the universe before we can do anything else.

    In this context I would accept 1 planet per million solar systems as ‘common’ because of the sheer size and diversity of the universe – it would still give us an enormous number of planets teeming with life across the universe, almost all of which we could never properly observe or visit.

    the relative absence of life in our solar system is strong evidence that creation of life is a difficult problem in general.

    Relative absence? There are eight planets in our solar system (unless you count Pluto) and one of them is teeming with life. If we go on this observation then we could say that 12.5% of planets are teeming with life.

    If you mean the observation that if we allow unguided mechanisms to act on all the matter and energy of several planets for billions of years we should only expect functional self-replicating machinery to arise in one or less trials, then yeah, that sounds pretty accurate. Given those odds, I’d call creating life a very difficult problem; at very least, I don’t like those odds.

    12.5% of planets in our solar system are teeming with life. I also don’t like those odds which is why I am arguing that your inference is based on flawed reasoning.

    I’ll re-state my original point – we have no way currently to survey planets in our universe in sufficient detail to determine which ones harbour life. We can’t make the observations, so you can’t make any inference. I am agnostic about the idea that life may be common in the universe, and skeptical about abiogenesis being ‘easy’. Arguments in the OOL community about how easy it is tend to be contingent on the existence of planets with favourable conditions (like earth). They don’t argue that life would occur on any planet, so drawing conclusions from the apparent absence of life on the other seven planets in our solar system completely misses the point.

    You need to remember the size of the universe when arguing about whether life is easy or frequent. To pluck a figure out of the air, if one in one hundred billion stars hosted a life-bearing planet, that’s still a lot of life.

    Making arguments about the diversity of life on earth would be pointless if you lived in a desert and used your backyard as a sample. “If the earth is teeming with life then I ought to see it out my window”
