
ID-Compatible Predictions: Foresighted Mechanisms Identified?

Core ID and ID-compatible hypotheses make various predictions. For example, there are the confirmed predictions related to junk DNA and the genetic nature of the platypus, the predictions about designer drugs, long-term preservation mechanisms for conserving information that is not currently implemented, and retroviruses being capable of being used to implement designed changes. The scientific research we have so far does not provide conclusive positive evidence for some of these predictions, although there are tantalizing glimpses that they may yet be confirmed. There are also some types of observed changes that happen so rapidly and repeatedly that they would seem to fall outside the domain of strictly Darwinian processes. But such research is just beginning. (And Ken Miller claims that ID cannot make predictions and research cannot occur…)

But then there are the predictions specific to ID-compatible hypotheses such as front-loading.

There are multiple variants of “front loading”:

1. Design was implemented in the universe itself. Everything is deterministic, and a plan rolled out from the initial implementation. Behe discussed this possibility briefly in EoE.

2. Design is not only in the universe and its laws but in the Origin Of Life (OOL). Darwinian mechanisms are taken into account by the Designer(s) and the architecture of life itself is configured to be modular, so that multi-functionality, gene duplication, cooption, and preadaptation, etc. are able to unmask secondary information.

Dembski’s recent work shows that active information is required in order to find the targets in a search space. Besides “directed front-loading” there is the possibility that ID only holds true in regards to the OOL. The front-loaded active information is the design of the system itself (modular components, plasticity in the language conventions, foresighted mechanisms, etc.), which allows the “evolving holistic synthesis” to function without a directly embedded plan. I believe this is Mike Gene’s favored hypothesis.

Of course, this presumes that Darwinian mechanisms are capable of this task, for which we have no positive evidence at this time. I personally believe that, given a system intelligently constructed in a modular fashion (designed for self-modification via the influence of external triggers), Darwinian processes may be capable of more than we currently have evidence for. But that would be foresighted non-Darwinian evolution in any case, and even if there are foresighted mechanisms for macroevolution they may be limited in scope.

3. Similar to variant 2 except there is a specific plan encoded into the original life (a single LUCA) and Darwinian mechanisms play less of a role, only being capable of producing minor variation. This plan may or may not be self-terminating. John Davison is heavily in favor of the self-terminating variant, and I think he believes there may be multiple LUCAs.

4. Similar to variant 2 or 3 except that there are multiple instances of Design (multiple Origins Of Life, multiple LUCAs) occurring at the level of kingdom or phylum.

5. Essentially variants 2–4, except with the addition of Designer Intervention for certain information that is/was not modular but specific to a particular organism. I believe this is UD Jerry’s favored position.

This article will discuss recent evidence related to predictions made about foresighted mechanisms. The focus will be on variants 3-5 of front-loading.

Assuming intelligent evolution, for some types of Designed modifications the mechanism may not be self-contained within biology. If external mechanisms or direct modification are involved, we may only find evidence for foresighted mechanisms that are limited in capability.

What are “foresighted mechanisms”? Allen MacNeill first raised the objection that the term “random mutations” is not precise enough, which I believe to be true. Although, to be fair, when ID proponents refer to “random mutations” they usually mean it to encapsulate all currently known “engines of variation” (MacNeill’s favored term). Behe did this in EoE, for example: on one page I remember him listing various mechanisms such as gene duplication, but in general he referred to them all as “random mutations”.

It is perhaps instructive to point out that Darwin never used the term “random mutation” (nor “random” anything, for that matter) in the Origin of Species. The concept of randomicity is a mostly 20th century concept (especially in biology), and one of dubious empirical merit IMHO.

More useful might be “non-foresighted”, as that describes more precisely the character of most (but not all) of the new variations that appear among the members of populations of living organisms.
….
Again, I would urge ID supporters to recognize that there is nothing intrinsic to evolutionary theory that would necessarily rule out design in nature. Indeed, as I have argued in several venues, nature is packed with design; that’s what a genome is – a design for an organism. So the question really is, where does the information in the genome come from, and how much does it contribute to the actual phenotypes of organisms? How much of the “design” of an organism is provided by its environment? And can any of this be shown to be foresighted? All good questions, and all answerable by empirical research. However, absent empirical support, none of them can be answered by theoretical speculation alone.

So a foresighted mechanism would be one that self-modifies its information in response to external stimuli based upon a preconceived design. Or, in the case of higher creatures, viruses may have once served as attached-but-still-external foresighted mechanisms. (Daydreaming: I wonder if I’ll ever be quoted in a biology book someday…)

The only negative to using the term Non-Foresighted Variation (NFV) is that it assumes Darwinism to be true if that term is used to encapsulate everything. For example, an intelligence may set conditions by which a pseudorandom function induces variation. So foresight would be involved in setting the conditions. NFV would be a subset of all mechanisms for variation, whatever that may be called.
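To make the distinction concrete, here is a hypothetical Python sketch (the seed and “hotspot” values are invented purely for this example): each draw is unpredictable to an observer, yet the conditions governing the draws were fixed in advance, so any foresight resides in the setup rather than in the individual variations.

```python
import random

# Hypothetical illustration: each draw below is unpredictable to an
# observer, yet the seed and the region being sampled were fixed ahead
# of time. Any "foresight" lives in that setup, not in the draws.
DESIGNED_SEED = 1234           # invented value for the example
HOTSPOT = list(range(40, 60))  # variation confined to a preset region

rng = random.Random(DESIGNED_SEED)

def next_variation():
    """Return one 'mutation site'. Each call looks random, but the
    distribution over sites was front-loaded when HOTSPOT was chosen."""
    return rng.choice(HOTSPOT)

sample = [next_variation() for _ in range(5)]
assert all(40 <= site < 60 for site in sample)  # never escapes the preset region
```

On this sketch, the draws themselves would count as NFV, while the choice of seed and hotspot would not.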

I previously discussed areas of research:

I was about to express the same thought, except with a different interpretation. Engineers will often design functionality that goes unused unless particular stimuli cause a triggered event (a function that is generally unexpressed except under certain conditions triggered by other functions or changes in input/system). I believe that such observations could be an avenue for ID-oriented research: looking for foresighted mechanisms.
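In software terms the idea might look like the following toy sketch. Everything here (the class, the trigger pattern of heat followed by low oxygen) is invented purely for illustration; it is not a model of any actual biological pathway.

```python
# Toy sketch of engineered latent functionality: a capability that
# stays unexpressed until a specific pattern of stimuli arrives.
class Organism:
    def __init__(self):
        self.latent_active = False
        self._recent = []  # short memory of recent stimuli

    def sense(self, stimulus):
        self._recent = (self._recent + [stimulus])[-2:]
        # Invented trigger pattern: heat followed by low oxygen
        # unmasks the otherwise-dormant function.
        if self._recent == ["heat", "low_oxygen"]:
            self.latent_active = True

    def respond(self):
        return "anaerobic" if self.latent_active else "aerobic"

bug = Organism()
bug.sense("heat")
assert bug.respond() == "aerobic"    # still dormant
bug.sense("low_oxygen")
assert bug.respond() == "anaerobic"  # trigger pattern fired
```

The point of the sketch is only that dormant, trigger-gated functionality is a familiar engineering pattern, which is what makes it a candidate signature to look for.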

Just the other day I was discussing this very subject with a friend. A good Designer would program biology to be proactive, to respond to an ever-changing environment. I used the example of Pseudomonas aeruginosa and its nylon-eating capabilities within 9 days, which some ID proponents have inferred may implicate foresighted mechanisms. I do not believe that hypothesis has been adequately explored yet. Merely saying that the processes of the modern synthesis and chance cannot account for it is not enough in my estimation. I would prefer that someone try to determine exactly what is triggering the change. It may also be that the design of the system itself allows such rapid evolution, and not externally-triggered mechanisms (sort of like how the designed shape of Legos allows many configurations). In any case, the modern synthesis proves useless here.

These Adriatic Lizards may be a similar avenue for ID research, although obviously a more difficult route. I think it’d be easier to observe bacteria.

But, really, in today’s environment I think the only way such research would be funded is if any discovered foresighted mechanisms were glibly written up as a product of evolution (along the same lines as modularity, due to its beneficial nature) via a disclaimer sentence.

The main point is that the existence of foresightedness entails intelligence. MacNeill recognizes this, and although he never said so, I would hope that finding such mechanisms would lead him to accept ID (or at least some ID-compatible hypotheses).

I previously made the following prediction:

But let’s say we did find such foresighted mechanisms. Darwinists might argue that such mechanisms would be selected for without intelligence being involved. After all, being foresighted would allow proactive responses to a changing environment and thus increase survivability. It’s kind of like how they create a story for modularity.

My prediction has come to pass. Such foresighted mechanisms that modify genes have been empirically identified. And the reaction from Darwinists has been as expected.

Bill sent me an email a little over a week ago describing the research.

A new study by Princeton University researchers shows for the first time that bacteria don’t just react to changes in their surroundings — they anticipate and prepare for them.

What we have found is the first evidence that bacteria can use sensed cues from their environment to infer future events

The research team, which included biologists and engineers, used lab experiments to demonstrate this phenomenon in common bacteria. [Insert Disclaimer]They also turned to computer simulations to explain how a microbe species’ internal network of genes and proteins could evolve over time to produce such complex behavior.[/Insert Disclaimer]

In one part of the study, the researchers studied the behavior of E. coli, the ubiquitous bacterium that travels back and forth between the environment and the gut of warm-blooded vertebrates. They wanted to explain a long-standing question about the bug: How do its genes respond to the temperature and oxygen changes that occur when the bacterium enters the gut?

The conventional answer is that it reacts to the change — after sensing it — by switching from aerobic (oxygen) to anaerobic (oxygen-less) respiration. If this were true, however, the organism would be at a disadvantage during the time it needed to make the switch. “This kind of reflexive response would not be optimal,” Tavazoie said.

The researchers proposed a better strategy for the bug. During E. coli’s life cycle, oxygen level is not the only thing that changes — it also experiences a sharp rise in temperature when it enters an animal’s mouth. Could this sudden warmth cue the bacterium to prepare itself for the subsequent lack of oxygen?

To test this idea, the researchers exposed a population of E. coli to different temperatures and oxygen changes, and measured the gene responses in each case. The results were striking: An increase in temperature had nearly the same effect on the bacterium’s genes as a decrease in oxygen level. Indeed, upon transition to a higher temperature, many of the genes essential for aerobic respiration were practically turned off.

To prove that this is not just genetic coincidence[aka non-foresighted Darwinian mechanisms], the researchers then grew the bacteria in a biologically flipped environment where oxygen levels rose following an increase in temperature. Remarkably, within a few hundred generations the bugs partially adapted to this new regime, and no longer turned off the genes for aerobic respiration when the temperature rose.

And here’s where the predicted Darwinist interpretation takes place.

“This reprogramming clearly indicates that shutting down aerobic respiration following a temperature increase is not essential to E. coli’s survival,” said Tavazoie. “On the contrary, it appears that the bacterium has “learned” this response by associating specific temperatures with specific oxygen levels over the course of its evolution.” Lacking a brain or even a primitive nervous system, how is a single-celled bacterium able to pull off this feat? While higher animals can learn new behavior within a single lifetime, bacterial learning takes place over many generations and on an evolutionary time scale, Tavazoie explained. To gain a deeper understanding of this phenomenon, his team developed a virtual microbial ecosystem, called “Evolution in Variable Environment.” Each microbe in this novel computational framework is represented as a network of interacting genes and proteins. An evolving population of these virtual bugs competes for limited resources within a changing environment, mimicking the behavior of bacteria in the real world.

To implement this framework, the researchers had to deal with the sheer scale and complexity of simulating any realistic biological system. They had to keep track of hundreds of genes, proteins and other biological factors in the microbial population, and observe them as they varied over millions of time points. “Simulations at this scale and complexity would have been impossible in the past,” said Tagkopoulos. Even with the vast number-crunching power of the supercomputers provided by the University’s computational science and engineering support group, their experiments took nearly 18 months to run, said Tagkopoulos.

In this virtual world, microbes are more likely to survive if they conserve energy by mostly turning off the biological processes that allow them to eat. The challenge they face then is to anticipate the arrival of food and turn up their metabolism just in time. To help them along, the researchers gave the bugs cues before feeding them, but the cues had to appear in just the right pattern to indicate that food was on its way. [This sentence is vague, but are they intelligently helping them along in the same manner as AVIDA was helped along for evolving the EQU program?]

“To predict mealtimes accurately, the microbes would have to solve logic problems,” said Tagkopoulos, a fifth-year graduate student in electrical engineering and the principal architect of the Evolution in Variable Environment framework.

And sure enough, after a few thousand generations, an ecologically fit strain of microbe emerged which did exactly that. This happened for every pattern of cues that the researchers tried. The feeding response of these gastronomically savvy bugs peaked just when food was offered, said Tagkopoulos.

When the researchers examined a number of fit virtual bugs, they could at first make little sense out of them. “Their biochemical networks were filled with seemingly unnecessary components,” said Tagkopoulos. “That is not how an engineer would design logic-solving networks.” Pared down to their essential elements, however, the networks revealed a simple and elegant structure. The researchers could now trace the different sequences of gene and protein interactions organisms used in order to respond to cues and anticipate mealtimes. “It gave us insights into how simple organisms such as bacteria can process information from the environment to anticipate future events,” said Tagkopoulos.

The researchers said that their findings open up many exciting avenues of research. They are planning to use similar methods to study how bacteria exchange genes with one another (horizontal gene transfer), how tissues and organs develop (morphogenesis), how viral infections spread and other core problems in biology.

So now the solution to “core problems in biology” is foresighted mechanisms… NOT non-foresighted Darwinian mechanisms. Yet somehow this idea is twisted to fit into a Darwinian worldview. The sad part is that I predicted this reaction.

How many times have we noticed evidence of changes that have occurred faster than what RV+NS (non-foresighted mechanisms) should be capable of? Darwinists did not look for foresighted mechanisms since they would not expect them. Now they have found them, but due to their beliefs they’re determined to turn the evidence on its head.
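As an aside, the generational “learning” described in the press release can be caricatured in a few lines of Python. This is a toy selection loop of my own devising, not the researchers’ Evolution in Variable Environment framework, and every number in it is made up:

```python
import random

random.seed(42)
POP, GENS = 100, 200

def fitness(w):
    # w is how strongly a bug couples the cue (temperature) to its
    # response (pre-emptive metabolic switch). Assumed payoffs: the
    # benefit of anticipating outweighs the cost of staying poised.
    benefit, cost = 1.0, 0.3
    return (benefit - cost) * w

# Start with a population that barely responds to the cue at all.
pop = [random.random() * 0.1 for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                  # fitter half survives
    children = [min(1.0, max(0.0, w + random.gauss(0, 0.05)))
                for w in survivors]             # mutated offspring
    pop = survivors + children

mean_w = sum(pop) / POP  # cue->response coupling rises over generations
```

In a loop like this the coupling strengthens generation over generation, which is exactly why Darwinists can tell a selection story about any such mechanism once it is found; the dispute is over whether that story suffices.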

EDIT:

I’d be lying if I said that the ID movement has all its ducks in a row, and the relatively low amount of ID research is one example. It’s a real problem, even if there are real-life reasons for it, such as persecution and the need to maintain day jobs that usually don’t provide the opportunity for ID research. But increasing the amount of ID research is fixable, given enough support.

This is one aspect of this news that casts some gloom on the otherwise sunny outlook. This is exactly the type of research that Darwinists have effectively prevented ID proponents from undertaking. ID proponents have been talking about looking for such mechanisms for years. But how do you do research when they run you out of labs and universities and deny you funding?

Personally I’d love to see Darwinists and ID proponents working together. The major problem is procuring funding. I hate to see Darwinists demanding that ID proponents produce more research while at the same time advocating closing any potential avenues for this research to take place. If they want to be consistent, and if they take these questions seriously, they should be helping ID proponents receive a decent level of funding even if they believe in general that ID is incorrect. At the very least these research projects may discover the limitations of certain mechanisms and in the process uncover information that could advance medical technology. Who knows, maybe it would be an ID proponent who does the actual gruntwork and manages to find positive evidence for some Darwinian mechanism being capable of producing CSI.


28 Responses to ID-Compatible Predictions: Foresighted Mechanisms Identified?

  1. I know a prediction that can be made from Genetic Entropy. All bacteria that have had to adapt will be less fit than the parent strain in native environment. As well all bacteria that have had to face multiple adaptations will be even less fit when compared to the original “parent”.

  2. I know a prediction that can be made from Genetic Entropy. All bacteria that have had to adapt will be less fit than the parent strain in native environment.

    That prediction is only valid if we take only NFV into account. FV could allow adaptation that is both constructive and beneficial (more fit). Also, you’re not taking into account the possibility of special repair mechanisms that are only triggered by certain events. It’s sort of like how a full system virus scan or chkdsk is not triggered at all times. All of those would be foresighted mechanisms, which entail intelligence. So ID can be true and GE not completely true.

    But please don’t drag this topic down with another discussion of Genetic Entropy.

  3. Patrick, I am sorry for disturbing the thread if you feel this is irrelevant, so all I will state is:

    You must stay within the law of “Conservation of Information”. Thus even “front-loaded” variation will come at a cost to the information in the “optimal genome” of the parent species. (It is measurable as a loss of meaningful genetic diversity.)

    Thus, because of the law, the parent strain will always be “more fit” in its native environment. The only exception to this will be if the parent strain is already degraded from the optimal original “parent strain” and the “sub-species” remutates (repairs) to a point that is closer to the original “grand”-parent strain.
    All this, of course, presupposes that the “Designer” is not tinkering with the genome once He has created it.

  4. I think foresighted mutations are apparent in microbes. I mentioned that the impression of foresight in mutation is so strong that James Shapiro considers microbes to be conscious beings.

    See: Who are the (multiple) designers?

    Bacteria as natural genetic engineers….

    This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings

    and

    ABSTRACT: 40 years experience as a bacterial geneticist have taught me that bacteria possess many cognitive, computational and evolutionary capabilities unimaginable in the first six decades of the 20th Century. Analysis of cellular processes such as metabolism, regulation of protein synthesis, and DNA repair established that bacteria continually monitor their external and internal environments and compute functional outputs based on information provided by their sensory apparatus. Studies of genetic recombination, lysogeny, antibiotic resistance and my own work on transposable elements revealed multiple widespread bacterial systems for mobilizing and engineering DNA molecules. Examination of colony development and organization led me to appreciate how extensive multicellular collaboration is among the majority of bacterial species. Contemporary research in many laboratories on cell-cell signaling, symbiosis and pathogenesis show that bacteria utilize sophisticated mechanisms for intercellular communication and even have the ability to commandeer the basic cell biology of “higher” plants and animals to meet their own needs. This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings.

  5. scordova: “This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings”

    Meaning what – that they have conscious free will?

    ———–

    re: change to anaerobic e coli being triggered by heat.

    (random thoughts.)

    The question was raised, how could cells learn without a brain. Maybe if you clumped these cells together and positioned them on top of someone’s head they would be a brain.

    What was the ID inference in regards to all this? Pavlov’s dog comes to mind. Was the dog designed to salivate upon hearing a bell?

    But in the case of the E. coli, the “stimulus” is merely an increase in heat, or rather energy. Does energy actually qualify as an environmental stimulus in the usual sense? Energy is what drives the whole process. It’s very easy to imagine certain processes being triggered by an increase in heat. Energy doesn’t have a complex specification. It is a raw quantity.

    But if you can imagine cells reacting and adapting to some completely arbitrary stimulus that does have a complex specification ( a ringing bell) how difficult would it be for them to adjust their behavior in response to a change in energy levels.

    Animals’ winter coats, with changes in color, density, and texture are probably triggered far in advance by changes in heat (or a change beginning in the number of daylight hours, that is sun exposure).

    Also pain reactions to excess heat, far in advance of any damage being done.

  6. OK, the stimulus in the case of the bell would just be sound, or a particular pattern of fluctuations in energy levels.

  7. Patrick (2) (and bornagain77 1 and 3),

    While there is no need to identify ID with genetic entropy, genetic entropy is a perfectly legitimate subtheory, or (as Lakatos would call it) auxiliary hypothesis for the core hypothesis of ID. As is true for all core hypotheses (including Darwinian evolution ;) ), it is very hard to falsify ID. However, auxiliary hypotheses make much more direct contact with reality, and are subject to hypothesis testing in the more conventional (Popperian) sense. The same goes for genetic front-loading, the hypothesized IC of the flagellum, and even the more scientific (less interventionist) varieties of YEC.

    One should encourage the testing of all auxiliary hypotheses (that’s what ID research is all about). One should simply be aware that the testing of those hypotheses does not bear a 1 to 1 correspondence to the testing of ID. This will frustrate DEists who wish for a silver bullet to demolish ID. But they have their own problems in this regard. :)

  8. Patrick -

    If you think that’s cool, you should check out Ecological Developmental Biology.

  9. Somewhat related:

    Unraveling Bacteria Communication Pathways

    ScienceDaily (June 18, 2008) — MIT researchers have figured out how bacteria ensure that they respond correctly to hundreds of incoming signals from their environment.

    http://www.sciencedaily.com/re.....125038.htm

  10. I would summarize and restate this for clarification, if just for myself. The Princeton research study claims to have discovered the biological mechanism behind the fascinating, apparently foresighted response of these bacteria to certain changes in their environment. This “elegant” mechanism itself is already built in and didn’t “evolve” from either random or foresighted mutations during the simulation study. The environmental-foresight mechanism is of course predictably assumed by the Darwinist researchers to have resulted from RV + NS, along with absolutely everything and anything else discovered in biology. If it is discovered, then of course it originated from RV + NS: no other alternative, and no real need to prove it. So, unfalsifiable, but who cares (in the mainstream thinking).

    This mechanism is specific to E. coli and apparently adapted to mammalian hosts. The “foresight” is in its ability to “foresee” the needed genetic expression changes coming up when the bacterium’s environment is about to drastically change to an anaerobic state.

    This mechanism either developed over ages by RV + NS, or by some form of ID. One possible form of the ID process would have been the “preloading” of the genetic mechanism itself in the bacterial genome, to be eventually triggered into expression when mammals evolved. Doesn’t seem as plausible as interventionist scenarios, but that’s just me.

    The “intervention” would have been to inject the apparently foresighted mutations creating the system elucidated by the researchers.

  11. PannenbergOmega

    This is interesting. Yet this is not the “creation by progressive stages” that is mentioned in the Book Of Genesis.

  12. @11. I think you have this website confused with a creationist one (rough definitions)…

    Creationism: hypothesizing a Biblical account of creation through analysis of real-world data.

    ID: detecting design within real-world data, and hypothesizing that macro-evolution (among other things) is better explained by a non-human intelligent designer than by purely materialistic processes.

    ID does not presuppose the identity of the designer. It could be space-elves. Perhaps it might be developed so far as to infer the nature of the designer, if certain parts of the “scientific” community would stop harassing it.

    And the topic of this post is “ID-Compatible Predictions: Foresighted Mechanisms Identified?”, not anything to do with religion. Yet you bring up religion? I’m not sure what your position is, but it seems Darwinist: data interpreted in light of the philosophical ramifications of a conclusion, rather than exploring all possibilities and reaching a conclusion first (i.e. what the data suggests, not what the conclusion suggests).

    (relevant post coming up next)

  13. @5 and others.

    I am not so sure how animal training is related to this ‘E. coli’ training. To me it just sounds like another case of Darwinian selection, where those cultures that didn’t respond according to the scientists’ specifications were rejected and only those that matched the criteria were retained. However, I could be wrong.

    Pavlov’s dog is perhaps a suitable example of what might be happening. This is called “Classical Conditioning” in associative learning. Initially this occurred:

    1) Conditioned stimulus: Buzzer making sound before being fed.
    2) Unconditioned stimulus: Meat powder puffed into dog’s mouth.
    3) Unconditioned response: Salivation in dog’s mouth.

    But through associative learning, Pavlov’s dog learned to associate the buzzer with food, and thus a previously unconditioned physiological response to food (salivation) became a conditioned response:

    1) Conditioned Stimulus: Buzzer.
    2) Conditioned Response: Salivation.

    So in the end Pavlov could make his dog salivate by just pressing a buzzer.
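    For what it’s worth, the strengthening and extinction of such an association is commonly modeled with the textbook Rescorla-Wagner update rule. The sketch below illustrates that standard model (the parameter values are arbitrary); it is not taken from the E. coli study:

```python
# Rescorla-Wagner: V += alpha * beta * (lam - V), where V is the
# associative strength of the conditioned stimulus (buzzer) and lam
# is the maximum strength the unconditioned stimulus (food) supports.
alpha, beta = 0.3, 1.0  # salience / learning-rate parameters (assumed)

V = 0.0
for _ in range(20):                # acquisition: buzzer paired with food
    V += alpha * beta * (1.0 - V)
acquired = V                       # climbs toward 1.0

for _ in range(20):                # extinction: buzzer alone (lam = 0)
    V += alpha * beta * (0.0 - V)
extinguished = V                   # decays back toward 0.0
```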

    Being held in place by the ropes for the experiments in the lab, whilst bad elsewhere, was a ‘good’ experience, because it was also linked to receiving the meat powder. The dogs would learn, by associative learning, that the ropes came before the meat powder just like the buzzer did, so they may also associate the presence of white ropes with salivation: tied up with white ropes -> salivation.

    AND THEN there’s context specificity. One must be wary in animal training that the dog doesn’t learn to associate these responses with a specific environment – Context Specificity. E.g. whilst being tied up or hearing the buzzer in the lab produces salivation, the same occurrence in the park may not. For true learning to occur, the response must be dependent on the desired stimulus alone, not on others or the environment.

    So, um, match that up to the example of E. coli, because Classical Conditioning (associative learning) is what it *might* be displaying.

    ======
    To finish this ramble: with regards to animals’ coats @5, yes, the growth of a winter coat is triggered by a decrease in the number of daylight hours (this is currently thought to be the primary, but not necessarily exclusive, factor). As daylight hours decrease, the pineal gland begins to secrete melatonin in more frequent ‘pulses’, leading to endocrinological changes such as thickening of the coat. It can also signal other changes; e.g. Autumn, with its decreasing daylight hours, triggers the Rut (mating period) in deer.

  14. JunkyardTornado

    Avonwatches: ” am not so sure how related animal training is related to this ‘E. coli’ training. To me it just sounds like another case of darwinian selection, where those cultures that didn’t respond according to the scientist’s specifications were rejected and only those that matched up to the criteria retained. However, I could be wrong.”

    The E. coli when outside the body operates aerobically, but inside the body it switches epigenetically, by turning certain genes on and others off, to start operating anaerobically because no oxygen is present. This is how they function normally outside of the lab. There is some controversy over on TelicThoughts because the computer simulation was doing mutations, whereas the actual bacteria involved epigenetic occurrences and not mutations. Anyway, the discussion over there is quite a bit more technical.

    But the original motivation of the experiment was that if the bacteria waited until they got to the gut to start switching to anaerobic, that behavior would not be optimal, so the hypothesis was that the bacteria were starting these epigenetic changes previous to this, before oxygen levels dropped, which as it turned out they were, in response to temperature changes upon entering the mouth.

    But it’s not hard to envision how this behavior might not have been “optimal” at some time in the distant or not-so-distant past. Then the bacteria were able to match to the surrounding environment in the gut, where that match would include everything: the temperature, the lack of oxygen, and so forth. “Matching to the environment”, or words to that effect, is alluded to repeatedly by certain participants over at TelicThoughts, as the supposed result of mutations or whatever. I personally keep thinking in terms of photography, and how a single chemical, silver nitrate on a plate, matches its environment with perfect photographic accuracy. But once this imprint exists in the E. coli (while admitting I know very little about the actual mechanism there) it’s not hard to envision how a partial match, which starts to occur with a temperature change in the mouth, starts the epigenetic changes as well.

  15. 15
    PannenbergOmega

    Hi Avon. Nope, not a Darwinist.
    Though I am sympathetic to Biblical Literalism.

    What is described in the Book of Genesis is a clear process of progressive creation of the basic kinds. Plants, Birds/Water Creatures, Animals and finally Mankind.

    Ciao!

  16. 16

    …but this thread has nothing to do with that… maybe I’m missing the link/relevance?

  17. 17
    PannenbergOmega

    Hi Avon, I know ID and Creationism are two different things. Yet, what good will this do for the Gospel if there is design in nature (the result of a mind) but it conflicts with the Genesis account?

  18. To put the discussion back on topic, ALL foresighted mechanisms “could” be as limited in scope as this example. Thus the limited evidence we have so far is currently compatible with PannenbergOmega’s preferred stance. Further research is required to see whether there exist embedded foresighted mechanisms that can produce macroevolution, and what triggers them (assuming the process has not self-terminated). Several variants of front-loading would of course predict the existence of such things.

    BTW, I’ve been surprised at the relatively small number of comments here at UD. Perhaps I’m making a mountain out of a molehill, but isn’t this evidence huge news for the ID community? Unless I missed something, these mechanisms for SELF-MODIFYING GENETIC EXPRESSION were only suspected of existing, since changes were occurring far faster than what should be expected of RV+NS, but now we have actual physical evidence.

    Someone at Telicthoughts posted the same day I did:

    http://telicthoughts.com/trained-microbes/

    What’s hilarious is how the Darwinists commenting there are going through mental contortions to fit this evidence into their paradigm. They appear to miss the obvious fact that the foresighted mechanisms (or “endogenous adaptive mutagenesis” [EAM] as preferred by TelicThoughts) are physical evidence. This isn’t merely someone looking at the timeframes involved (less than a few hundred generations) and noting that this “should be beyond the capabilities of RV+NS”. They actually “measured the gene responses in each case.”

    The supposed capabilities of RV+NS to produce foresighted mechanisms are derived by “computer simulation”. Yet somehow, in reality, RV+NS is never observed to create such systems. Did everyone notice how fast the virtual simulated bugs developed this capability? In a “few thousand generations”. Compare that to Richard Lenski’s lab, which after 20 years and 44,000 generations produced a very minor change. Looks like the derived funnel for the search was tuned and balanced by intelligence very nicely… but biological reality apparently does not provide such balanced and consistent funnels.

    One Darwinist argues that “[t]he mechanism described in this research seems to be completely stimulus-response so it could be argued that the term foresight doesn’t even apply.” The foresight comes in knowing there will be many varying environments. This would be known by the intelligence preparing the organism. If we were to design custom nano-machines to live on Mars or another planet, we’d do the same thing. Also, I would not say that the “intelligence” is in the foresighted mechanism itself. The FM is produced by intelligence.

    Also, I noticed that many of the Darwinists on there are banned here at UD. Apparently for good reason.

  19. http://telicthoughts.com/train.....ent-195453

    To those of you making the tired argument that the evolutionary simulation described in the Science paper is invalid because design is ‘sneaking’ into the simulation via the design of the program itself, consider these points:

    What’s really tiring is Darwinists’ misunderstanding of the problem they’re facing, and their consequent misrepresentation of the debate… It’s NOT merely that a Designer of some type was known to be involved in making the simulation, or that a computer is involved. That’s the strawman that Darwinists always think ID proponents are claiming.

    The simulation has no foresight. The mutations are generated (pseudo)randomly, with no correlation to their effect on fitness (they could just as easily be generated from a true random source with no effect on the end results of the simulation). And so a blind process, driven by random mutations and subject to selection, leads to virtual biochemical networks that are capable of representing and predicting the environment.

    In order to get the results they desire, Darwinists design their simulations in a manner that does not reflect biological reality. Just how pseudo-random the mutations really are depends on the context of the system involved. For example, randomly combining/transposing/duplicating modular functional components is not the same as modifications that can be both invalid and deleterious. I’ve seen many GAs where all “random mutations” are perfectly valid additions (whether they’re beneficial fitness-wise is of course another matter). So the types of mutations allowed might be a chosen set.

    For example, if I designed a GA to find the shortest path to an object I would only give it valid options. I would not give it wacky options not involved with the search. I do not have access to the code for this simulation but I’d be very surprised if it was any different from previous efforts.
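    That design choice is easy to see in code. Here is a hedged sketch of the shortest-path GA just described (the grid, the move set, and the fitness weighting are all my own illustrative choices, not anyone’s actual simulation): every mutation is drawn from a curated set of valid moves, so the “random” search is confined from the outset to a designer-chosen region of option space.

```python
import random

GOAL = (3, 4)                                                   # target cell on a grid
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}  # curated set

def position(path):
    return (sum(MOVES[m][0] for m in path), sum(MOVES[m][1] for m in path))

def fitness(path):
    x, y = position(path)
    # Closer to the goal is better; a small length penalty favors economy.
    return -(abs(GOAL[0] - x) + abs(GOAL[1] - y)) - 0.01 * len(path)

def mutate(path):
    # Every mutation is a valid move from the curated set: no structurally
    # invalid or "wacky" option can ever enter the population.
    path = list(path)
    path[random.randrange(len(path))] = random.choice(list(MOVES))
    return path

random.seed(0)
pop = [[random.choice(list(MOVES)) for _ in range(7)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:25] + [mutate(p) for p in pop[:25]]  # keep elites, mutate copies

best = max(pop, key=fitness)
print(position(best))  # typically lands on or very near (3, 4)
```

    Nothing here is dishonest, but the move set, the fitness function, and the selection scheme were all chosen with the goal in mind; the debate is over whether biology comes with anything analogous.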

    The foresight comes in how the search is funneled.

    “Both the regression and the search bias terms require the transmission function to have ‘knowledge’ about the fitness function. Under random search, the expected value of both these terms would be zero. Some knowledge of the fitness function must be incorporated in the transmission function for the expected value of these terms to be positive. It is this knowledge — whether incorporated explicitly or implicitly — that is the source of power in genetic algorithms.” –Lee Altenberg, on the basics of GAs

    Intelligent agents have a desired target area. This target area can be a mass of various implementations to output the desired goal. Which leads to…

    The networks that evolve are so complicated that when the researchers look at them, as the Science Daily article says, they can at first “make little sense out of them.” As one of the experimenters notes, they bear no resemblance to the designs an engineer would produce to solve the same logic problems. To understand the networks, the experimenters have to pare them down to their essentials using virtual knockout experiments, and then figure out how the pared-down networks work by “careful inspection.”

    Old saying, but there are many ways to skin a cat. Meaning that there are many ways to meet a designed end goal. This end goal need not be very specific and narrow (one and only one option in an infinite search space). The end goal in search space can be generalized and thus have many varied options/implementations that fulfill it. These implementations may sometimes appear odd, but subjective aesthetic considerations do not take away from the objective, real design.

    Also, I’m curious how well the virtual bug’s implementation compares against the real-life implementation. Sometimes a GA will find an option that engineers never considered before, but usually a hand-written implementation is more efficient.

    This boils down to studying the networks as if they had just been discovered in nature.

    And this is where ID proponents object, since these simulations do not reflect biological reality except in a limited sense. We have no evidence to indicate that Darwinian processes operating within biology are capable of finding the same types of pathways within search space that GAs with active information are capable of finding.

    All of this makes it obvious that actual designs are not being introduced into the simulation in some subtle way.

    He’s asserting pre-designed specific results as being the only kind of design. Kind of like Dawkins’ Weasel program. Again (and again, ad infinitum, since this is a Darwinist): the GA is designed to search for a GENERALIZED target that is preset by the intelligence. Without active information being involved, nothing is produced that can qualify as CSI.

    And where is the active information?

    “To help them along, the researchers gave the bugs cues before feeding them, but the cues had to appear in just the right pattern to indicate that food was on its way. ”

    Like I said, the funnel had to be tuned and balanced properly by an intelligence. There’s likely other active information in the design of the simulation, but that’s the most blatant admission.

    This is similar to avida. I’ll just quote.

    In “The Evolutionary Origin of Complex Features,” published in Nature in 2003 by Lenski et al., the selective forces that have 100% probability affixed are those for various simple binary arithmetic functions, which are ultimately used to build the “equals” (EQU) function, and for the EQU function itself. What’s more, the more complex the function, the greater the reward given to the digital organisms for it. There is no analogy for such selective forces in nature. Nature doesn’t care whether something is more or less functionally complex; it only cares whether it can survive in a particular environment. And what happens when no step-by-step rewards are given for functional complexity? An article on Avida in Discover magazine last year (Feb. 2005) stated, “When the researchers took away rewards for simpler operations, the organisms never evolved an equals program.” By building rewards into the system — i.e. providing a highly constrained fitness function — the programmers gave the system a purpose. Hence its creative power:
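    The effect of such built-in stepwise rewards can be demonstrated with a minimal toy experiment (my own illustration, not Avida’s actual instruction set): the same mutate-and-select loop that quickly finds a 32-bit target when every correct bit is rewarded never finds it when only the complete target scores.

```python
import random

N = 32                  # genome length (bits); target is all ones
GENERATIONS = 2000

def evolve(fitness, seed=1):
    """Simple (1+1) hill climber: flip one bit, keep the child if no worse."""
    random.seed(seed)
    genome = [0] * N
    for gen in range(GENERATIONS):
        child = list(genome)
        child[random.randrange(N)] ^= 1
        if fitness(child) >= fitness(genome):
            genome = child
        if sum(genome) == N:
            return gen                      # generations to reach the target
    return None                             # target never found

stepwise    = evolve(lambda g: sum(g))                   # reward each correct bit
all_or_none = evolve(lambda g: 1 if sum(g) == N else 0)  # reward only completion

print(stepwise)      # some modest number of generations
print(all_or_none)   # None: with no gradient, the walk must beat ~2^-32 odds
```

    With the stepwise fitness the climber rides a built-in gradient straight to the target; remove the intermediate rewards and the identical mutation machinery wanders aimlessly, which is the Discover result quoted above in miniature.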

    A fan of EAM (endogenous adaptive mutagenesis), such as Joy, might argue that it is a mistake to model the mutations as random because an organism will actively generate specific mutations to enhance its fitness. Unfortunately for Joy, there is little or no evidence to suggest that this happens.

    Errr…hello? This is exactly what the physical evidence showed: a mechanism other than random mutations. The real question is whether this mechanism can be produced by Darwinian processes WITHIN biological reality. Here are several interpretations.

    Darwinist
    RV+NS -> (produces) foresighted mechanism -> targeted mutations

    ID #1
    intelligence -> foresighted mechanism -> targeted mutations

    ID #2
    OOL: System is designed to evolve with active information (modular components, etc.) incorporated only at this time. -> RV+NS -> foresighted mechanism -> targeted mutations

    What was not evident from physical evidence was how or whether biological processes could produce the Darwinist interpretation or ID #2. The real question they answered was whether they could design a software GA that could generate mechanisms that “can process information from the environment to anticipate future events”.

    Further, the simulations show that random mutations suffice. Guidance, whether endogenous or exogenous, is not needed. So not only is there no evidence for EAM in this study; it’s also explanatorily superfluous.

    As already noted, guidance was explicitly admitted within the article.

    If the mutations are essentially random and the actual designs haven’t ‘sneaked’ into the simulation inadvertently,

    I’ve already noted his misunderstanding.

    and if EAM isn’t part of the picture,

    I haven’t read enough about this EAM concept to comment. But it’s probably close to my own concept for foresighted mechanisms.

    then the critic’s remaining option is to criticize the “NS” part of the simulation — natural selection.

    The critic must argue not only that the simulation mismodels selection — he must argue that it does so in a way that entirely accounts for the designs produced by the simulation. So far no one in this thread has pointed to a specific defect in the way the simulation models selection, much less shown how the defect gives rise to the designs.

    Let’s see what happens if those virtual bugs are never given intelligent cues.

    Commenter johnnyb complained earlier that

    The problem is that in order for evolution to work, the “organism’s parameters”, specifically the ones which are necessary for the simulation, must both (a) exist, and (b) be easily evolvable… I have never seen an evolutionary algorithm devise its own parameters to tune!

    His complaint ignores the fact that the researchers are not attempting to model the origin of life or the long-term evolution of bacteria. They are starting with a model organism and simulating how its responses evolve as it is subjected to patterned environmental perturbations. To say this is illegitimate is like claiming that we can’t trust aerodynamic simulations because they don’t explain the origin of the atmosphere.

    As I’ve already noted with ID #2, it’s possible such a system could arise based upon limited front-loading during OOL. So ID would only be relevant to OOL. Yes, OOL and the subsequent evolution are distinct things but Darwinists cannot separate the two forever. As noted by a fellow Darwinist:

    “I think it is disingenuous to argue that the origin of life is irrelevant to evolution. It is no less relevant than the Big Bang is to physics or cosmology. Evolution should be able to explain, in theory at least, all the way back to the very first organism that could replicate itself through biological or chemical processes. And to understand that organism fully, we would simply have to know what came before it. And right now we are nowhere close. I believe a material explanation will be found, but that confidence comes from my faith that science is up to the task of explaining, in purely material or naturalistic terms, the whole history of life. My faith is well founded, but it is still faith.” — Gordy Slack, a Darwinist

    johnnyb also rightly pointed out that “I have never seen an evolutionary algorithm devise its own parameters to tune!” GAs are designed with foresight, with an end goal in mind.

    The simulations show how the model organisms can evolve solutions to logic problems via random mutations and selection.

    And that’s ALL it shows. Not that such can occur in biological reality. In fact, I would have been surprised if they had not been able to create a GA that was tuned and balanced to meet their goal.

    This alone is a serious blow to IDers and creationists who have heretofore argued that new information cannot be created by RM + NS.

    Nonsense. He says “new information”, shifting the goalposts to make them a mile wide. It’s CSI, not “new information”. That’s a typical Darwinist ploy, substituting easy goals for the hard, reality-based goals that ID proponents have been stating for years.

    Some critics have argued that the problem lies not in the way the simulation models selection, but in something more fundamental. They claim that any simulation of RM + NS is doomed to fail, because computer code is designed and therefore cannot accurately model an undirected process such as RM + NS.

    I’d like to meet the ID proponents who make such arguments, if they exist, and bop them on the head for making such a foolish argument.

    But if you make that argument about evolutionary simulations, then why not make the same argument about weather simulations? Would you claim that weather simulations are inherently unreliable because they model an undirected process using designed software?

    I’d hope not.

    Funny how we don’t hear ID supporters making this complaint. You do hear some global warming skeptics complaining that existing climate models are unreliable, but you don’t hear them claiming that accurate models are impossible in principle. Yet that is exactly what these critics of evolutionary simulations do.

    Again, who is saying stuff like this?

    My challenge to critics of the simulation described in the Science paper:

    1. If you hold that evolutionary simulations are inaccurate in principle because they use designed code (and designed computers) to simulate an undirected process, then explain to us why this objection does not apply to simulations of other undirected processes, such as the weather.

    Already shown to be unnecessary to “explain”.

    2. If you hold that the problem is with the way that the researchers are modeling selection, then show us specifically where the error lies.

    I’d like the code. But even the article indicates where the error lies.

    3. If you can’t do either of these, then sit down, ponder, and come to grips with the fact that RM + NS can fashion solutions that solve logic problems, model the environment, make predictions, and allow organisms to change preemptively.

    Again, given intelligence being involved, I’d be surprised if RM+NS could not meet the goal.

    That’s not to say that GAs are limitless. In fact, the observed limitations of GAs are one of the best pieces of evidence against Darwinism there is.

  20. Perhaps I’m making a mountain out of a molehill, but isn’t this evidence huge news for the ID community? Unless I missed something these mechanisms for SELF-MODIFYING GENETIC EXPRESSION were only suspected of existing, since changes were occurring far faster than what should be expected of RV+NS, but now we have actual physical evidence.

    To be honest, I think you are making a mountain out of a molehill. Firstly, self-modifying gene expression has been known about and understood for a long time – look up the lac operon. It was basic undergrad stuff 20 years ago.

    What’s interesting here is, firstly, that the cue E. coli is using is anticipatory – which is cool but in principle not difficult to evolve – but also that the researchers used the simulations to show that the evolution of a response to a different set of cues should be fairly general. This is even cooler, but really needs to be backed up by further experiment.

    P.S. I’m curious that you have put up a long reply here to comments on TT. Aren’t you allowed to comment over there?

  21. 21

    Bob O’H comments:

    - which is cool but in principle not difficult to evolve -

    To which I ask you, Bob: which foundational principle of science are you referring to?

  22. Firstly, self-modifying gene expression has been known about and understood for a long time – look up the lac operon. It was basic undergrad stuff 20 years ago.

    I left out a big FORESIGHTED in that one sentence, which has of course been the focus of the entire article, so I thought it went without saying… Foresightedness caught in the act is what got me excited. That’s a direct prediction of an ID-compatible hypothesis.

    What’s interesting here is firstly that the cue E. coli is using is anticipatory – which is cool but in principle not difficult to evolve

    You might be correct. Shortly after writing the last comment it occurred to me that there have been very few details to work with, and that the basis for my assertions may be incorrect, but I was away from any computer till now.

    I’d “presumed” that as a classification “foresighted mechanisms” in principle should be very difficult for Darwinian processes to produce in biological reality. But what if there is a subset of foresighted mechanisms that are easily within the Edge? I figured someone would jump on that obvious lapse.

    Then again, I’m not sure if we even know at this time how the actual biological mechanism functions nor how complex it really is. The mechanisms of the virtual bug may easily evolve (let’s presume no active information for the moment) but the mechanisms of biology may not. More research needed. In any case, after thinking about this further I’ve moved to “cautious optimism”.

    but also that the researchers used the simulations to show that the evolution of a response to a different set of cues should be fairly general. This is even cooler, but really needs to be backed up by further experiment.

    “Cues” and “patterns” were never defined exactly.

    “the cues had to appear in just the right pattern”

    “This happened for every pattern of cues that the researchers tried.”

    I originally interpreted them as meaning that the other patterns referred to other food sources. Your interpretation that “the evolution of a response to a different set of cues should be fairly general” would make those two sentences conflict. It could also be that once the logic-solving foresighted mechanism had evolved via hand-holding, it could respond “for every pattern of cues that the researchers tried. The feeding response of these gastronomically savvy bugs peaked just when food was offered.” I don’t know.

    Again, the simulation would have to be examined to see what sort of active information is incorporated.

    I’m curious that you have put up a long reply here to comments on TT. Aren’t you allowed to comment over there?

    I could but I was hoping to jumpstart the conversation here.

  23. Patrick

    Some critics have argued that the problem lies not in the way the simulation models selection, but in something more fundamental. They claim that any simulation of RM + NS is doomed to fail, because computer code is designed and therefore cannot accurately model an undirected process such as RM + NS.

    I’d like to meet the ID proponents who make such arguments, if they exist, and bop them on the head for making such a foolish argument.

    Didn’t GilDodgen imply something like that here and here at UD?

  24. 24

    The only solution to the funding issue is private funding. The only way you will get enough funding from private sources is if you are conducting research that is more than just academically interesting. You have to start working on problems that affect everyone. Problems like diseases (AIDS comes to mind) and the energy crisis.

  25. To me this is the pertinent text:

    When the researchers examined a number of fit virtual bugs, they could at first make little sense out of them. “Their biochemical networks were filled with seemingly unnecessary components,” said Tagkopoulos. “That is not how an engineer would design logic-solving networks.” Pared down to their essential elements, however, the networks revealed a simple and elegant structure. The researchers could now trace the different sequences of gene and protein interactions organisms used in order to respond to cues and anticipate mealtimes. “It gave us insights into how simple organisms such as bacteria can process information from the environment to anticipate future events,” said Tagkopoulos.

    This is a typical result of an evolutionary algorithm… and, as they point out, this is NOT how an engineer would build such a network. The “pared down” version, however, is what you would expect an engineer/designer to come up with, and, I rather strongly suspect, is the type of network that biological experimenters arrive at when they start examining organisms for such genetic networks. Bottom line: the networks aren’t the product of chance.
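    The “virtual knockout” procedure the article describes can be sketched in a few lines (the toy network below is entirely made up, not the one from the paper): delete each node in turn and retain only the nodes whose removal changes the input/output behavior. What survives is the pared-down essential core.

```python
from itertools import product

def network(a, b, knocked_out=frozenset()):
    def node(name, value):          # a knocked-out node always outputs 0
        return 0 if name in knocked_out else value
    n1 = node("n1", a | b)          # seemingly unnecessary component
    n2 = node("n2", a ^ n1)         # more clutter, feeds nothing used
    n3 = node("n3", a & b)          # the part that actually matters
    return n3

def essential_nodes(net, nodes):
    # Compare the full truth table with and without each knockout.
    inputs = list(product([0, 1], repeat=2))
    base = [net(a, b) for a, b in inputs]
    return [n for n in nodes
            if [net(a, b, knocked_out=frozenset({n})) for a, b in inputs] != base]

print(essential_nodes(network, ["n1", "n2", "n3"]))  # ['n3']
```

    Running the full truth table before and after each knockout is the brute-force version of what the researchers then had to finish by “careful inspection”.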

  26. This is a typical result of evolutionary algorithm…..and, as they point out, this is NOT how an engineer would build such a network.

    Doesn’t this suggest that the bacteria were not designed?

  27. Hmmm, Bob,
    They don’t understand certain features, so the old “it must be junk” reasoning comes into play. How about this: there are purposes in those features that we haven’t realized yet? Or have you forgotten the lessons of ENCODE so swiftly?

    http://www.genome.gov/25521554

    BETHESDA, Md., Wed., June 13, 2007 – “An international research consortium today published a set of papers that promise to reshape our understanding of how the human genome functions. The findings challenge the traditional view of our genetic blueprint as a tidy collection of independent genes, pointing instead to a complex network in which genes, along with regulatory elements and other types of DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood.”

    What are you holding, Bob? You say you’ve got four aces! I say you’ve got squat! No numerous transitional species! No beneficial mutations that withstand scrutiny! And a complexity in the simplest life that far surpasses man’s ability to produce, not to mention that the simplest life is irreducibly complex at approximately 500 genes, according to Craig Venter.

    Want to make a bet with your four aces, Bob? I bet you that the principle of Genetic Entropy will be strictly obeyed in “fitness tests” for every bacterium that has “adapted away” from a parent bacterium in this experiment they performed.

  28. sparc,

    I knew someone would bring Gil’s comments up. I’ve always interpreted him as saying that the simulations do not reflect biological reality. If he’s not saying that, then he does need to be corrected.

    bob,

    Doesn’t this suggest that the bacteria were not designed?

    First of all, define the category of design. Directly designed? No. Designed by a guided tool? The designed information was the active information in the automated trial-and-error search. The organization of the program itself may also be a factor. Second, they’re looking at virtual bugs derived via intelligent search, not at biology.

    Third, note how quickly the simulation produces results of any kind in comparison to experiments with biology. It’s a guided search. The answer is not “biology needs more deep mystical/magical time,” since all evidence (GA simulations or otherwise) shows that without intelligent guidance the search will not find CSI, or even some systems of middling complexity. All evidence shows nature does not provide constraints that are balanced and consistent enough to derive all types of systems (macroevolution). Nor do there appear to be environmental factors that provide a selection filter for searching for many types of natural systems.
