
Proteins Fold As Darwin Crumbles

A Review Of The Case Against A Darwinian Origin Of Protein Folds By Douglas Axe, Bio-Complexity, Issue 1, pp. 1-12

Proteins adopt higher-order structures (e.g., alpha helices and beta sheets) that define their functional domains.  Years ago Michael Denton and Craig Marshall reviewed this higher structural order in proteins and proposed that protein folding patterns could be classified into a finite number of discrete families whose construction might be constrained by a set of underlying natural laws (1).  In his latest critique, Biologic Institute molecular biologist Douglas Axe raises the ever-pertinent question of whether Darwinian evolution can adequately explain the origin of protein folds, given the vast search space of possible sequence combinations for even a moderately large protein of, say, 300 amino acids.  Axe begins by introducing his readers to the sampling problem: given the postulated maximum number of distinct physical events that could have occurred since the universe began (10^150), we cannot assume that evolution has had enough time to sample the 10^390 possible amino-acid combinations of a 300-amino-acid protein.
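For readers who want to check the scale of these numbers, the comparison is easy to reproduce. This is only an order-of-magnitude sketch; the round figures (10^150 events, 300 residues, a 20-letter amino-acid alphabet) are the article's, not independent estimates:

```python
# Order-of-magnitude check on the sampling problem described above.
sequence_space = 20 ** 300            # all possible 300-residue sequences
max_events = 10 ** 150                # generous cap on distinct physical events

# Python's arbitrary-precision integers let us compare these exactly.
exponent = len(str(sequence_space)) - 1
print(exponent)                       # 390: the space spans ~10^390 sequences

shortfall = len(str(sequence_space // max_events)) - 1
print(shortfall)                      # even a maximal search leaves ~10^240 uncovered
```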

The battle cry often heard in response to this apparently insurmountable barricade is that even though the available probabilistic resources would not allow a blind search to stumble upon any given protein sequence, the chances of finding a particular protein function might be considerably better.  Countering this dismissal, Axe notes that proteins must meet very stringent sequence requirements if a given function is to be attained.  And size is important: enzymes, for example, are large in comparison to their substrates, and protein structuralists have argued that this size is crucial for ensuring the stability of protein architecture.

Axe raises the bar of the discussion by pointing out that enzyme catalytic functions very often depend on more than just their core active sites.  In fact, enzymes almost invariably contain regions that prep, channel and orient their substrates, as well as a multiplicity of co-factors, in readiness for catalysis.  Carbamoyl phosphate synthetase (CPS) and the proton-translocating synthase (PTS) stand out as favorites amongst molecular biologists for showing how enzyme complexes can coordinate such processes simultaneously.  Each of these complexes contains 1400-2000 amino-acid residues distributed amongst several proteins, all of which are required for activity.

Axe employs a relatively straightforward mathematical rationale for assessing the plausibility of finding novel protein functions through a Darwinian search.  Using bacteria as his model system (chosen for their relatively large population sizes), he shows that a culture of 10^10 bacteria passing through 10^4 generations per year over five billion years would produce a maximum of 5×10^23 novel genotypes.  This number represents an upper bound on the number of new protein sequences, since many of the differences in genotype would not generate "distinctly new proteins".  Extending this further, a blind search could plausibly find a novel function requiring a 300-amino-acid sequence (20^300 possible sequences) only if that function could be realized in roughly 10^366 different ways (20^300 / 5×10^23).
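The arithmetic behind Axe's bound can be reproduced in a few lines. All figures below are the review's round numbers, not independent estimates:

```python
# Reproducing the upper bound quoted above.
population = 10 ** 10        # bacteria in the culture
gens_per_year = 10 ** 4      # generations per year
years = 5 * 10 ** 9          # five billion years

novel_genotypes = population * gens_per_year * years
print(novel_genotypes == 5 * 10 ** 23)   # True: the quoted upper bound

# For a blind search of 5x10^23 samples to plausibly hit a function in a
# 20^300 space, functional sequences would need to number roughly 10^366:
ways_needed = 20 ** 300 // novel_genotypes
print(len(str(ways_needed)) - 1)         # 366
```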

Ultimately, we find that proteins do not tolerate this extraordinary level of "sequence indifference".  High-profile mutagenesis experiments on beta-lactamases and bacterial ribonucleases have shown that functionality is decisively eradicated when a mere 10% of amino acids are substituted in conserved regions of these proteins.  A more in-depth breakdown of data from a beta-lactamase domain and the enzyme chorismate mutase has further reinforced the conclusion that very few protein sequences can actually perform a desired function; so few, in fact, that they are "far too rare to be found by random sampling".

But Axe’s evaluation does not end here.  He further considers the possibility that disparate protein functions might share similar amino-acid identities, and that the jump between functions in sequence space might therefore be realistically achievable through random searches.  Sequence alignment studies between different protein domains do not support such an escape from the sampling problem.  And while the identification of a single amino-acid conformational switch has been heralded in the peer-reviewed literature as a convincing example of how changes in folding can occur with minimal adjustments to sequence, the resulting conformational variants turn out to be unstable at physiological temperatures.  Moreover, the change has only been achieved in vitro and most probably does not meet the rigorous demands for functionality that play out in a true biological context.  It also requires 21 other amino-acid substitutions to be in place before the conformational switch is observed.

Axe closes his compendious dismantling of protein evolution by exposing the shortcomings of modular assembly models that purport to explain the origin of new protein folds.  The highly cooperative nature of structural folds means that stable structures tend to form all at once at the domain (tertiary structure) level rather than at the fold (secondary structure) level of the protein.  Context is everything.  Indeed, experiments have upheld the assertion that binding interfaces between different forms of secondary structure are sequence-dependent (i.e., non-generic).  Consequently, the much-anticipated "modular transportability of folds" between proteins is highly unlikely.

Metaphors matter in scientific argumentation, and Axe’s story of a random search for gemstones dispersed across a vast multi-level desert serves him well in illustrating the improbability of a Darwinian search for novel folds.  Axe’s own experience has shown that reticence towards accepting his probabilistic argument stems not from anything unscientific in what he has to say, but from deeply held prejudices against the conclusion that naturally follows.  Far from a house of cards on slippery foundations, the case against the neo-Darwinian explanation is an edifice built on a firm substratum of scientific authenticity; its critics had better take note.

Read Axe’s paper at: http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1

Further Reading

  1. Michael Denton, Craig Marshall (2001), Laws of form revisited, Nature Volume 410, p. 417

365 Responses to Proteins Fold As Darwin Crumbles

  1. Robert Deyes:

    I have already referred to this very good paper in many recent posts. It really deserved a thread of its own. So, thank you for drawing attention to it.

    The basic information contained in protein domains remains IMO the strongest and most detailed model for ID. I hope this paper can help us have some in-depth discussion about that.

  2. Dear All,

    We have seen it proven many times that there are insufficient physical events in the entire known universe to explore the search space of a single 300-amino-acid protein.

    Now I have a question:

    Can an intelligent agent, confined fully within this known universe, design life (exactly as we know it) from scratch? The agent knows all physical and chemical laws and has practically unlimited government funding for this project. The ideas of DNA, amino acids, proteins, lipid walls, ATP, etc., however, are yet to be discovered.

    Is there a research program that will terminate with a complete biosphere within a few billion years? Are there searches capable of generating the amount of biological information around us within 10 billion years or so?

    Or would even an intelligent search need more than 10^150 calculations, so that biological information cannot originate from within this universe?

  3. Alex73:

    Good question.

    And here is my answer:

    No: an intelligent search has a huge advantage over an unguided search, even if the intelligent agent does not know the final detailed answer in advance.

    First of all, a conscious intelligent agent is guided by his conscious representations of reality. IOW:

    a) He is aware of purposes

    b) He perceives reality and creates intelligent maps of it

    c) He can build explanatory models

    d) He can make reasonable inferences

    e) He is guided by innate cognitive principles (logic, mathematics)

    f) He can recognize function and measure it

    And so on. None of these are assumptions: they are simply considerations derived from direct observation of how design is realized in us human conscious intelligent beings.

    That’s why humans can easily output original CSI, and build machines, and elaborate complex cognitive theories like QM, and write software, and output poetry and dramas, and so on.

    None of that would be in the reach of simple unconscious processes.

    So, intelligent consciousness really makes the difference: it’s the whole magic of knowledge and of guided action.

  4. gpuccio,

    Thanks for your reply. I agree, human intelligence has an enormous advantage over blind search.

    But is it enough? How many experiments or floating point operations in a quantum model would be required to design a bacterium from totally, absolutely, scratch? How many interactions have to be examined between the gazillion chemical components of a human body? Just think about how much time it takes for pharmaceutical companies to test the effects of a single molecule. What about an entire biosphere, where all inhabitants can potentially interact?

    To be more precise: can we give an estimate for the quantity of required 1-bit (yes/no) decisions that will produce the functional information around us? Given the limitations of the physical world, was there enough time and computational resource to make all of these decisions?

  5. Alex73, you asked a very good question:

    “Can an intelligent agent, confined fully within this known universe, design life (exactly as we know it) from scratch? The agent knows all physical and chemical laws and has practically unlimited government funding for this project. The ideas of DNA, amino acids, proteins, lipid walls, ATP, etc., however, are yet to be discovered. ... Are there searches capable of generating the amount of biological information around us within 10 billion years or so?”

    In my very unqualified opinion: unless quantum computers greatly increase computing capacity past what is maximally possible for computers built of “particles”, not only is a comprehensive search of all relevant biological sequences impossible for random processes, but the search is also impossible “for an intelligent agent, confined fully within this known universe”:

    Hopefully one of the more qualified computer programmers on UD can comment on this to give us a better idea of just how hard such a search would be for a “confined” intelligent agent armed with an ideal supercomputer.

    notes:

    Book Review – Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
    Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren’t chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.
    So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it’s a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
    http://www.fourmilab.ch/docume.....k_726.html
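The excerpt’s arithmetic is easy to verify. A quick sketch, using only the excerpt’s own figures (the 582,970 bp genome and the ~500-bit random-search bound):

```python
# Checking the excerpt's claim: a 582,970 bp genome at two bits per
# nucleotide (four possibilities) is indeed "about one megabit".
genome_bp = 582_970
bits = 2 * genome_bp
print(bits)                      # 1165940 bits, i.e. ~1.17 megabits

# versus the ~500 bits the excerpt allows a universe-wide random search:
print(bits // 500)               # the genome exceeds that bound ~2300-fold
```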

    In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, which was 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it was estimated to take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape.

    As well armed with all the known laws of protein folding our search is slow:

    “Blue Gene’s final product, due in four or five years, will be able to “fold” a protein made of 300 amino acids, but that job will take an entire year of full-time computing.” Paul Horn, senior vice president of IBM research, September 21, 2000
    http://www.news.com/2100-1001-233954.html

    “SimCell,” anyone?
    “Unfortunately, Schulten’s team won’t be able to observe virtual protein synthesis in action. Even the fastest supercomputers can only depict such atomic complexity for a few dozen nanoseconds.” – cool cellular animation videos on the site
    http://whyfiles.org/shorties/230simcell/

    Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single protein molecule:

    A Few Hundred Thousand Computers vs. A Single Protein Molecule – video
    http://www.metacafe.com/watch/4018233

    As a sidelight to this, the complexity of computing the actions of even simple atoms quickly exceeds the capacity of our supercomputers of today:

    Delayed time zero in photoemission: New record in time measurement accuracy – June 2010
    Excerpt: Although they could confirm the effect qualitatively using complicated computations, they came up with a time offset of only five attoseconds. The cause of this discrepancy may lie in the complexity of the neon atom, which consists, in addition to the nucleus, of ten electrons. “The computational effort required to model such a many-electron system exceeds the computational capacity of today’s supercomputers,” explains Yakovlev.
    http://www.physorg.com/news196606514.html

  6. also of note:

    Possibilities and Limitations of Quantum Computing
    Excerpt: Together with co-authors, particularly Harry Buhrman, de Wolf proved various strong limitations on quantum computers. For most problems they are not significantly faster than classical computers. These limitations were proved by reducing complexity-theoretic questions to algebraic questions about the degrees of multivariate polynomials. Sufficient proof of strong (often optimal) lower bounds on the time a quantum algorithm needs to compute a Boolean function can be given by proving a lower bound on the degree of an n-variate polynomial approximating that function.

    Moreover, de Wolf also contributed to the discovery of some quantum algorithms and protocols that outperform their classical counterparts. One example of this is a ‘quantum fingerprinting’ scheme. It allows two separated parties to compare large chunks of data more efficiently than classical computers. By assigning small quantum states to long classical strings, the amount of data that has to be exchanged for this operation can be exponentially reduced. In the future this technique could for example be used to create digital autographs.
    http://www.ercim.eu/publicatio....._wolf.html

  7. Alex73:

    can we give an estimate for the quantity of required 1-bit (yes/no) decisions that will produce the functional information around us?

    No, we can’t, because that depends critically on many variables we don’t know:

    a) How much the agent knows in advance (you hypothesized that “the agent knows all physical and chemical laws”; that’s a vague statement, and indeed we don’t know what the agent knows in advance and what he has to discover).

    b) What kind of “search” the agent implements from time to time, and how much of it is algorithmic and how much is random search.

    c) How efficient the agent is in deriving new knowledge from the results of each search.

    That is only an example. Many other points could be added, some of them more philosophical, some more technical.

    I would just comment that it is true, we humans have been up to now particularly inefficient in the field of new biological design, but please bear in mind two important things:

    1) We are just beginning, and can improve

    2) The original designer of biological information is/was probably much better than we are at it.

    IOW, it is certainly not only, or even mainly, a question of computing power.

    Moreover, reflecting on the formal problem of “how much did the designer know in advance?”, I would like to add a further comment.

    From what we know, and the facts we have, I am convinced that the greatest “leap” in information content in biological history was OOL. Indeed, I don’t believe there is any objective data to hypothesize that OOL was a gradual process. I am truly convinced that it was rather sudden (whether that means a few million years or a few minutes, it’s really difficult to hypothesize).

    That means, with reference to protein domains, that about half of all known protein domains were already implemented at OOL, that is, rather suddenly.

    A similar leap can be observed at the origin of metazoa body plans. The Ediacara and Cambrian explosions are not very compatible with a gradual search, be it a random or an intelligent one.

    On the contrary, other transitions are certainly more gradual, and could well speak for an intelligent gradual search.

    So, your questions are very interesting, but I doubt that at present they can be answered. But they are questions which, in principle, can and should be approached by scientific research. What we need are:

    a) more facts

    b) better reasoning on facts

  8. To all:

    I have found this very interesting recent paper:

    “Sequence space and the ongoing expansion of the protein universe”

    Inna S. Povolotskaya & Fyodor A. Kondrashov

    Nature, Vol 465| 17 June 2010| doi:10.1038/nature09105

    The abstract:

    “The need to maintain the structural and functional integrity of an evolving protein severely restricts the repertoire of acceptable amino-acid substitutions. However, it is not known whether these restrictions impose a global limit on how far homologous protein sequences can diverge from each other. Here we explore the limits of protein evolution using sequence divergence data. We formulate a computational approach to study the rate of divergence of distant protein sequences and measure this rate for ancient proteins, those that were present in the last universal common ancestor. We show that ancient proteins are still diverging from each other, indicating an ongoing expansion of the protein sequence universe. The slow rate of this divergence is imposed by the sparseness of functional protein sequences in sequence space and the ruggedness of the protein fitness landscape: ~98 per cent of sites cannot accept an amino-acid substitution at any given moment but a vast majority of all sites may eventually be permitted to evolve when other, compensatory, changes occur. Thus, ~3.5 × 10^9 yr has not been enough to reach the limit of divergent evolution of proteins, and for most proteins the limit of sequence similarity imposed by common function may not exceed that of random sequences.”

    The paper is freely available online, and I think it can certainly contribute to the present discussion.

    The important point IMO is that many proteins were already present in LUCA (or very early), and that they have diverged in sequence while maintaining their function. Another important point is the following:

    “As a protein evolves along a rugged fitness ridge, some previously forbidden amino-acid substitutions at a site become acceptable and some previously acceptable substitutions become forbidden, owing to compensatory substitutions at other sites of the same protein or its interaction partners.”

    The role of compensatory substitutions is very important, and it explains how some proteins, for instance some myoglobins, can be very different at the level of primary sequence, and yet retain the same 3D structure and function.

  9. One comment to the above could be that Durston’s method, using the Shannon entropy of the individual AA sites in functional protein clusters, remains probably the best way of measuring empirically the explored functional space of a protein cluster.
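For what it’s worth, the per-site entropy idea can be sketched in a few lines. The alignment below is a made-up toy, and this captures only the spirit of Durston’s measure (entropy reduction relative to an unconstrained site), not his published procedure:

```python
# Per-site Shannon entropy over a toy alignment of one functional cluster.
from collections import Counter
from math import log2

alignment = [            # hypothetical aligned sequences, invented for illustration
    "MKVL",
    "MKIL",
    "MRVL",
    "MKVF",
]

ground = log2(20)        # entropy of a fully unconstrained site (20 amino acids)

def site_entropy(column):
    n = len(column)
    return -sum(c / n * log2(c / n) for c in Counter(column).values())

# Functional information per site = reduction from the unconstrained null state
fits = [ground - site_entropy(col) for col in zip(*alignment)]
print(sum(fits))         # total functional bits for this toy cluster
```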

  10. I guess my question in response to Dat’s question would be: are complex organisms just amalgamations of single-celled organisms?

  11. “The important point IMO is that many proteins were already present in LUCA (or very early)…”

    Just curious. Where do you place LUCA on a timeline, and where do you place OOL?

  12. gpuccio I noticed this in the abstract of your paper:

    “We formulate a computational approach to study the rate of divergence of distant protein sequences and measure this rate for ancient proteins, those that were present in the last universal common ancestor. We show that ancient proteins are still diverging from each other, indicating an ongoing expansion of the protein sequence universe.”

    Not to belittle computer models too much, but as Gil has pointed out repeatedly, computer models are only as good as their mapping to the real world, and the potential to be severely led astray by a computer model that neglects what is actually possible in the real world is great. That is, the actual restrictions on proteins diverging may be far greater than they have assumed in the parameters of their model:

    As is pointed out here by Dr. Behe:

    The proteins that are actually found in life “for evolution to actually work with” are shown to be highly constrained in their ability to evolve into other proteins:

    Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe – Oct 2009
    Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution. ...
    A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses.
    http://www.evolutionnews.org/2.....f_tim.html

    Severe Limits to Darwinian Evolution: – Michael Behe – Oct. 2009
    Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on. ... Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
    http://www.evolutionnews.org/2......html#more

    As well, I have strong reason to believe that functional protein sequences are far more rare than even Dr. Axe’s figure of 1 in 10^77 would indicate:

    Proteins have now been shown to have a “cruise control” mechanism, which works to “self-correct” the integrity of the protein structure against any random mutations imposed on it.

    Proteins with cruise control provide new perspective:
    “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.”
    http://www.princeton.edu/main/...../60/95O56/

    Cruise control? The equations of calculus involved in achieving even a simple process-control loop, such as a dynamic cruise-control loop, are very complex. In fact, it seems readily apparent to me that highly advanced algorithmic information must reside in each individual amino acid used in a protein in order to achieve such control. This gives us clear evidence that far more functional information resides in proteins than meets the eye. For a sample of the equations that must be dealt with to “engineer” even a simple process-control loop like cruise control for a single protein, please see the following site:

    PID controller
    A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal.
    http://en.wikipedia.org/wiki/PID_controller
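To make the process-control-loop idea concrete, here is a minimal discrete PID loop. The gains, time step and one-line "plant" are arbitrary toy values with no biological meaning:

```python
# A minimal proportional-integral-derivative (PID) control loop.
def pid_step(error, state, kp=0.5, ki=0.05, kd=0.1, dt=1.0):
    """One PID update; returns (control output, new state)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a toy system toward a setpoint of 1.0 and watch the error vanish.
setpoint, value, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(setpoint - value, state)
    value += u                        # crude stand-in for the controlled process
print(abs(setpoint - value) < 0.05)   # True: the loop has settled
```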

    It is in realizing the staggering level of engineering required to achieve “cruise control” for each individual protein that it becomes apparent that even Axe’s 1 in 10^77 estimate for finding specific functional proteins within sequence space may be far too generous, since the individual amino acids themselves would have to be embedded with a highly advanced mathematical language, adding a further severe constraint, on top of the 1 in 10^77 constraint, on finding exactly which sequences of amino acids will perform a specific function.

    Though the authors of the paper tried to put an evolution-friendly spin on the “cruise control” evidence, finding an advanced process-control loop at such a basic molecular level, before natural selection even has a chance to select for any morphological novelty, is very much to be expected as an Intelligent Design/Genetic Entropy feature, and is in fact a very constraining thing for the amount of variation we can expect from proteins in the first place.

    As well, gpuccio, I’ve seen this claim that an extraordinarily high percentage of “parent” proteins was required to be present at the OOL, but is not this number of proteins itself derived from the rather dubious fact that evolutionists could not account for the origination of certain proteins at certain levels of life and thus “pushed them all back” to the “former age of miracles” at the OOL?

  13. Don’t know how relevant this is, but the first multicellular life seems to have been pushed back a billion years or so. It may have some bearing on the duration of the Cambrian explosion.

    http://news.yahoo.com/s/afp/20.....0630232304

  14. BA:

    I am not sponsoring the study, just bringing it to our attention for discussion.

    And anyway, this computational model is not a mere simulation, it is applied (I can’t really say how reliably) to real biological data.

  15. BA:

    As well gpuccio, I’ve seen this claim for a extraordinarily high percent for “parent” proteins required to be present at the OOL, but is not this number of proteins itself derived from the rather dubious fact that evolutionists could not account for the origination of certain proteins at certain levels of life and thus they “pushed them all back” to the “former age of miracles” at the OOL?

    No, I don’t think so. I think it is derived from the distribution of those proteins we observe in current living beings, and from the analysis of homologies between sequences, 3D structures and functions.

  16. sorry gpuccio, I meant the paper you referenced, but if you do sponsor, where do I send my application 8)

    when you say:

    “it is applied (I can’t really say how reliably) to real biological data.”

    Do they not merely find similar AA sequences in proteins, with the gargantuan assumption that evolutionary processes can get from point a to point b between the similar sequences, even though no one has demonstrated in the real world that this is possible for even a single protein, as Dr. Behe pointed out in his review of Thornton’s work?

  17. Petrushka (#11):

    I have no special reason to put LUCA or OOL at any special place in the timeline. I just accept what is usually considered the best assumption:

    Wikipedia:

    “The LUA is estimated to have lived some 3.5 to 3.8 billion years ago (sometime in the Paleoarchean era)”

    About OOL, I have no reason to believe that it was much earlier than that. Indeed, I think OOL was probably rather sudden and not gradual. So, it could just start with LUCA or something similar in a relatively short time.

    That’s just my idea, but I don’t believe that the gap between OOL (whatever it was) and LUCA can be very great anyway, even for those who believe in some model of gradual OOL: let’s say 100-200 My?

  18. BA:

    No, I don’t think that’s the point. I am still reading the paper, so I could be wrong on some points, but I believe that it explores, through a computational model, how proteins which were very similar (or identical) at the beginning can become different in time at the primary sequence level, while retaining their 3D structure and function, through random mutations and negative selection.

    IOW, the protein family starts from a small point (at the “big bang” in protein space), and then, throughout evolution, it “explores” its functional space while remaining essentially the same at the functional level, and changing its primary structure as far as functional constraints allow.

    That has nothing to do with finding a new functional space and function. It is instead a way to reason about functional spaces as related to the general search space. For instance, an interesting point is that according to the authors the exploration of the functional space is slow and constant, due to functional constraints. That’s very interesting, considering that those protein clusters were already functionally defined and separated in LUCA, and have since continued to diverge at the level of primary sequence. The obvious question is: how was the functional information present at the “big bang” achieved, especially if it was so complex and compact at that time?

  19. bornagain, gpuccio,

    Thanks for coming back. My hunch is, like bornagain’s, that the required computational capacity goes beyond the available resources. I also think that the margin is enormous, i.e. that not even a single, primitive bacterium could be designed from scratch.

    Now if the margin is indeed so large, then perhaps it will be possible to identify a well defined subsystem where the estimates can be performed. Also, the realistically available resources for the project are just a small portion of the mass and energy of the universe; after all, we see stars and galaxies around us, not the interiors of a mighty research lab. Anyway, I will keep on thinking about a possible way to attack the problem.

    I think the issue is important, because the Darwinist camp keeps on pestering the ID folks for more details about the Designer. Most of the ID studies I am aware of focus on showing

    1. the existence of design

    2. the utter inability of unintelligent processes to generate significant amounts of information

    i.e. the main thrust was to disprove Darwinism. Now, paving the way to calculate the resources necessary for an intelligent design procedure could be a unique ID-related research subject, with certainly serious philosophical consequences. Dr Dembski’s studies into “search for an optimum search” definitely seem to go in this direction as well.

  20. Alex73:

    Your points are interesting. I still believe we know too little to make that kind of assessment, although I am very confident that even the complexities in biological information are certainly accessible to intelligent speculation.

    A starting point could be to wait and see if protein engineers fare better in building functional proteins “from scratch”. Up to now, the results are not encouraging, but as I said, we have just begun.

  21. gpuccio you said:

    “The obvious question is: how was the functional information present at the “big bang” achieved, especially if it was so complex and compact at that time?”

    That is indeed a very important question, and Dr. Meyer has given evolutionists no small headache over it, but I still find the assumptions in their model to be problematic, and I feel fairly confident, from the few studies I’ve seen so far, that even their “modest” assumptions for “functional spaces as related to the general search space” will be found to be way too optimistic as to what the “real world” will allow.

    A couple of videos that might be of interest to some:

    Life On Earth Its earliest evidence – video
    http://science.discovery.com/v.....dence.html

    U-rich Archaean sea-floor sediments from Greenland – indications of >3700 Ma oxygenic photosynthesis
    http://adsabs.harvard.edu/abs/2004E&PSL.217..237R

    Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Gregory S. Engel, Nature (12 April 2007)
    Photosynthetic complexes are exquisitely tuned to capture solar light efficiently, and then transmit the excitation energy to reaction centres, where long-term energy storage is initiated… This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path. Conclusion? Obviously photosynthesis is a brilliant piece of design by “Someone” who even knows how quantum mechanics works.
    http://www.ncbi.nlm.nih.gov/pubmed/17429397

    Dr. Hugh Ross – Origin Of Life Paradox – video
    http://www.metacafe.com/watch/4012696

    Life – Its Sudden Origin and Extreme Complexity – Dr. Fazale Rana – video
    http://www.metacafe.com/watch/4287513

    Biological Big Bangs – Origin Of Life and Cambrian – Dr. Fazale Rana – video
    http://www.metacafe.com/watch/4284466

    The Biological Big Bang model for the major transitions in evolution – Eugene V Koonin – Background:
    “Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin’s original proposal, remains the dominant description of biological evolution. The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life’s history, the principal “types” seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate “grades” or intermediate forms between different types are detectable;
    http://www.biology-direct.com/content/2/1/21

  22. That’s just my idea, but I don’t believe that anyway the gap between OOL (whatever it was) and LUCA can be very great, even for those who believe in some model of gradual OOL: let’s say 100 – 200 My?

    Considering that bacteria exchange DNA rather promiscuously, I’m not sure the concept of a single common ancestor of single celled organisms makes much sense.

    Petrushka, though I don’t want to waste time diverging from the main topic of protein evolution, or lack thereof, I want to take one post to state that I would hardly describe the method by which bacteria transfer DNA as “promiscuous”:

    In particular, one method of DNA transfer between bacteria gives clear indication of being an intelligently designed method of communication between bacterial cells:

    Transduction (genetics)
    http://en.wikipedia.org/wiki/T.....(genetics)

    The Bacteriophage Virus – Assembly Of A Molecular “Lunar Landing” Machine – Intelligent Design – video
    http://www.metacafe.com/watch/4023122/

    As well, need I remind you of Dr. Cano’s studies of ancient bacteria which show extreme conservation of molecular sequences?

  24. Petrushka:

    LUCA is not my personal invention: it’s a very widespread assumption of the main darwinist theory.

    Even if there was exchange of genes in the early history of life, where did those genes come from?

    If you believe in common descent (and I do), it seems reasonable that some early common ancestor must have existed. And the criteria for ascribing to that ancestor a basic protein repertoire, while certainly indirect, seem reasonable enough. And that basic repertoire, according not to me or to ID, but to current Darwinist science, seems to have been rather rich and complex.

  25. Alex @2

    You asked “Can an intelligent agent, confined fully within this known universe, design life (exactly as we know it) from scratch?”

    According to Dawkins, intelligent aliens can do it. But more importantly, why the limitation of “confined fully within this known universe”? Correct me if I’m wrong, but evidence of design is not contingent upon its being only from within this universe.

  26. Bantay:

    Your remark is obviously right: we can think of a transcendent designer, or of an omniscient designer, or of an omnipotent designer.

    But we can also think of an immanent designer, who acts with power, but also with context dependent limitations.

    As ID, in its current status, cannot give us detailed inferences about the nature of the designer, it is important to explore all possibilities, at least as potential models.

    And I do believe that an immanent, non omniscient designer could do it, could build up biological information. That’s an important point, whatever the final nature of the designer will be found to be. It’s a point about design, and about intelligent conscious beings. And it’s a point which must be discussed.

  27. If you believe in common descent (and I do), it seems reasonable that some early common ancestor must have existed.

    I think LUCA is considered to be a population of DNA-exchanging organisms rather than an individual. It’s not all just-so stories; you can see bacteria exchanging DNA with a light microscope.

    The question about where and when genes originated is a gaps question, and gaps questions are attacked with research.

    You can do, as your referenced paper does, and work backward from existing proteins, or you can work forward from synthesized replicators, or you can do both.

    We are used to seeing difficult problems in science solved quickly — in a few decades at most — and we tend to forget that problems like gravity have remained incompletely solved for centuries.

    The argument from gaps erodes over time, although the rate of progress is variable.

  28. Alex73,

    I found this more recent article on the limits of quantum computing:

    The Limits of Quantum Computers – March 2008
    Excerpt:
    Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today’s computers only modestly. This realization may lead to a new fundamental physical principle.

    Key Concepts

    * Quantum computers would exploit the strange rules of quantum mechanics to process information in ways that are impossible on a standard computer.
    * They would solve certain specific problems, such as factoring integers, dramatically faster than we know how to solve them with today’s computers, but analysis suggests that for most problems quantum computers would surpass conventional ones only slightly.
    * Exotic alterations to the known laws of physics would allow construction of computers that could solve large classes of hard problems efficiently. But those alterations seem implausible. In the real world, perhaps the impossibility of efficiently solving these problems should be taken as a basic physical principle.
    http://www.scientificamerican......-computers

    Thus it seems, Alex73, that the maximum limit of computing achievable by the most “perfect” ideal supercomputer in the physical universe will not be able to surpass the threshold of resources that has already been granted to evolutionists. Namely:

    Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created.
    http://www.fourmilab.ch/docume.....k_726.html

  29. Doug Axe also used a computer example to illustrate the extreme rarity of finding a functional protein:

    Doug Axe Knows His Work Better Than Steve Matheson
    Excerpt: Suppose a secretive organization has a large network of computers, each secured with a unique 39-character password composed from the full 94-charater set of ASCII printable characters. Unless serious mistakes have been made, these passwords would be much uglier than any you or I normally use (and much more secure as a result). Try memorizing this:

    C0$lhJ#9Vu]Clejnv%nr&^n2]B!+9Z:n`JhY:21

    Now, if someone were to tell you that these computers can be hacked by the thousands through a trial-and-error process of guessing passwords, you ought to doubt their claim instinctively. But you would need to do some math to become fully confident in your skepticism. Most importantly, you would want to know how many trials a successful hack is expected to require, on average. Regardless of how the trials are performed, the answer ends up being at least half of the total number of password possibilities, which is the staggering figure of 10^77 (written out as 100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000). Armed with this calculation, you should be very confident in your skepticism, because a 1 in 10^77 chance of success is, for all practical purposes, no chance of success. My experimentally based estimate of the rarity of functional proteins produced that same figure, making these likewise apparently beyond the reach of chance.
    http://www.evolutionnews.org/2.....35561.html
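    As a quick sanity check of the arithmetic in Axe’s password analogy quoted above, here is a short Python sketch (my own, not from the linked article): 94 printable characters at each of 39 positions gives 94^39 possibilities, and a blind trial-and-error hack needs on average half of them.

```python
from math import log10

total = 94 ** 39              # every possible 39-character password
expected_trials = total // 2  # average trials for a blind guess-and-check hack

print(f"total combinations ~ 10^{log10(total):.1f}")        # ~ 10^77.0
print(f"expected trials    ~ 10^{log10(expected_trials):.1f}")  # ~ 10^76.7
```

    Both figures round to the “staggering figure of 10^77” the excerpt cites.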

  30. Does your experimentally based estimate account for the fact that functional proteins can be found in random distributions accessible in ordinary human lifetimes?

  31. Bantay,

    Indeed, mere evidence for design does not require identification of a designer. ID proponents are still often asked who their Designer was. Some say that there is no reason to appeal to a supernatural designer. Even Dawkins accepts that life could be designed (by other, non-designed, evolved intelligent beings, that is).

    Brute force searches for functional proteins, however, would fail even if we had infinite time, simply because there is not enough matter in the universe to store a single bit of information about each possible combination. We would just run out of memory. Consequently, we will never, ever, ever, ever, ever, ever, ever, ever, ever, ever be able to learn about all possible combinations.
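    To put a rough number on the memory claim above (a back-of-envelope sketch of mine, not from the comment): compare the sequence space of a 300-residue protein, the figure used in the reviewed article, against a conventional ~10^80 estimate for the particles in the observable universe (an assumption here).

```python
from math import log10

sequence_space = 20 ** 300  # 20 amino acids at each of 300 positions
# Conventional rough estimate for elementary particles in the
# observable universe (assumption): ~10^80.
print(f"sequence space         ~ 10^{log10(sequence_space):.0f}")       # ~ 10^390
print(f"sequences per particle ~ 10^{log10(sequence_space) - 80:.0f}")  # ~ 10^310
```

    So even one bit of storage per particle would cover only about 1 in 10^310 of the combinations.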

    In the past, when we learned something from nature we could often use it in engineering. I expect protein engineering to become a proper technical discipline of its own in my lifetime. Maybe then, as gpuccio also confirms, we will learn how to find islands of function in this vast space.

    On the other hand, it is also jolly well possible that once we try to make our own we will find the already existing proteins even more amazing.

    bornagain, gpuccio,

    Thanks for your comments.

    Petrushka, number one, exactly who are you talking to? And number two, where is your referenced citation for your assertion?

  33. Szostak’s 1 in 10^12?

    sorry petrushka, no soup for you:

    Szostak’s number of 1 in 10^12 is severely misleading as to finding a protein that will perform a specific function. As well, Szostak also seems to have allowed “slop” in his experiment with “tethered” mRNAs. Plus, Szostak’s work has now been brought into severe question by empirical work which shows his proteins lead to “cascading failure”:

    A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells
    Excerpt: “Recent advances in de novo protein evolution have made it possible to create synthetic proteins from unbiased libraries that fold into stable tertiary structures with predefined functions. However, it is not known whether such proteins will be functional when expressed inside living cells or how a host organism would respond to an encounter with a non-biological protein. Here, we examine the physiology and morphology of Escherichia coli cells engineered to express a synthetic ATP-binding protein evolved entirely from non-biological origins. We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division.”
    http://www.plosone.org/article.....ne.0007385

    It is also interesting to note:

    Yet even if Szostak’s “optimistic” 1 in 10^12 (trillion) number were true, if you can call 1 in a trillion optimistic, for finding biologically functional proteins in sequence space, it would still be so rare as to present insurmountable mathematical difficulties for any evolutionary scenario. There simply is no vast reservoir of trillions upon trillions of “junk proteins” to be sifted through in nature (proteins don’t form “naturally”) waiting to accidentally form into a “simple” self replicating molecule. Nor is there any vast reservoir of junk proteins, to be found in living organisms, waiting for natural selection to sift through them to find any novel combinations that may be useful. In fact ribosomes are severely intolerant of inexact proteins:

    The Ribosome: Perfectionist Protein-maker Trashes Errors
    Excerpt: The enzyme machine that translates a cell’s DNA code into the proteins of life is nothing if not an editorial perfectionist…the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products… To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is “shocking” and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis.
    http://www.sciencedaily.com/re.....134529.htm

    So, since humans have 80% different proteins than chimps, how in the world did this occur with a system so dead set against variance, Petrushka?

    As well, the “protein factory” of the ribosome is far more complicated than first thought:

    Honors to Researchers Who Probed Atomic Structure of Ribosomes – Robert F. Service
    Excerpt: “The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.”
    http://www.creationsafaris.com.....#20091015a

  34. “This selection yielded four new ATP-binding proteins that appear to be unrelated to each other or to anything found in the current databases of biological proteins.”

  35. Petrushka:

    Ah, the famous Szostak paper!

    I would like to comment extensively about it, but probably I will not have the time today. I will as soon as possible.

  36. @Bornagain (#28)

    You wrote:

    Thus it seems Alex73 the maximum limit of computing that is achievable by the most “perfect” ideal supercomputer in the physical universe will not be able to surpass the threshold that has already been granted to evolutionists for resources,

    Born,

    Out of curiosity, how did you reach this conclusion? For example…

    - Are you a subscriber to Scientific American? If not, you only had access to a summary of the article’s contents.
    - You seem to have merely assumed that quantum computing represents the most “perfect” ideal supercomputer in the physical universe, rather than derived it from the article or provided an argument or reference.
    - The qualifier “most” clearly does not specify if protein folding was one of the problems that would not benefit from quantum computing. You seem to have merely assumed it was.
    - For you to know a threshold was not surpassed, you’d need to know the resulting limit once expanded by quantum computing and the computational resources granted or available to “evolutionists” to solve the problem. However, the article did not present any of these things, nor did you provide any numbers or references. Again, you merely seemed to assume this was the case.

    Note: I’m not trying to argue one way or the other, I’m just trying to understand how you reached this specific conclusion. It’s a mystery to me.

    For example, since I *do* know that search is one of the kinds of computational problems that benefit from quantum computing, I recognized it as a natural fit for the job of searching through folding patterns. And since I knew Grover’s algorithm is one of the most well known quantum search algorithms, I did a Google search for both “Grover’s algorithm” and protein. As I expected, I found articles like this…

    http://cdsweb.cern.ch/record/4.....002076.pdf

    To quote from the research paper….

    Some fundamental tasks in biocomputing involving sequence analysis include: searching databases in order to compare a new sub-sequence to existing sequences, inferring protein sequence from DNA sequence, and calculation of sequence alignment in the analysis of protein structure and function. A tremendous amount of computing is required, much of which is devoted to search-type problems, either directly in large databases, or in configuration space of alignment possibilities. While it is possible that all of these problems may be amenable to quantum algorithmic speed-up, it is explicitly demonstrated in this work how the fundamental task of sequence alignment can be approached using a quantum computer. Indeed, this problem is a very natural application of the quantum search algorithm (perhaps a strange reflection of the possibility that the machinery of DNA itself may actually function using quantum search algorithms [3]).

    Again, I’m not arguing one way or another. I’m just noting that, in reaching your conclusion, you made many assumptions which were not evident and at least one of those assumptions was false.

    Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created.

    Of course, the hard-to-vary explanation of why Grover’s search algorithm costs O(√N) rather than the classical O(N) is that it executes in parallel on 10^500 universes. This would seem to increase the odds dramatically.

    Again, I’m not arguing one way or another. Nor am I suggesting that posting and quoting links does not add to the discussion. I’m illustrating that it’s *not* a substitute for actually understanding the subject at hand.
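    For what it’s worth, the effect of a Grover-style quadratic speedup on the numbers discussed in this thread is easy to sketch (my own check, not from either linked article): an unstructured search over N items drops from ~N/2 expected classical queries to ~√N quantum queries. Applied to the ~10^77 password space from Axe’s analogy:

```python
from math import log10

N = 94 ** 39                     # the ~10^77 search space from the password analogy
classical = log10(N) - log10(2)  # log10 of expected classical trials, N/2
grover = log10(N) / 2            # log10 of ~sqrt(N) Grover queries

print(f"classical ~ 10^{classical:.1f} queries")  # ~ 10^76.7
print(f"Grover    ~ 10^{grover:.1f} queries")     # ~ 10^38.5
```

    So even granting the quadratic speedup, roughly 10^38 queries remain, which is still far beyond any physically plausible number of trials.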

  37. Alex73 @ #2,

    I’ve been waiting for this argument of yours to be made.

    I’ve been thinking along the same lines but have never been able to write it down in a satisfactory English. Some of my argument is built on my understanding of “A Different Universe” by Prof. Robert B. Laughlin

    veilsofmaya, I appreciate your link, as I am a big fan of the potential of quantum computing doing something breathtaking, but you don’t seem to understand the full scope of the problem: searching for an unknown sequence in sequence space, and then determining whether that unknown sequence will be functional, are not the same problem. They are two completely different computational problems entirely. Sorry for not pointing that out earlier.

    In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, which was 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it was estimated to take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape.

    “Blue Gene’s final product, due in four or five years, will be able to “fold” a protein made of 300 amino acids, but that job will take an entire year of full-time computing.” Paul Horn, senior vice president of IBM research, September 21, 2000
    http://www.news.com/2100-1001-233954.html

    “SimCell,” anyone?
    “Unfortunately, Schulten’s team won’t be able to observe virtual protein synthesis in action. Even the fastest supercomputers can only depict such atomic complexity for a few dozen nanoseconds.” – cool cellular animation videos on the site
    http://whyfiles.org/shorties/230simcell/

    Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single protein molecule:

    A Few Hundred Thousand Computers vs. A Single Protein Molecule – video
    http://www.metacafe.com/watch/4018233

    In real life, the protein folds into its final shape in a fraction of a second! The Blue Gene computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. And this is the complexity found for folding JUST ONE “simple” existing protein molecule; there are many crucial proteins that are thousands of amino acids long, for which the problem of computing folding is exponentially worse.
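    The “33 million times faster” figure above is roughly one year of computing compressed into one second of real folding time. A back-of-envelope sketch (mine; I assume a ~1 second fold, while real folding times range from microseconds to seconds):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

fold_time_s = 1.0  # assumed wall-clock fold time (upper-end assumption)
speedup_needed = SECONDS_PER_YEAR / fold_time_s

print(f"required speedup ~ {speedup_needed / 1e6:.1f} million x")  # ~ 31.5 million x
```

    About 31.5 million, in the same ballpark as the 33 million quoted; a sub-second fold only makes the required speedup larger.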

    As a sidelight to the complexity found for just folding existing proteins, the complexity of computing the actions of even a simple atom quickly exceeds the capacity of our supercomputers of today:

    Delayed time zero in photoemission: New record in time measurement accuracy – June 2010
    Excerpt: Although they could confirm the effect qualitatively using complicated computations, they came up with a time offset of only five attoseconds. The cause of this discrepancy may lie in the complexity of the neon atom, which consists, in addition to the nucleus, of ten electrons. “The computational effort required to model such a many-electron system exceeds the computational capacity of today’s supercomputers,” explains Yakovlev.
    http://www.physorg.com/news196606514.html

    I like this tidbit you had veilsofmaya:

    (perhaps a strange reflection of the possibility that the machinery of DNA itself may actually function using quantum search algorithms [3]).

    It would not surprise me in the least if this were true since I hold the Designer invented/invents quantum mechanics as well.

    Veilsofmaya, you are so committed to the absurd metaphysical position of 10^500 parallel universes, as witnessed in your exchange with nullasalus the day before yesterday, that I ain’t even going to try to talk you out of it, save to say that even if it is true, which I have severe reservations about, it does not negate Theism in the least, as you would hope it would.

  39. anaruiz stated:

    “This selection yielded four new ATP-binding proteins that appear to be unrelated to each other or to anything found in the current databases of biological proteins.”

    It amazes me how people will always take any trivial evidence for evolution and then extrapolate wildly from it that purely material processes can generate complexity that dwarfs our puny understanding, all the while neglecting to honestly look at what their evidence actually says:

    Having a 1 in 10^12 protein sequence “stick to” the universal energy molecule ATP is not surprising; in fact, I am surprised more sequences do not “stick to” ATP. But having a protein “stick to” ATP and having a protein utilize ATP for a specific constructive purpose in manufacturing, or work, are two entirely different things. Thus it is not surprising in the least that Szostak’s “new” proteins led to a cascading failure of energetic balance when, as I referenced earlier:

    “We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division.”

    Now what would be interesting, since evolutionists are having such a hard time grasping truly functional proteins, is if evolutionists could demonstrate how the “simple” ATP molecule, which is necessary for the construction of functional proteins, arose without the ATP synthase enzyme in the first place:

    Evolution Vs ATP Synthase – Molecular Machine – video
    http://www.metacafe.com/watch/4012706

    Molecular Machine – The ATP Synthase Enzyme – video
    http://www.metacafe.com/watch/4380205

    Best Look Ever at Life’s Smallest Rotary Motor – article
    Excerpt: They imaged 19,825 motors to increase the average resolution down to 1.6 Angstroms (16 nanometers, or billionths of a meter). As a result, they were able to map out all the parts in better detail than ever, which are shown in photographs and diagrams in the paper.
    http://www.creationsafaris.com.....#20100107c

    “There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject.”
    James Shapiro – Molecular Biologist

    ATP Synthase achieves nearly 100% efficiency, which far surpasses any human-engineered motor:

    A rotary molecular motor that can work at near 100% efficiency:
    Excerpt: In cells, the free energy of ATP hydrolysis is ca. 90 pN nm per ATP molecule, suggesting that the F1 motor can work at near 100% efficiency. We confirmed in vitro that F1 indeed does ca. 80 pN nm of work under the condition where the free energy per ATP is 90 pN nm. The high efficiency may be related to the fully reversible nature of the F1 motor:
    http://rstb.royalsocietypublis.....3.abstract
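    A small check of the numbers in the excerpt just above (my own arithmetic, not from the paper): ~80 pN·nm of measured work against ~90 pN·nm of free energy per ATP implies roughly 89% efficiency under that condition, which is what “near 100%” is rounding up.

```python
work_per_atp = 80.0         # pN·nm of mechanical work measured in vitro
free_energy_per_atp = 90.0  # pN·nm available from ATP hydrolysis in cells

efficiency = work_per_atp / free_energy_per_atp
print(f"efficiency ~ {efficiency:.0%}")  # ~ 89%
```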

    Worlds Smallest Rotary Engine Highlighted
    Excerpt: The match implies 100% efficiency for the conversion of the Gibbs free energy of ATP hydrolysis into mechanical work performed on the elastically strained filament. This is not surprising given the approximate thermodynamic equilibrium of the enzyme (long)-filament construct.
    http://www.creationsafaris.com.....#20090525a

    ATP: The Perfect Energy Currency for the Cell
    Jerry Bergman, Ph.D.
    http://www.trueorigin.org/atp.asp

    Excerpt:
    Without ATP, life as we understand it could not exist. It is a perfectly-designed, intricate molecule that serves a critical role in providing the proper size energy packet for scores of thousands of classes of reactions that occur in all forms of life. Even viruses rely on an ATP molecule identical to that used in humans. The ATP energy system is quick, highly efficient, produces a rapid turnover of ATP, and can rapidly respond to energy demand changes (Goodsell, 1996, p.79).

    Furthermore, the ATP molecule is so enormously intricate that we are just now beginning to understand how it works. Each ATP molecule is over 500 atomic mass units (500 AMUs). In manufacturing terms, the ATP molecule is a machine with a level of organization on the order of a research microscope or a standard television (Darnell, Lodish, and Baltimore, 1996).

  40. William J. Murray

    Alex73,

    You brought up a very interesting point.

    Forget about designing life; does the human brain have the physical informational resources, in this universe, to even produce a ten page report on the subject of the origin of life?

    I think that another avenue of intelligent design theory that should be pursued is what the human capacity to produce intelligent design means; I think the idea that humans are just computational collections of bumping molecules can be scientifically defeated in the same way that Darwinism can be defeated.

    IMO, a human being cannot be solely a physical object because, if they were, they would only have available to them the limit of physical computational resources. Therefore, they could not be expected to produce artifacts that lie outside of the computational resources of the natural laws and material characteristics of the universe.

  41. If you haven’t read the Princeton “cruise control” article that BA posted at #12 (it’s from Nov. 2008, so it may have been discussed here), you must read it.

    “Optimally designed” feedback control systems are apparently a sweeping victory for Darwin!

    The article describing these mechanisms screams for intelligence, and further pushes randomness out of the picture. But in the end, they conclude:

    The scientists do not know how the cellular machinery guiding this process may have originated…

    Maybe it was another group of machinery, which, of course, was formed through good ol’ RM+NS (which continues to shrink into irrelevance, yet holds on victoriously through pure dogma)

    …but they emphatically said it does not buttress the case for intelligent design, a controversial notion that posits the existence of a creator responsible for complexity in nature.

    Chakrabarti said that one of the aims of modern evolutionary theory is to identify principles of self-organization that can accelerate the generation of complex biological structures. “Such principles are fully consistent with the principles of natural selection. Biological change is always driven by random mutation and selection, but at certain pivotal junctures in evolutionary history…

    a.k.a. anytime almost anything noteworthy is developed

    …such random processes can create structures capable of steering subsequent evolution toward greater sophistication and complexity.”

    And these guys are Princeton scientists? The logic in that article will make your head pound. Absolutely unreal.

  42. William J. Murray,

    I share your view about testing the true weight borne by any hypothesis that wants to reduce human consciousness and intellect to the chemistry contained in the physical individual.

    One avenue that seems rather easy to me would be to make an accurate calculation of the true memory requirement of our total human experience. Information theory has given us the means to do that, but it seems as if isolating memory-carrying patterns in the brain still eludes us, due to the neuro-plasticity of the brain.

    But the obvious approach would be not to try to find the memories in the brain. We simply need to calculate the memory requirement of humans as they exert their will in the physical world. This will include all human experiences, thoughts, dreams, reactions to stimuli, recollection of memories and all conscious actions. We can also calculate a proposed algorithmic memory requirement that maintains our personality and our ability to apply our knowledge.
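    As a rough illustration of the kind of calculation proposed above, here is a Fermi estimate in Python. Every figure in it is an assumed placeholder for illustration, not a measured value:

```python
# Every figure below is an assumed placeholder for illustration,
# not a measured value.
SECONDS_PER_YEAR = 3600 * 24 * 365
lifetime_years = 80
assumed_bits_per_second = 1e7  # assumed effective experiential bandwidth

# Total information throughput of a lifetime under these assumptions:
total_bits = assumed_bits_per_second * lifetime_years * SECONDS_PER_YEAR
total_terabytes = total_bits / 8 / 1e12
print(f"~{total_bits:.2e} bits (~{total_terabytes:.0f} TB)")
```

    The point is only that the calculation itself is easy once the bandwidth figure is settled; settling it is the hard part.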

  43. whoisyourcreator

    FYI: You’re all missing the most interesting step in protein folding and that is that proteins can’t fold without the miracle of water.
    Don’t you just love that water evolved by chance, and that hydrogen bonds are close to impossible for man to break, yet break and reform every 1.3 picoseconds by ‘mother nature’?:

    “From their previous work, the RUB-researchers already knew about the strong influence of proteins on the water in its vicinity. In the bulk, every 1.3 picoseconds hydrogen bonds are formed and broken between single water molecules – thus resulting in a fairly disorder liquid. However, even small protein concentrations bring the water molecules more in line with each other. The dynamic motions of the water network are altered by the protein. Folded proteins were also known to show a significantly different influence on water molecules than unfolded proteins. Now KITA-spectroscopy for the first time allowed insight into the time-period in-between these two states.”
    “Protein Folding: One Picture Per Millisecond Illuminates The Process” Aug 6, 2008.
    http://www.sciencedaily.com/re.....075610.htm

    “Nature has developed extremely efficient water-splitting enzymes – called hydrogenases – for use by plants during photosynthesis, however, these enzymes are highly unstable and easily deactivated when removed from their native environment. Human activities demand a stable metal catalyst that can operate under non-biological settings.”:
    “Berkeley Scientists Discover Inexpensive Metal Catalyst for Generating Hydrogen from Water” April 30, 2010
    http://newscenter.lbl.gov/news.....rom-water/

  44. whoisyourcreator:

    I liked your protein folding and water reference,,, I found this related article when looking at it:

    Water Is ‘Designer Fluid’ That Helps Proteins Change Shape – 2008
    Excerpt: “When bound to proteins, water molecules participate in a carefully choreographed ballet that permits the proteins to fold into their functional, native states. This delicate dance is essential to life.”
    http://www.sciencedaily.com/re.....113314.htm

    It seems the Darwinian Gestapo forgot to tell the authors of the article that the word Designer is never, ever, to be used when referring to the biological processes of a cell. 8)

  45. Hey William J.—I’ve often wondered about this myself. In dreams, for instance, we create whole landscapes and new people out of thin air, and in fact whole dialogues in which we (the presumed creators of the dialogue) don’t actually know what’s coming next. Based on what we know about computers, what physical resources are necessary to make this possible?

  46.

    @bornagain77 (#39)

    Born, my point is that you seem to have merely assumed that the problem would not benefit from quantum computing. From your Scientific American article quote…

    [QC] would solve certain specific problems, such as factoring integers, dramatically faster than we know how to solve them with today’s computers, but analysis suggests that for most problems quantum computers would surpass conventional ones only slightly.

    The qualifier “Most problems” does not include or exclude protein folding. Yet, you seem to have assumed exclusion. Why?

    They are two completely different computational problems entirely. Sorry for not pointing that out earlier.

    OK, assuming two sub-problems, how do you know quantum computing would not benefit both of them, rather than just one? You seem to be making the very same assumption, even after having it pointed out to you.

    Furthermore, are both parts of the problem intractable? That is, determining whether a sequence will be functional could be the “easy” part, while searching is the exponentially difficult part. If this is the case, it’s irrelevant whether QC can benefit the verification sub-problem, as classical computers may be able to solve it in a reasonable amount of time.

    In fact, both Shor’s algorithm and Grover’s algorithm use classical computation to set up the quantum computation engine. This is because the setup is trivial to calculate classically. Only the part that is exponentially difficult is computed using quantum means.

    Again, it seems you have not considered any of these things before reaching your conclusion. Instead it seems you’ve merely posted articles that you *think* support your position, but which appear not to once we actually understand the problem at hand.

    Veilsofmaya, you are so committed to the absurd metaphysical position of 10^500 parallel universes, as witnessed in your exchange with nullasalus the day before yesterday, I ain’t even going to try to talk you out of it save to say even if it is true, which I have severe reservations about, it does not negate Theism in the least as you would hope it would.

    First, MWI of quantum mechanics does not suggest there are only 10^500 universes. It suggests that there are 10^500 universes that are similar enough to our own to be useful in the context of the computation. Apparently, you’ve reached a conclusion on MWI without actually understanding it either.

    You might want to read up on it here.

    Also, if the MWI is so absurd, then perhaps you can explain this poll of the leading cosmologists and other quantum field theorists?

    Regardless of why Grover’s algorithm operates at O(√N) rather than the classical cost of O(N), it’s still staggeringly faster on large problem sets than classical search algorithms. In fact, at staggeringly large search spaces, the benefit becomes staggeringly large.

    Specifically, as a search space grows staggeringly large, the number of operations necessary to search it with classical computing algorithms also grows staggeringly large. But this is not the case with Grover’s algorithm, as the number of operations increases at a mere fraction of the rate.

    That Grover’s algorithm runs on 10^500 universes in parallel is the explanation for why the number of operations does not grow nearly as fast as with corresponding classical algorithms. Even if this explanation were wrong, it doesn’t change the fact that the number of operations required would be staggeringly less.
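    The difference in query counts can be sketched with a few lines of code. This is an idealized unstructured-search comparison only; real implementations carry constant-factor overheads:

```python
import math

def classical_queries(n):
    # Worst case for an unstructured classical search: check every entry.
    return n

def grover_queries(n):
    # Grover's algorithm needs about (pi/4) * sqrt(N) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for exp in (6, 12, 24):
    n = 10 ** exp
    print(f"N = 10^{exp}: classical ~{classical_queries(n):.1e} queries, "
          f"Grover ~{grover_queries(n):.1e} queries")
```

    As N grows, the gap between the two columns widens without bound, which is the point being made above.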

    This is why I was confused when you wrote..

    It would not surprise me in the least if this were true since I hold the Designer invented/invents quantum mechanics as well.

    So, quantum computing actually can solve the problem, it’s just that only the designer can use it and we can’t? Is quantum computing supernatural?

  47.

    @bornagain77 (#39)

    *Please note*

    In the context of my comments on this thread, I am NOT arguing that your conclusion is true or false.

    However, I am arguing that the means by which you reached your conclusion appear to depend on assumptions that are not evident in the references you posted and are the result of a fundamental lack of understanding of the subjects they contain.

  48.

    I wrote:

    That Grover’s algorithm runs on 10^500 universes in parallel is the explanation for why the number of operations does not grow nearly as fast as with corresponding classical algorithms. Even if this explanation were wrong, it doesn’t change the fact that the number of operations required would be staggeringly less.

    In other words, we already know only a fraction of quantum instructions would be needed. This can be verified in the same way that we know a staggering number of classical instructions would be needed – using a handful of equations.

    Instead, the MWI is an explanation for *why* so few quantum instructions are necessary.

    Specifically, the reason why a few quantum instructions can perform the same computational task as a staggering number of classical instructions is that each of these quantum instructions is actually executing in parallel on a staggering number of slightly different universes.

  49. veilsofmaya, as I stated earlier I don’t want to try to convince you that your metaphysical presupposition of MWI is wrong, seeing as you are so attached to it no matter how many flaws are pointed out to you, even as nullasalus so eloquently pointed them out the other day. And this is definitely not the thread for you to drag everyone through 100′s of comments on your MWI metaphysics.

    But as for you imagining protein folding will be greatly enhanced by “real world” quantum computing:

    The Limits of Quantum Computers – 2007
    Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to “solve impossible problems in an instant” by trying exponentially many solutions in parallel. In this talk, I’ll describe four results in quantum computing theory that directly challenge this view.
    http://www.springerlink.com/co.....330115207/

  50. From your reference, whoisyourcreator, it states:

    “The dynamic motions of the water network are altered by the protein. Folded proteins were also known to show a significantly different influence on water molecules than unfolded proteins.”
    http://www.sciencedaily.com/re.....075610.htm

    what makes this amazing is that water only “turns on” this ability to “dance with the protein” once the protein is sequenced; prior to the amino acids being sequenced into an unfolded protein by the ribosome, water is definitely not in the mood to dance:

    Abiogenic Origin of Life: A Theory in Crisis – Arthur V. Chadwick, Ph.D.
    Excerpt: The synthesis of proteins and nucleic acids from small molecule precursors represents one of the most difficult challenges to the model of prebiological evolution. There are many different problems confronted by any proposal. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors depolymerization of any molecules that might be formed. Careful experiments done in an aqueous solution with very high concentrations of amino acids demonstrate the impossibility of significant polymerization in this environment. A thermodynamic analysis of a mixture of protein and amino acids in an ocean containing a 1 molar solution of each amino acid (100,000,000 times higher concentration than we inferred to be present in the prebiological ocean) indicates the concentration of a protein containing just 100 peptide bonds (101 amino acids) at equilibrium would be 10^-338 molar. Just to make this number meaningful, our universe may have a volume somewhere in the neighborhood of 10^85 liters. At 10^-338 molar, we would need an ocean with a volume equal to 10^229 universes (100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000) just to find a single molecule of any protein with 100 peptide bonds. So we must look elsewhere for a mechanism to produce polymers. It will not happen in the ocean.
    http://origins.swau.edu/papers.....fault.html
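    As a sanity check, the arithmetic in the excerpt can be reproduced from the quoted figures alone. We work in log10 because a volume of ~10^314 liters overflows an ordinary floating-point number:

```python
import math

# Figures taken directly from the excerpt above:
log10_conc = -338            # equilibrium concentration, mol/L (log10)
log10_universe_liters = 85   # assumed volume of the universe, liters (log10)
log10_avogadro = math.log10(6.022e23)

# Volume (liters, log10) expected to contain one such protein molecule:
log10_volume_for_one = -(log10_conc + log10_avogadro)

# Express that volume as a multiple of the universe's volume:
log10_universes = log10_volume_for_one - log10_universe_liters
print(f"volume needed ~ 10^{log10_universes:.0f} universes")  # ~10^229
```

    The result matches the ~10^229 universes figure quoted in the excerpt.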

  51.

    @bornagain77 (#50)

    Born,

    It appears you don’t understand what nullasalus wrote either.

    First, he was arguing that MWI was swallowed by intelligent design. That is, MWI should be included as an intelligent design theory. See here. This has no bearing on whether it is a tenable theory or not.

    Second, he merely asserted it was “supernatural” without clearly explaining how he reached this conclusion, or why all supernatural things are not nonsense as well. It seemed to be an argument from absurdity, with a mere assertion that it was absurd. See here and here.

    And this is definitely not the thread for you to drag everyone through 100’s of comments on your MWI metaphysics.

    Again, I’m not arguing for or against anything in this thread. I’m merely pointing out what appears to be serious flaws in the way you’ve reached your conclusion.

    But as for you imagining protein folding will be greatly enhanced by “real world” quantum computing:

    Born, you’ve done exactly the same thing, despite my having pointed it out to you *twice*.

    - Are you a subscriber to SpringerLink? If not, you have access only to a brief summary of the paper.
    - That quantum computers are not almost magical devices, able to “solve impossible problems in an instant”, is not the same thing as claiming a quantum computer could not solve protein folding problems in a few minutes, hours, days, years or even a few centuries. You’ve merely assumed this is the case.

    Again, I’m not arguing one way or another. Nor am I suggesting that posting and quoting links does not add to the discussion. I’m illustrating that it’s *not* a substitute for actually understanding the subject at hand.

  52. veils: to repeat, this is not the thread to rehash your MWI metaphysics, but as to your claim that the problem of protein folding “may” fall to the power of quantum computing: protein folding is found to be an NP-complete problem:

    Complexity of protein folding
    Abstract It is believed that the native folded three-dimensional conformation of a protein is its lowest free energy state, or one of its lowest. It is shown here that both a two-and three-dimensional mathematical model describing the folding process as a free energy minimization problems is NP-hard. This means that the problem belongs to a large set of computational problems, assumed to be very hard (“conditionally intractable”). Some of the possible ramifications of this results are speculated upon.
    http://www.springerlink.com/co.....117672r08/

    NP-complete
    Excerpt; Although any given solution to such a problem can be verified quickly, there is no known efficient way to locate a solution in the first place; indeed, the most notable characteristic of NP-complete problems is that no fast solution to them is known. That is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a result, the time required to solve even moderately large versions of many of these problems easily reaches into the billions or trillions of years, using any amount of computing power available today. As a consequence, determining whether or not it is possible to solve these problems quickly is one of the principal unsolved problems in computer science today.
    http://en.wikipedia.org/wiki/NP-complete
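    The “billions or trillions of years” figure in the excerpt is easy to reproduce with a back-of-envelope sketch. The 10^9 checks-per-second rate below is an assumption chosen only for illustration:

```python
def brute_force_years(n, checks_per_second=1e9):
    # Time to exhaustively check all 2^n candidate solutions of an
    # NP-complete instance of size n, at an assumed 10^9 checks/second.
    seconds = 2.0 ** n / checks_per_second
    return seconds / (3600 * 24 * 365)

for n in (40, 60, 80, 100):
    print(f"n = {n:3d}: ~{brute_force_years(n):.1e} years")
```

    Even modest instance sizes push the worst-case brute-force time past the age of the universe, which is what the Wikipedia excerpt is describing.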

  53.

    @bornagain77 (#50)

    You wrote;

    I like this tidbit you had veilsofmaya:
    (perhaps a strange reflection of the possibility that the machinery of DNA itself may actually function using quantum search algorithms [3]).

    It would not surprise me in the least if this were true since I hold the Designer invented/invents quantum mechanics as well.

    But then you wrote:

    But as for you imagining protein folding will be greatly enhanced by “real world” quantum computing:

    It’s unclear why a designer would use something that is a figment of our imagination and therefore could not be used. Are you suggesting that quantum computing *does* work for some hypothetical designer, but *we* cannot use it?

    If not, please explain the difference.

    In other words, you seem to be merely picking and choosing the bits and pieces you happen to “like” while ignoring the rest.

  54. veils, my hunch is that our ability to search may be greatly enhanced by quantum computing, but that our ability to solve “conditionally intractable” (NP-complete) problems mathematically, such as that found with protein folding, will not be so greatly enhanced. Partly this is because we witness “searches” in DNA that are truly enviable, such as this following “search” problem:

    Quantum Dots Spotlight DNA-Repair Proteins in Motion – March 2010
    Excerpt: “How this system works is an important unanswered question in this field,” he said. “It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It’s akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour.” Dr. Bennett Van Houten – of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time.,, These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
    http://www.sciencedaily.com/re.....123522.htm

    ,,, Whereas there are no instances of completely novel functional proteins being “solved” with enviable speed in life somewhere that I can point to. The somewhat sluggish response of the immune system to develop effective antibodies being a prime case in point. Thus I have no reason to presuppose that such a mechanism for calculating novel functional “folded” proteins, from completely new sequences, exists undiscovered somewhere in the “unexplored” quantum world.

  55.

    @Bornagain77 (#53)

    *Sigh*

    You wrote:

    veils: to repeat, this is not the thread to rehash your MWI metaphysics,

    Again, I’m not arguing for MWI in this thread. I’m arguing that the comments you’ve made reveal your ignorance of it. It’s another example of where you’ve reached conclusions on a subject that you do not appear to understand.

    As for the paper you cited, did you notice the article you referenced was originally published in 1993?

    Since Grover did not publish his search algorithm until 1996, it would indeed have been considered intractable at the time.

    Furthermore, if protein folding really is NP-complete, it appears to be a good fit for quantum computing.

    From your Wikipedia reference…

    In computational complexity theory, the complexity class NP-complete (abbreviated NP-C or NPC), is a class of problems having two properties:

    - It is in the set of NP (nondeterministic polynomial time) problems: Any given solution to the problem can be verified quickly (in polynomial time).

    […]

    Although any given solution to such a problem can be verified quickly, there is no known efficient way to locate a solution in the first place; indeed, the most notable characteristic of NP-complete problems is that no fast solution to them is known.

    Given the classification of NP-complete, it sounds like the hard part is the search, rather than the verification, as I alluded to earlier. If this is the case, we wouldn’t need to use quantum computing to solve the second part of the problem. While this isn’t clear from the article summary you linked to, it certainly isn’t excluded.

    It also seems to fit the description found here


    The most well-known example of this is quantum database search, which can be solved by Grover’s algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

    Consider a problem that has these four properties:
    - The only way to solve it is to guess answers repeatedly and check them,
    - There are n possible answers to check,
    - Every possible answer takes the same amount of time to check, and
    - There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
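    A problem with those four properties admits nothing better, classically, than blind guess-and-check. A toy sketch of such a search (illustrative only; not an actual protein-folding search):

```python
import random

def blind_search(check, candidates, seed=0):
    """Guess-and-check over a space with no exploitable structure:
    a random order is as good as any other."""
    pool = list(candidates)
    random.Random(seed).shuffle(pool)
    for queries, candidate in enumerate(pool, start=1):
        if check(candidate):
            return candidate, queries
    return None, len(pool)

# Toy example: one "functional" candidate hidden among N featureless ones.
N = 1000
target = 123  # arbitrary; the searcher has no clue where it is
found, queries = blind_search(lambda x: x == target, range(N))
print(found, queries)  # classically averages ~N/2 queries; Grover: ~sqrt(N)
```

    Grover’s algorithm attacks exactly this shape of problem, replacing the ~N/2 average query count with roughly √N oracle queries.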

    Also, from your quote:

    That is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows.

    This is exactly what I was referring to earlier, when I wrote…


    Specifically, as a search space grows staggeringly large, the number of operations necessary to search it with classical computing algorithms also grows staggeringly large. But this is not the case with Grover’s algorithm, as the number of operations increases at a mere fraction of the rate.

    This appears to be yet another example of claiming a link or research paper supports your conclusion when, on closer inspection, we find it does not.

    Out of curiosity, is this usually the way you reach all of your conclusions?

  56. veils you state:

    “Again, I’m not arguing for MWI in this thread. I’m arguing that the comments you’ve made reveal your ignorance of it.”

    I understand that no matter what anybody says, you will believe in MWI!

    As for protein folding, I hold that quantum computing will not greatly impact it, whereas you do. You have no example to show it being done, only a belief that it will be done. Fine, that is the nature of science: go out and prove that it can be done, and then you will have the hard proof to overcome what I feel are very reasonable objections. But calling me ignorant while providing no concrete example of your conjecture is not scoring any points with me.

  57. Ah, the famous Szostak paper!

    I would like to comment extensively on it, but I probably will not have the time today. I will as soon as possible.

    If you have found errors in the Szostak paper, why not send a letter to Nature?

    Or at least publish a rebuttal in Bio-Complexity. ID proponents have a number of high profile outlets suitable for challenging mainstream papers. A forum thread probably isn’t the best choice. It will simply be buried in a few days.

  58.

    @bornagain77 (#55)

    You wrote:

    but that our ability to solve “conditionally intractable” (NP complete) problems mathematically, such as that found with protein folding will not be so greatly enhanced.

    Even if this were the case (which I’m not suggesting), “not greatly enhanced” does not tell us what enhancement it would provide, or whether it would allow completion in the amount of time granted or available to “evolutionists,” however that happens to be defined. You seem to be jumping to conclusions based on vague assumptions.

    Partly this is because we witness “searches” in DNA that are truly enviable, such as this following “search” problem:

    I’m not following you here. That DNA can be repaired quickly doesn’t entail that quantum algorithms cannot solve a specific NP-complete search problem in the amount of time granted or available to “evolutionists.” This does not seem to follow.

    Whereas there are no instances of completely novel functional proteins being “solved” with enviable speed in life somewhere that I can point to.

    Born,

    I’m still confused. Doesn’t this repair mechanism fit the description of an NP-complete problem? You seem to be suggesting that quantum search algorithms do work, and even imply that DNA repair is an example of just such a quantum search algorithm in practice, yet you’re claiming we could not use it to search NP-complete problem spaces, such as protein folding.

    Again, are quantum search algorithms “supernatural”, and therefore unable to be utilized unless we’re “the designer?”

  59. So since humans have 80% different proteins than chimps, how in the world did this occur with a system so dead set against variance, Petrushka?

    What do you mean by different? Different proteins, or variations on the same proteins?

  60. veils;

    The Limits of Quantum Computers – 2007
    Excerpt: Second, I’ll show that in the “black box” or “oracle” model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”:
    http://www.springerlink.com/co.....330115207/

    Protein folding is an NP-complete problem!

    Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009
    Section 4, “Protein Folding: Computational Complexity”:
    4.1 NP-completeness: from 10^300 to 2 Amino Acid Types
    4.2 NP-completeness: Protein Folding in Ad-Hoc Models
    4.3 NP-completeness: Protein Folding in the HP-Model
    http://www.cs.brown.edu/~sorin.....survey.pdf
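    For what it’s worth, the two citations are consistent with each other: Grover’s quadratic speedup turns a brute-force cost of 2^n into roughly 2^(n/2) oracle queries, which is still exponential, so an NP-complete search remains intractable at large n. A small illustration:

```python
def classical_ops(n):
    # Brute force over all 2^n candidate solutions.
    return 2.0 ** n

def grover_ops(n):
    # Grover's quadratic speedup: sqrt(2^n) = 2^(n/2) oracle queries.
    return 2.0 ** (n / 2)

# The speedup is real, but the quantum cost is still exponential in n:
for n in (100, 200, 300):
    print(f"n = {n}: classical ~2^{n}, Grover ~2^{n // 2}")
```

    A quadratic speedup on an exponential search only halves the exponent; it does not make the problem polynomial.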

    Thus veils, with all “vagueness” removed, you have nothing to make your case with.

  61. Petrushka, can you cite any studies where the proteins within an organism have accepted 3 or 4 amino acid substitutions/mutations that were not found to be detrimental to the original protein found in the parent species?

  62. Did you answer my question about differences?

  63. Concerning your original comment on changes and chimps: I’m wondering if you have done the math to determine whether the changes between humans and apes exceed reasonable and expected rates of change.

    I don’t recall Behe making that argument. If he makes that argument, I’d appreciate a reference, including page number.

  64. William J. Murray (#41):

    I absolutely agree with your point. Indeed, I have made the same point many times here. That’s why I have always believed that ID is a powerful falsification of both neo-darwinism and strong AI, the two false theories on which contemporary materialistic reductionist scientism is based.

    My point is very simple: conscious intelligent processes make the difference.

    Original CSI (or, more simply, original dFSCI) can be produced only with the help of conscious intelligent processes, even if the process certainly also implies algorithmic computations.

    Conscious intelligent agency can explain CSI, while non-conscious processes cannot (falsification of neo-darwinism).

    As conscious intelligent agency can easily produce CSI, while no non-conscious algorithmic system can, conscious agency is essentially different from an algorithmic system (falsification of strong AI).

    It’s as simple as that. Conscious intelligent representations make all the difference.

  65. Petrushka (#58):

    The problem is not so much in finding “errors” in the paper, but in its conclusions (or in the meaning that many darwinists, probably including you, attribute to the paper itself). In that sense, I probably disagree with the final interpretation of many of the papers (about evolutionary causal mechanisms, or with implications about them) published not only in Nature, but in the whole scientific literature.

    Does that seem strange to you? It isn’t. As you are not a fool, you have certainly understood by now that, as a convinced (and intellectually fulfilled :) ) IDist, I believe in a completely different paradigm (design) regarding the origin of biological information, and I do believe that the current paradigm (neo-darwinism) is flatly wrong.

    Do you think I should write a letter to Nature to inform them of the existence of ID theory? :)

    Maybe I will publish something in Bio-Complexity. We will see.

    In the meantime, I have to disagree with you that “a forum thread probably isn’t the best choice” for intellectual confrontation about these subjects. This blog is a very good place for that (and it is very much read all around, by both parties…). The fact that you (and many other darwinists, some of them certainly “high profile”) come here so often seems to disprove your point.

    More on the Szostak paper soon…

  66. Petrushka you state:

    “I’m wondering if you have done the math to determine if the changes between humans and apes exceeds reasonable and expected rates of change.”,,,

    Waiting Longer for Two Mutations – Michael J. Behe
    Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’’ (Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘‘is 5 million times larger than the calculation we have just given’’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model.
    http://www.discovery.org/a/9461

    Simulating evolution by gene duplication of protein features that require multiple amino acid residues: Michael J. Behe and David W. Snoke
    Excerpt: We conclude that, in general, to be fixed in 10^8 generations, the production of novel protein features that require the participation of two or more amino acid residues simply by multiple point mutations in duplicated genes would entail population sizes of no less than 10^9.,,,The fact that very large population sizes—10^9 or greater—are required to build even a minimal [multi-residue] feature requiring two nucleotide alterations within 10^8 generations by the processes described in our model, and that enormous population sizes are required for more complex features or shorter times, seems to indicate that the mechanism of gene duplication and point mutation alone would be ineffective, at least for multicellular diploid species, because few multicellular species reach the required population sizes.
    http://www.pubmedcentral.nih.g.....id=2286568

    Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors – Doug Axe
    Excerpt: Contrary to the prevalent view, then, enzyme function places severe constraints on residue identities at positions showing evolutionary variability, and at exterior non-active-site positions, in particular.
    http://nsmserver2.fullerton.ed.....lution.pdf

    Severe Limits to Darwinian Evolution: – Michael Behe – Oct. 2009
    Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
    http://www.evolutionnews.org/2......html#more

    Chimps are not like humans – May 2004
    Excerpt: the International Chimpanzee Chromosome 22 Consortium reports that 83% of chimpanzee chromosome 22 proteins are different from their human counterparts,,, The results reported this week showed that “83% of the genes have changed between the human and the chimpanzee—only 17% are identical—so that means that the impression that comes from the 1.2% [sequence] difference is [misleading]. In the case of protein structures, it has a big effect,” Sakaki said. http://cmbi.bjmu.edu.cn/news/0405/119.htm

    Eighty percent of proteins are different between humans and chimpanzees; Gene; Volume 346, 14 February 2005:
    Excerpt: The early genome comparison by DNA hybridization techniques suggested a nucleotide difference of 1-2%. Recently, direct nucleotide sequencing confirmed this estimate. These findings generated the common belief that the human is extremely close to the chimpanzee at the genetic level. However, if one looks at proteins, which are mainly responsible for phenotypic differences, the picture is quite different, and about 80% of proteins are different between the two species.
    http://www.ncbi.nlm.nih.gov/pubmed/15716009

    A review of The Edge of Evolution: The Search for the Limits of Darwinism
    The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have “invented” little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-mic.....-evolution

    Richard Dawkins’ The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe – Oct. 2009
    Excerpt: The rarity of chloroquine resistance is not in question. In fact, Behe’s statistic that it occurs only once in every 10^20 cases was derived from public health statistical data, published by an authority in the Journal of Clinical Investigation. The extreme rareness of chloroquine resistance is not a negotiable data point; it is an observed fact.
    http://www.evolutionnews.org/2.....est_s.html

  67. BA77:

    Reading through your rather lengthy post, I don’t see a response to my question:

    Does the molecular distance between humans and apes exceed Behe’s Edge? Does Behe claim there are any gaps in mammalian evolution that exceed his Edge?

    The fact that 80 percent of proteins have changed does not address the question of whether the changes have exceeded expected rates.

    I’m wondering whether you think that 80 percent different means humans have that many unique proteins, or that humans have mostly the same proteins, but 80 percent carry one or more point mutations.

  68. Petrushka, you ask:
    Does the molecular distance between humans and apes exceed Behe’s Edge?

    In my opinion, yes, and the evidence greatly favors that position; but then, even one unique gene or protein would greatly exceed what Darwinian processes are capable of:

    Does Behe claim there are any gaps in mammalian evolution that exceed his Edge?

    As far as I know, Dr. Behe says the “tentative” edge of Darwinian evolution for a vertebrate lies somewhere between the level of species and class.
    http://creation.com/review-mic.....-evolution

  69. Petrushka, it is interesting to note that a completely different method of investigation, years earlier, had arrived at almost exactly the same conclusion as Dr. Behe:

    So Michael Behe comes to the grand conclusion to his survey: ‘Somewhere between the level of vertebrate species and class lies the organismal edge of Darwinian evolution’ (p. 201). A diagram illustrates this (p. 218), which he reproduces on the page facing the title page of the book (figure 2).

    Interestingly, the creationist study of baraminology (defining the limits of the original created kinds, or baramins, of Genesis 1) has arrived at conclusions consistent with Behe’s proposition, using a different approach based on hybridization criteria, where possible, combined with morphology, etc.4 In fact, in 1976 creationist biologist Frank Marsh proposed that the created kinds (baramins) were often at the level of genus or family, although sometimes at the level of order.5
    http://creation.com/review-mic.....-evolution

  70. As far as I know, Dr. Behe says the “tentative” edge of Darwinian evolution for a vertebrate lies somewhere between the level of species and class.

    That covers some territory, since Mammalia is a class. It doesn’t really answer the question of whether the molecular distance between humans and apes is over the Edge.

    Can you cite a protein unique to humans that could not be reached by evolution? I ask because I’d think this would be better known, if Behe has mentioned it.

  71. Petrushka, Since you never cite anything I ever ask you for why should I do all you research for you?

  72. Petrushka, Since you never cite anything I ever ask you for why should I do all you research for you?

    Excuse me, but I cited a Nobel Prize winner on the question of whether random protein sequences could be functional. I’ve been promised a rebuttal.

    Here’s an article on protein evolution:

    http://mbe.oxfordjournals.org/cgi/reprint/17/4/656

    You brought up the question of chimp vs human proteins. I ask a simple question: Are the differences between apes and humans beyond Behe’s Edge, and does he cite this as evidence against evolution?

  73. Well, since you did not address my cite in rebuttal to Szostak’s paper, why should I consider it fruitful to continue with you?

  74. As to your new cite on protein evolution, Petrushka:

    Selective Constraints, Amino Acid Composition, and the Rate of Protein Evolution

    Here Is Their Methodology:
    Orthology of sequences was assessed by examination of the phylogenetic tree of each protein provided in HOVERGEN.
    http://mbe.oxfordjournals.org/cgi/reprint/17/4/656

    Thus, Petrushka, in your cite the researchers, instead of actually testing whether proteins could be “evolved” into other proteins of similar sequence, simply assumed that proteins could evolve. Yet if we check whether this “modest” assumption of protein evolvability is true, we find that laboratory tests indicate severe constraints that render it false:

    Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors – Doug Axe
    Excerpt: Contrary to the prevalent view, then, enzyme function places severe constraints on residue identities at positions showing evolutionary variability, and at exterior non-active-site positions, in particular.
    http://nsmserver2.fullerton.ed.....lution.pdf

    Severe Limits to Darwinian Evolution: – Michael Behe – Oct. 2009
    Excerpt: Four years ago David Snoke and I wrote a paper entitled “Simulating evolution by gene duplication of protein features that require multiple amino acid residues” (4) where we investigated aspects of that scenario. The bottom line is that, although by assumption of the model anything is possible, when evolution must pass through multiple neutral steps the wind goes out of Darwinian sails, and a drifting voyage can take a very, very long time indeed. But don’t just take my word for it — listen to Professor Thornton (1):

    To restore the ancestral conformation by reversing group X, the restrictive effect of the substitutions in group W must first be reversed, as must group Y. Reversal to w and y in the absence of x, however, does nothing to enhance the ancestral function; in most contexts, reversing these mutations substantially impairs both the ancestral and derived functions. Furthermore, the permissive effect of reversing four of the mutations in group W requires pairs of substitutions at interacting sites. Selection for the ancestral function would therefore not be sufficient to drive AncGR2 back to the ancestral states of w and x, because passage through deleterious and/or neutral intermediates would be required; the probability of each required substitution would be low, and the probability of all in combination would be virtually zero. (my emphasis)

    Let’s quote that last sentence again, with emphasis: “Selection for the ancestral function would therefore not be sufficient … because passage through deleterious and/or neutral intermediates would be required; the probability of each required substitution would be low, and the probability of all in combination would be virtually zero.” If Thornton himself discounts the power of genetic drift when it suits him, why shouldn’t I?,,,,The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
    http://www.evolutionnews.org/2......html#more

    Petrushka, my question is why the variability of proteins was not rigorously tested in the lab before they drew the conclusions in their paper. You would think that checking whether their assumptions were true would come first and foremost, before making such grand claims from mere sequence similarities!

  75. Why would anyone be discussing reverse evolution? No mainstream biologist thinks that selection can force a specific change, much less a specific sequence of changes.

  76. Petrushka, I’ve cited two (actually three) papers that severely question whether proteins can undergo even modest changes in their amino acid sequences while retaining function. Please cite lab work to the contrary or admit that evolution has not addressed this very fundamental issue.

  77. further notes:

    Dollo’s law, the symmetry of time, and the edge of evolution – Michael Behe – Oct 2009
    Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,,
    A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses.
    http://www.evolutionnews.org/2.....f_tim.html

    “A problem with the evolution of proteins having new shapes is that proteins are highly constrained, and producing a functional protein from a functional protein having a significantly different shape would typically require many mutations of the gene producing the protein. All the proteins produced during this transition would not be functional, that is, they would not be beneficial to the organism, or possibly they would still have their original function but not confer any advantage to the organism. It turns out that this scenario has severe mathematical problems that call the theory of evolution into question. Unless these problems can be overcome, the theory of evolution is in trouble.”
    Problems in Protein Evolution:
    http://www.cs.unc.edu/~plaisted/ce/blocked.html

    “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed – along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)

    Interestingly, It is found that many “errors”, that do occur in protein sequences, are found to be “designed errors”.

    Cells Defend Themselves from Viruses, Bacteria With Armor of Protein Errors – Nov. 2009
    Excerpt: These “regulated errors” comprise a novel non-genetic mechanism by which cells can rapidly make important proteins more resistant to attack when stressed,
    http://www.sciencedaily.com/re.....134701.htm

  78. @Bornagain77 (#61)

    It seems you’ve done the very same thing, yet again.

    You’ve based your conclusions on a single paragraph summary which, even then, you’ve misinterpreted. For example,

    Second I’ll show that in the “black box” or “oracle” model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”:

    Note the qualifier here: “black box” or “oracle” model that we know how to analyze

    This is because, when the author says NP-complete problems cannot be solved, he means solved in a way that would settle whether P = NP, which would solve all NP-complete problems. This is still an open question, which is very significant.

    From the same author…

    If we interpret the space of 2^n possible assignments to a Boolean formula φ as a “database,” and the satisfying assignments of φ as “marked items,” then Bennett et al.’s result says that any quantum algorithm needs at least ~2^(n/2) steps to find a satisfying assignment of φ with high probability, unless the algorithm exploits the structure of φ in a nontrivial way. In other words, there is no “brute-force” quantum algorithm to solve NP-complete problems in polynomial time, just as there is no brute-force classical algorithm.

    http://www.scottaaronson.com/papers/npcomplete.pdf
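    The quadratic (but still exponential) gap described in the quote can be made concrete with simple arithmetic (a toy illustration; the instance size n = 40 is an arbitrary assumed value):

```python
import math

# Toy illustration of the brute-force search bounds quoted above:
# a classical exhaustive search over 2^n assignments needs ~2^n evaluations,
# while Grover-style search needs ~2^(n/2) oracle queries (still exponential).
n = 40                                  # assumed formula size, arbitrary
classical_queries = 2 ** n              # 2^40
grover_queries = math.isqrt(2 ** n)     # floor(2^(n/2)) = 2^20
print(classical_queries, grover_queries)
# The speed-up is quadratic, not exponential: the problem stays intractable
# for large n, which is the point of the quoted passage.
```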

    However, you earlier provided an example regarding the repair of DNA, which you seemed to imply somehow uses quantum mechanics and appears to be NP-hard. In fact, if we discover how DNA repair and protein folding work, it may provide a way to solve all NP-complete problems without having to exploit specific properties of the problem space.

    Of course, the real question is: is it possible to exploit the problem space of protein folding using quantum computing algorithms?

    Finding the optimal solution to a complex optimisation problem is of great importance in practically all fields of science, technology, technical design and econometrics. We demonstrate that a modified Grover’s quantum algorithm can be applied to real problems of finding a global minimum using modest numbers of quantum bits. Calculations of the global minimum of simple test functions and Lennard-Jones clusters have been carried out on a quantum computer simulator using a modified Grover’s algorithm. The number of function evaluations N is reduced from O(N) in classical simulation to O(N^(1/2)) in quantum simulation. We also show how Grover’s quantum algorithm can be combined with the classical Pivot method for global optimisation to treat larger systems.

    You can read the entire paper, not just the summary, here: http://arxiv.org/pdf/0911.1242

    Of course, all of this is completely meaningless unless you actually define a specific timeframe within which the computation needs to be completed. Merely saying…

    Thus it seems Alex73 the maximum limit of computing that is achievable by the most “perfect” ideal supercomputer in the physical universe will not be able to surpass the threshold that has already been granted to evolutionists for resources…

    is incredibly vague. It’s unclear whether you are referring to 1 billion years, 2 billion years, etc. No conclusion can be reached when the very definition of the claim is ambiguous.

    Nor have you shown a revised timeframe for sequencing proteins after adjusting for even a modest speed-up, should only that be possible, for us to compare to.

    Then you quote…

    Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe,

    But how does this convert to quantum computing? You provide no reference or way to translate between the two. It’s vague and does not support your conclusion.

    Again, I’m not arguing that it is true. Nor am I suggesting that either outcome has any significance one way or the other. Instead, I’m just noting that, in reaching your conclusion, you made many assumptions which were not evident and were based on misinterpretations of the papers themselves, etc.

  79. @bornagain77 (#57)

    I wrote:

    “Again, I’m not arguing for MWI in this thread. I’m arguing that the comments you’ve made reveal your ignorance of it.”

    You replied.

    I understand that no matter what anybody says you will believe in MWI no matter what!

    Even if this were true, which is not the case, how would knowing it help you reach a conclusion about a theory that you do not appear to understand?

    This simply does not follow and appears to be a non-answer.

  80. The Szostak paper and its follow-up.

    The original paper was published in Nature in 2001. It has rapidly become one of the “champions” of darwinist arguments against ID arguments about the protein search space (arguments which, while officially ignored by mainstream biology, seem able to “inspire” darwinist experimental research rather strongly, nearly as much as Behe’s flagellum).

    I will try to keep the discussion as simple as possible.

    1) What did they do?
    Szostak and his co-workers carried out a rather complex experiment aimed at exploring the emergence of functional sequences from a random library. That is clear from the start:

    The frequency of occurrence of functional proteins in collections of random sequences is an important constraint on models of the evolution of biological proteins. Here we have experimentally determined this frequency by isolating proteins with a specific function from a large random-sequence library of known size.

    So, some details about their procedure:

    a) They start from a random pool of about 6 x 10^12 random proteins of 80 AAs

    b) They perform 8 rounds of selection by immobilized ATP and then amplification of the selected molecules by PCR. IOW, they select and enrich those sequences which in some way, even minimal, bind to ATP. At this point, they have 6.2% binding to ATP vs the initial 0.1%.

    c) They clone 24 sequences, and find that they are derived from 4 original sequences (by analysis of the homologies). Please note that at this point some minimal differences already exist between individual sequences of the same family, but that’s not really important. At this point, ATP binding is still extremely low, and the authors comment:

    One possible explanation for this low level of ATP-binding is conformational heterogeneity, possibly reflecting inefficient folding of these primordial protein sequences.

    d) They carry on 3 more rounds of mutagenic amplification + selection (by the usual ATP). Their words:

    In an effort to increase the proportion of these proteins that fold into an ATP-binding conformation, we mutagenized the library and carried out further rounds of in vitro selection and amplification. Three consecutive rounds with mutagenic PCR amplification were performed with an average mutagenic rate of 3.7% per amino acid for each round.

    IOW, they have now implemented directed evolution by random mutation + intelligent selection.

    e) They go back to 6 more rounds of non mutagenic selection, up to round 18. At this point, 34% of the library binds ATP, and all the sequences are derived from one single original family (family B from point c).

    Please note that at this point the sequences are rather different one from the other: directed evolution has accomplished its task. In their words:

    Comparing the round 18 sequences with the ancestral sequence showed that four amino-acid substitutions had become predominant in the selected population (present more than 39 times in 56 sequences, Fig. 3b), and that 16 other substitutions had also been selectively enriched (present more than 4 times in 56 sequences, Fig. 3b). In addition, each clone contained a variable number of other substitutions. The selectively enriched substitutions are distributed over the 62 amino-terminal amino acids of the original 80-amino-acid random region, suggesting that amino acids throughout this region are contributing to the formation of a folded structure, at least in the complex with ATP. The substitutions in each of the assayed clones improve ATP-binding relative to the ancestral sequence.

    IOW, directed RV + intelligent selection has found many important new AAs which essentially contribute to the folding of these new sequences and to their binding to ATP.

    f) They choose 8 random sequences from round 18, choose the 4 with the highest ATP binding, expand those 4 sequences in E. coli, measure the dissociation constants (Kd) for ATP, and choose the one with the highest binding (Kd = 100 nM).

    g) This is protein 18-19, the final ATP binding protein (for the moment).
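    The selection-and-amplification loop of steps (a)–(e) can be caricatured in a few lines of code (a deliberately crude toy model, not the actual chemistry: the bit-string “sequences”, the match-to-target binding score, and all the pool sizes and mutation rates are invented purely for illustration):

```python
import random

random.seed(0)

TARGET = [1] * 10          # hypothetical "good binder" pattern (pure invention)

def binding(seq):
    """Toy binding score: fraction of positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

# (a) start from a random pool of sequences
pool = [[random.randint(0, 1) for _ in range(10)] for _ in range(500)]

for rnd in range(10):
    # (b, e) selection: keep only the better-binding half of the pool
    pool.sort(key=binding, reverse=True)
    survivors = pool[: len(pool) // 2]
    # (d) mutagenic amplification: copy survivors with occasional point mutations
    pool = []
    for seq in survivors:
        for _ in range(2):
            child = [b ^ (random.random() < 0.05) for b in seq]
            pool.append(child)

best = max(binding(s) for s in pool)
print(f"best toy binding score after selection: {best:.2f}")
```

    Even this cartoon version makes gpuccio’s point visible: the improvement comes from an externally imposed selection criterion applied round after round, i.e. from the design of the protocol, not from anything the random pool does on its own.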

    A few comments at this point. Even ignoring some possible technical details about the initial random library and the first rounds of selection and amplification, the greatest flaws in this procedure are the following:

    - What is being selected, and how? In the beginning, it is only any possible binding to ATP, even the weakest. The selection method is based on that: binding of the sequence to immobilized ATP. There is no reference here to specific folding, or function of any other kind. And the selection takes place through a completely artificial “measure” of that binding: the molecules are incubated with immobilized ATP, then washed, and then the bound molecules are eluted.

    As BA has already pointed out in his post #40: “Having a 1 in 10^12 protein sequence “stick to” the universal energy molecule of ATP is not surprising, in fact I am surprised more sequences do not “stick to” the universal ATP.”

    And that’s perfectly correct. Some generic and unspecific biochemical binding to some molecule is absolutely what we can expect in a library of 10^12 molecules, if we look for that binding with some very sensitive selection method. This is simply a very trivial biochemical result, and has nothing to do with function.

    - What was present in the initial library?
    The answer is simple: possibly 4 molecules which loosely “stuck” to ATP, due to some favourable limited biochemical part of their sequence. Of those 4, only one is the ancestor of the final 18-19 ATP binding protein.

    - How is the final protein obtained?
    Through traditional protein engineering: rounds of artificial mutation and specific selection, using as seeds the 4 families derived from the 4 original sequences which loosely “stuck” to ATP.

    - What is the result?
    Protein 18-19, a protein with some folding and a strong binding to ATP. Both (the folding and the strong binding) are the result of the engineering process, which “found” the important AA positions contributing to folding and ATP binding, and which were not present in the original random sequences.

    - Is the result “functional”?
    No, except for the fact that it has some 3D structure (probably very gross), and that it binds ATP. That’s not exactly a “function”, at best a biochemical property. (More on that in next post).

    So, the important point is: this paper, interesting as it may be, does not explore the presence of functional sequences in a random library. On the contrary, at best it explores the presence of random weak binding to ATP in a random library, which, by directed protein engineering and selection, can be incorporated in more structured 3D proteins and made stronger, up to significant Kd values.
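    Using only the numbers quoted earlier in this post (4 ancestral “stickers” found in a library of about 6 x 10^12 random 80-mers), the implied frequency of weak ATP binders is easy to compute (a back-of-envelope sketch):

```python
# Frequency of initial weak ATP binders, from the figures quoted above:
# 4 ancestral sequences found in a library of ~6 x 10^12 random 80-mers.
library_size = 6e12
initial_binders = 4
frequency = initial_binders / library_size      # ~6.7e-13
print(f"roughly 1 weak binder per {1 / frequency:.1e} random sequences")
```

    That is roughly 1 in 1.5 x 10^12, consistent with BA’s “1 in 10^12” remark quoted above.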

    One final observation is that none of the selection procedures used in the experiment (based on ATP binding) has anything to do with “Natural Selection”. Indeed, neither the original weak interaction with ATP, nor the final strong binding, can in any way be selected in a living context: the first is absolutely too weak and trivial to mean anything unless selectively measured in a lab, and the second is definitely harmful (more on that).

    So, it seems that the only “function” of protein 18-19 is that it can be used as a propaganda tool against ID. In that sense, the paper underwent very positive selection in the academic world, but I doubt that living cells could really benefit from it :) .

    I will go on with the follow-up to that study in my next post.

  81. gpuccio, thanks for the detailed look; I had no idea that the test was that blatantly biased. I look forward to your follow-up.

  82. I wrote:

    You can read the entire paper [on exploiting the problem space of protein folding], not just the summary, here: http://arxiv.org/pdf/0911.1242

    Looks like I accidentally posted a link to the wrong paper.

    The paper I was referring to is actually located here.

  83. 2) The follow-up

    The story of protein 18-19 does not end here. There are follow-ups. As one of the very few known artificial proteins with some characteristic (I will not say “function”), it has been extensively studied. I want to point here to 3 important aspects of that follow-up:

    a) Further optimization of the structure

    b) The artificial search for some “possible minimal function”

    c) The demonstration of in vivo negative function

    a) Further optimization of the structure: How protein 18-19 became protein DX:

    “Structural insights into the evolution of a non-biological protein: importance of surface residues in protein fold optimization.” Smith MD, Rosenow MA, Wang M, Allen JP, Szostak JW, Chaput JC, PLoS ONE. 2007 May 23;2(5):e467

    Here we discuss two mutations that emerged from an in vitro selection experiment designed to improve the folding stability of a non-biological ATP binding protein. These mutations alter two solvent accessible residues, and dramatically enhance the expression, solubility, thermal stability, and ligand binding affinity of the protein.

    Unlike many naturally occurring proteins, protein 18-19 requires high concentrations of free ligand in order to remain stably folded and soluble. In an attempt to overcome this limitation and to evolve a non-biological protein toward a folded state that more closely resembles the ligand-independent folded state of many natural proteins, we designed an in vitro selection experiment using mRNA display to isolate variants of protein 18-19 that remained bound to an ATP agarose affinity resin in the presence of increasing concentrations of chemical denaturant.

    Is that clear enough? Protein 18-19, the “final” result of Szostak’s initial attempt at designed protein engineering, is indeed a very poorly folding and unstable molecule. So another round of designed protein engineering is needed to transform it into something more stable, through two more critical amino acid substitutions. This “final” product (I hope… the story is becoming a little bit boring) has been called protein DX. It will be our hero from now on.

    b) The artificial search for some “possible minimal function”:

    And yet even darwinists, deep in their hearts, seem to realize that their toy molecule still has no function, and that simple ATP binding does not offer much hope of integration into a living context. But there is always hope. A recent paper comes to the rescue:

    “A Synthetic Protein Selected for Ligand Binding Affinity Mediates ATP Hydrolysis”

    Chad R. Simmons†,‡, Joshua M. Stomel†,§, Michael D. McConnell†,‡, Daniel A. Smith†,‡,
    Jennifer L. Watkins†,‡, James P. Allen‡, and John C. Chaput†,‡,*

    http://www.acschemicalbiology.org

    ABSTRACT How primitive enzymes emerged from a primordial pool remains a fundamental unanswered question with important practical implications in synthetic biology. Here we show that a de novo evolved ATP binding protein, selected solely on the basis of its ability to bind ATP, mediates the regiospecific hydrolysis of ATP to ADP when crystallized with 1 equiv of ATP. Structural insights into this reaction were obtained by growing protein crystals under saturating ATP conditions. The resulting crystal structure refined to 1.8 Å resolution reveals that this man-made protein binds ATP in an unusual bent conformation that is metal-independent and held in place by a key bridging water molecule. Removal of this interaction using a null mutant results in a variant that binds ATP in a normal linear geometry and is incapable of ATP hydrolysis. Biochemical analysis, including high-resolution mass spectrometry performed on dissolved protein crystals, confirms that the reaction is accelerated in the crystalline environment. This observation suggests that proteins with weak chemical reactivity can emerge from high affinity ligand binding sites and that constrained ligand-binding geometries could have helped to facilitate the emergence of early protein enzymes.

    IOW, take protein DX, pack it with its strong ligand, ATP, and some ATP becomes ADP, probably as a consequence of the bent conformation of the binding. Indeed, the only purpose of this very artificial reasoning seems to be to lend some strength to the anti-ID “tool”. The rhetoric against ID is obvious in the ending of the paper:

    Our characterization of a synthetic protein that derives entirely from random-sequence origin demonstrates that design-free methods can be used to generate proteins with novel functions. We show that a protein selected solely on the basis of its ability to bind ATP emerged with the ability to hydrolyze ATP. This achievement suggests that relatively undemanding catalytic reactions may not have been that much harder for primordial proteins to attain than ligand binding.

    c) The uncomfortable truth: The demonstration of in vivo negative function

    Many thanks to BA for pointing to this recent paper:

    A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells

    Joshua M. Stomel, James W. Wilson, Megan A. León, Phillip Stafford, John C. Chaput

    Abstract

    “Recent advances in de novo protein evolution have made it possible to create synthetic proteins from unbiased libraries that fold into stable tertiary structures with predefined functions. However, it is not known whether such proteins will be functional when expressed inside living cells or how a host organism would respond to an encounter with a non-biological protein. Here, we examine the physiology and morphology of Escherichia coli cells engineered to express a synthetic ATP-binding protein evolved entirely from non-biological origins. We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division. We now describe a detailed investigation into the synthetic biology of this man-made protein in a living bacterial organism, and the effect that this protein has on normal cell physiology.”

    And this, like many truths, is really simple. What can a molecule capable only of strongly binding ATP do in a living cell? Damage. It just subtracts ATP from the living environment. Nothing else.

    And I mean real damage.

    In summary, the current study provides the first in-depth analysis of a non-biological protein in a living host organism. We found that a synthetic ATP-binding protein from non-natural origins functions inside a living cell by disrupting the normal energetic balance within the cell. This disruption cascades into a series of events that limit reproductive competency by inhibiting cell division. This discovery provides a paradigm where synthetic proteins could be used as novel therapeutics, including next generation antibiotics, and provides new opportunities for probing many basic and applied questions in cellular biology.

    What was that darwinist argument again, that evolution has no target, that its target is “any possible function”? Well, not this one, it seems…

    Ah, and what about the supposed hydrolytic function? No trace of it in vivo, it seems. Why am I not surprised :) ?

    Maybe the cell forgot to crystallize the new molecule with 1 equivalent of ATP?

  84. veils, as I said yesterday after you had called me ignorant,… I do not believe protein folding to be amendable to quantum computing whereas you do. Fine that is the way science works. Instead of presenting mathematical theory to counterbalance the mathematical theory I presented what you need to do is actually fold a protein with a quantum computer to prove that it can be done. But calling me ignorant all the while presenting no concrete evidence to refute what I see to be very reasonable objections to the scenario is not scoring any points with me.

  85. gpuccio, as Paul Harvey would have said, “And that is the rest of the story!”

    Definitely filed for future reference.

  86. Petrushka (#13):

    Interesting. Too little for now, and already controversial, but interesting.

    Let’s wait and see.

  87.

    @bornagain77 (#86)

    you wrote:

    I said yesterday after you had called me ignorant,…

    Born,

    Really? I wrote:

    “Again, I’m not arguing for MWI in this thread. I’m arguing that the comments you’ve made reveal your ignorance of it.”

    Clearly, I’m not using the term ‘ignorant’ as an ad-hominem as it was used in the context of the MWI. It seems you’ve presented a red-herring. Furthermore, there are many things that I am ignorant about and I have no problem with someone calling me out on it.

    In fact, I welcome it since I do not want to reach conclusions based on things I do not understand. However, your response suggests that you’re not interested in understanding the problem, but merely in posting articles that you *think* support the position you already hold.

    For example, did you actually watch the quantum computing lecture links I provided in the Olsky thread? While I’m not expecting you to understand everything presented, it provides a firm foundation for understanding how the MWI explains the phenomena of quantum computing.

    I do not believe protein folding to be amendable to quantum computing whereas you do.

    Born, have you read any of my comments? Have I not made numerous disclaimers that I’m not arguing for or against either position?

    You’re the one making the claim. I’m illustrating that your claim does not actually seem to be supported by the papers and links you cite. And I’ve given multiple examples of why. It seems your only response is to say you do not believe protein folding to be amendable to quantum computing which is fine – but then why bother quoting and linking to papers in the first place?

    Fine that is the way science works.

    Reaching conclusions from a misinterpretation of a paper’s summary is not how science works. In fact, it’s a surefire way for science *not* to work.

    Instead of presenting mathematical theory to counterbalance the mathematical theory I presented what you need to do is actually fold a protein with a quantum computer to prove that it can be done.

    First, where did I claim it can be done right now? Second, you yourself provided an example of an NP-hard problem being solved in the form of DNA repair, and claimed the designer used quantum mechanics to do it. So, according to you, it’s already been proven.

    Again, I’m referring to your claim that protein folding is intractable using quantum computing within some vague timeframe which isn’t even defined. The claim is impossible to assess precisely because it is so ambiguous.

    Third, I’m claiming the mathematical theory you presented actually does not support your claim – you just don’t realize it because you do not understand it or have only read the summary. In fact, many links did not present any substantial mathematical theory because only the summary was provided.

    But calling me ignorant all the while presenting no concrete evidence to refute what I see to be very reasonable objections to the scenario is not scoring any points with me.

    Please see above. I’m suggesting the objections are not reasonable because they are based on misinterpretations, incomplete information, etc.

    If you want to scale back your claim to a “hunch” or a mere ‘belief’, that’s one thing. But continuing to claim the papers you referenced support your position is something quite different.

  88. Please cite lab work to the contrary or admit that evolution has not addressed this very fundamental issue.

    I cited a rather lengthy paper on protein evolution. You are the one going upstream on this, challenging Nobel Prize winners.

    I haven’t asserted that the changes to proteins in mammalian evolution are huge. You made that claim with your 80 percent claim.

    Now I’m asking you, for the third time, to give me a reference to changes during mammalian evolution that exceed Behe’s Edge. My expectation is there aren’t any. My expectation is the changes are rather small.

    Behe’s “challenge” to see specific evolutionary histories repeated in the laboratory is a monumental red herring.

    Evolution doesn’t produce changes according to demand. It doesn’t produce changes according to need. It doesn’t search for solutions. It doesn’t have goals or a direction.

  89. veils whatever I say, you do not think my opinions worthy, thus you think me ignorant of this topic as you yourself have clarified. And indeed I have never claimed to be an expert in this particular area of quantum mechanics as you seem to think I have, But you do seem to think yourself expert in this area, whereas I have my reservations as to your expertise, but none-the-less from my own personal reading of the evidence, not from your personal reading of it, a reading which I do not agree with, I do not see protein folding to be amendable to quantum computing whereas you do. You can jump up and down all you want, belittling the way I view things, but that will not change a thing with me, and until I see substantial evidence to the contrary that is the way it sits. And unlike neo-Darwinists, who refuse to see design, though the cell is crammed to the brim and overflowing with it, I am willing to change my mind once I see compelling evidence that it is possible, just like I clearly said before.

  90. Petrushka, that 80% number was not a “claim”; it was a data point from a paper I referenced. Please provide a citation that physically shows that evolution of proteins to new functions is possible, Petrushka, instead of just sequence-similarity studies by “Nobels”.

    Here is another, more recent, sequence similarity study, by the way:

    “Why Darwin was wrong about the tree of life,” New Scientist (January 21, 2009)
    Excerpt: Even among higher organisms, “the problem was that different genes told contradictory evolutionary stories,”,,,“despite the amount of data and breadth of taxa analyzed, relationships among most [animal] phyla remained unresolved.” ,,,,Carl Woese, a pioneer of evolutionary molecular systematics, observed that these problems extend well beyond the base of the tree of life: “Phylogenetic incongruities [conflicts] can be seen everywhere in the universal tree, from its root to the major branchings within and among the various taxa to the makeup of the primary groupings themselves.”,,, “We’ve just annihilated the (Darwin’s) tree of life.”
    http://www.evolutionnews.org/2......html#more

    An article in Trends in Ecology and Evolution concluded:
    “the wealth of competing morphological, as well as molecular proposals of the prevailing phylogenies of the mammalian orders would reduce the mammalian tree to an unresolved bush, the only consistent clade probably being the grouping of elephants and sea cows.”
    W. W. De Jong, “Molecules remodel the mammalian tree,” Trends in Ecology and Evolution, Vol. 13(7), pp. 270-274 (July 7, 1998).

    as for novel genes in humans, which apparently you are too busy to look up:

    First study hints at insights to come from genes unique to humans
    Excerpt: Among the approximately 23,000 genes found in human DNA, scientists currently estimate that there may be as few as 50 to 100 that have no counterparts in other species. Expand that comparison to include the primate family known as hominoids, and there may be several hundred unique genes.
    http://news.wustl.edu/news/Pages/11349.aspx

    If materialists were to actually try to account for the origination of just one completely unique gene between chimps and humans, instead of just ignoring them, they would find novel genes are exceedingly rare to “find”:

    Could Chance Arrange the Code for (Just) One Gene?
    “our minds cannot grasp such an extremely small probability as that involved in the accidental arranging of even one gene (10^-236).”
    http://www.creationsafaris.com/epoi_c10.htm
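    As a toy illustration of the arithmetic behind this kind of claim (both exponents below are simply the figures quoted in this thread, not independently verified here, and treating every physical event as an independent trial is a deliberate oversimplification):

```python
# Hedged back-of-the-envelope sketch: the exponents are the figures
# quoted in this discussion, not values verified or derived here.
log10_p_one_gene = -236   # quoted probability of arranging one gene by chance
log10_max_events = 150    # quoted upper bound on physical events since the Big Bang

# If every possible physical event were an independent trial, the expected
# number of successes would be 10^150 * 10^-236 = 10^-86.
log10_expected_hits = log10_max_events + log10_p_one_gene
print(log10_expected_hits)  # -86
```

    The point of working in log10 is only to keep the numbers legible; the conclusion is just the sum of the two quoted exponents.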

  91. there may be as few as 50 to 100 that have no counterparts in other species.

    From your source:

    “It’s also going to be very interesting for evolutionary biologists to try to develop a sense for where these humans-only genes come from,” Stahl says. “The building blocks of these genes may be present but not active in earlier species.”

    So may we assume that ID proponents are busy in their laboratories working this out? It’s an interesting question, one which you seem to have answered before the research is done.

  92. Petrushka, but were not pseudogenes just implicated to be functional? Thus, would not that totally overthrow that whole scenario at its roots? As well, I ask again: please cite the studies to counter the main paper leading this thread, as well as the papers I have cited, seeing as gpuccio has thoroughly dismantled the one you presented. Or does the fact that you now have no empirical basis for your claims not bother you? Or does scientific integrity come second to your belief in neo-Darwinism?

  93. Petrushka, while you are looking for something, anything, to counter the fact that you cannot empirically account for the origination of protein domains, nor even modest variation of existing functional proteins, and seeing as that could take you quite a long time to establish to any convincing degree, I want to touch just a little more on that whole chimp-human thing to give you something to chew on:

    Do Human and Chimpanzee DNA Indicate an Evolutionary Relationship?
    Excerpt: the authors found that only 48.6% of the whole human genome matched chimpanzee nucleotide sequences. [Only 4.8% of the human Y chromosome could be matched to chimpanzee sequences.]
    http://www.apologeticspress.org/articles/2070

    Even this more recent, evolution-friendly article found the differences in the protein-coding genes of the Y chromosome between chimps and humans to be “striking”:

    Recent Genetic Research Shows Chimps More Distant From Humans,,, – Jan. 2010
    Excerpt: “many of the stark changes between the chimp and human Y chromosomes are due to gene loss in the chimp and gene gain in the human” since “the chimp Y chromosome has only two-thirds as many distinct genes or gene families as the human Y chromosome and only 47% as many protein-coding elements as humans.”,,,, “Even more striking than the gene loss is the rearrangement of large portions of the chromosome. More than 30% of the chimp Y chromosome lacks an alignable counterpart on the human Y chromosome, and vice versa,,,”
    http://www.evolutionnews.org/2.....shows.html

    Chimp and human Y chromosomes evolving faster than expected – Jan. 2010
    Excerpt: “The results overturned the expectation that the chimp and human Y chromosomes would be highly similar. Instead, they differ remarkably in their structure and gene content.,,, The chimp Y, for example, has lost one third to one half of the human Y chromosome genes.
    http://www.physorg.com/news182605704.html

    The evolutionary scientists of the preceding paper offered some evolutionary “just so” stories of “dramatically sped up evolution” for why there are such significant differences in the Y chromosomes of chimps and humans, yet when the Y chromosome is looked at for its rate of change we find there is hardly any evidence for any change at all, much less the massive changes they are required to explain.

    CHROMOSOME STUDY STUNS EVOLUTIONISTS
    Excerpt: To their great surprise, Dorit and his associates found no nucleotide differences at all in the non-recombinant part of the Y chromosomes of the 38 men. This non-variation suggests no evolution has occurred in male ancestry.
    http://www.reasons.org/interpr.....lutionists

    As well, Petrushka, do you think kangaroos should be given equal status with chimps as our ancestors?

    Kangaroo genes close to humans
    Excerpt: Australia’s kangaroos are genetically similar to humans,,, “There are a few differences, we have a few more of this, a few less of that, but they are the same genes and a lot of them are in the same order,” ,,,”We thought they’d be completely scrambled, but they’re not. There is great chunks of the human genome which is sitting right there in the kangaroo genome,”
    http://www.reuters.com/article.....P020081118

    further note:

    More Chimp-Human Genome Problems – Cornelius Hunter
    Excerpt: Even more interesting, at these locations the chimp’s genome is quite similar to other primates – it is the human that differs from the rest, not the chimp. (human accelerated regions (HARs))
    http://darwins-god.blogspot.co.....blems.html

    Scientific American: The Banality of Evil(ution) – Cornelius Hunter – March 2010
    Excerpt: Furthermore, these typos simultaneously must have altered two other genes which overlap with HAR1. That’s right, HAR1 (human accelerated region) lies in a region of overlapping genes. Imagine typing a paragraph which contains one message when read normally and a different message when read backward. Not only must evolution have created all of biology’s genetic information, but it composed the information in overlapping prose. Someday evolutionists will figure out how.
    http://darwins-god.blogspot.co.....-evil.html

  94.

    bornagain77,

    Just fyi, I think if you put ten links, or more, in one comment, the comment is put into moderation automatically. So if you don’t want a comment moderated automatically, you should keep the number of links, per comment, below ten.

  95.

    Petrushka,

    Evolution doesn’t produce changes according to demand. It doesn’t produce changes according to need. It doesn’t search for solutions. It doesn’t have goals or a direction.

    It’s so elusive it has to be believed.

  96. Nearly ten links and not one relevant to the question of whether the differences between human and chimp proteins are beyond Behe’s Edge.

    Does Behe accept common descent of mammals?

  97. I thought I said some time ago that I have no idea how or when the protein domains arose. Could be magic. Or not.

    My questions are about evolution. Do you have evidence that specific classes, mammalia for example, contain differences in proteins that are beyond Behe’s edge?

  98. I’m somewhat confused by your train of reasoning. At some points you stress the differences between chimps and humans, and at other points you argue that proteins haven’t evolved much.

    There are two big questions at stake:

    1. Are the differences in proteins nested? Do you find sequences that don’t fit a pattern of descent?
    2. Are there protein sequences containing changes too great to be accommodated by evolution, that is, differences beyond the Edge?

  99.

    @bornagain77 (#90)

    You wrote:

    veils whatever I say, you do not think my opinions worthy, thus you think me ignorant of this topic as you yourself have clarified.

    First, you haven’t addressed any of my points. Instead, you seem to be trying to turn this into a personal attack.

    Second, you have it completely backwards. I think your claim is just that, an ‘opinion’, because the papers you reference do not appear to support your claims. Given that you do not seem to realize this, the logical conclusion is that you do not actually understand the subject at hand.

    And indeed I have never claimed to be an expert in this particular area of quantum mechanics as you seem to think I have

    And I’m pointing out that posting links and quoting papers is *not* a substitute for understanding the subject at hand. That you continue to quote and link to more papers, which also fail to support your position, implies you think otherwise.

    Again, if it’s just your opinion or a belief, then why bother quoting or linking to papers in the first place?

    But you do seem to think yourself expert in this area, whereas I have my reservations as to your expertise, but none-the-less from my own personal reading of the evidence, not from your personal reading of it, a reading which I do not agree with, I do not see protein folding to be amendable to quantum computing whereas you do.

    Born,

    Can you explain the difference between solving all NP-complete problems and solving a specific NP-complete problem using quantum algorithms? I’m asking because understanding this difference is key to interpreting the 2007 The Limits of Quantum Computing paper you cited. In addition, the link you provided to this paper only gave access to the summary. As such, it appears your “personal reading of the evidence” is limited to a few paragraphs, which you lack the necessary foundation to understand.
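    For readers following this exchange, the distinction can be made concrete with a small sketch (my toy calculation, not taken from the cited paper; it treats the sequence space as a completely unstructured search, which is exactly the assumption under dispute for real protein folding):

```python
import math

def log10_search_space(n_residues, alphabet_size=20):
    """log10 of the number of distinct sequences of length n_residues
    over the 20-letter amino-acid alphabet: n * log10(20)."""
    return n_residues * math.log10(alphabet_size)

def log10_grover_queries(n_residues, alphabet_size=20):
    """log10 of roughly (pi/4)*sqrt(N) oracle queries, the cost of
    Grover's quantum search for one marked item among N candidates."""
    return 0.5 * log10_search_space(n_residues, alphabet_size) + math.log10(math.pi / 4)

# For the 300-residue protein discussed at the top of the thread:
print(round(log10_search_space(300)))    # 390 -> the 10^390 sequence space
print(round(log10_grover_queries(300)))  # 195 -> quadratic speedup, yet still
                                         # far beyond the 10^150 event bound
```

    A generic quadratic speedup over an exponentially large space still leaves an exponential cost; only instance-specific structure (for real proteins, something like a folding funnel) could change the picture. That is the difference between “quantum computers solve all NP-complete problems” and “a quantum algorithm exploits this particular problem’s structure.”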

    When I point this out, you do not dispute this or address any of my specific points. Instead, you merely repeat the claim that I’m belittling you or that I’ve insulted you by claiming you’re stupid.

    Again, if you want to scale back your claim to a “hunch” or a mere ‘belief’, that’s one thing. But continuing to claim the papers you referenced support your position is something quite different.

    Since you’ve refused to address the issues I’ve raised and have retreated to claims of personal attacks, it’s unlikely further discussion will be fruitful.

  100.

    GP,

    Thank you for that analysis. I am reading and re-reading the papers now. Much appreciated.

    Has Petrushka or Maya responded to your comments yet?

  101. petrushka you state:

    “I thought I said some time ago that I have no idea how or when the protein domains arose.”

    Thus it appears that you concede that the paper you cited by Szostak is refuted? Do you mind thanking gpuccio for teaching you that you were in error?

    Since that is the lead off paper of the thread, and the main point to be established, I am quite happy with the development.

  102. veils, why the concern? You see the evidence one way, I see it another. I do not think your reading of the evidence to be coherent, yet you are adamant that it is. This is all fine and well, as that is how science progresses. Thus, instead of trying to convince you why I see the evidence differently than you do, I am willing to wait until more evidence is available. Why are you not willing to wait? Of what importance is it to you if I believe it impossible for computers to precisely mimic, with fidelity, what happens with a single protein? Why in the world are you so concerned with this fairly esoteric point?
    To me it seems very peculiar that you would completely ignore the fact that we cannot account for the origination of a single protein, which is the main topic of this thread by the way. Why are you not puzzled by this fact, instead of being completely sidetracked by man’s inability to mimic protein folding with fidelity? Myself, I want to know: if neo-Darwinism is supposed to be this great powerhouse of innovation and diversity, why in blue blazes should material processes be stymied by their failure to explain the origination of just one single protein? Since that is in fact the main topic of the thread, I would appreciate you addressing that point. In fact, since Petrushka conceded that he had no clue where proteins came from, I would appreciate it if you’d focus your energy on that.

  103. Petrushka:

    Evolution … doesn’t have goals or a direction.

    So random and so meaningless, it’s amazing it produces so much cohesion and purposefulness.

  104. Thus it appears that you concede that the paper you cited by Szostak is refuted?

    I didn’t see Szostak claiming to have a time machine. It’s true he’s looking for pathways to self-assembly, but even ten years later he hasn’t unraveled all the mysteries. I’m quite comfortable with the possibility that this won’t happen in my lifetime or his.

    The reason mainstream science makes progress is that the people involved make testable hypotheses and test them. It’s slow and painful. And much harder than plugging numbers into statistical formulas.

  105. So random and so meaningless, it’s amazing it produces so much cohesion and purposefulness.

    Yep, but what it doesn’t produce are specific molecular changes to order.

    And given large and sudden environmental changes, it doesn’t necessarily produce survival. A feature the Front Loader apparently forgot to install.

  106. Petrushka:

    I would like to remind you that, according to the paper that I cited in our past discussions, about 50% of protein domain superfamilies were probably present in LUCA (whatever it was: a single organism or a pool of organisms exchanging genes by HGT). You don’t want to address the origin of those domains, and therefore the problem of OOL, and for the moment I will humour you.

    But the remaining 50% of original protein domain superfamilies originated after LUCA, in the course of that “evolution of the species” which is so dear to Darwinists and to you.

    And those domains originated in prokaryotes and eukaryotes, and in metazoa too.

    It is true that the rate of appearance of new domains has significantly slowed down, practically stopping at the level of mammals, and especially humans. As I have already commented, that probably shows that the search for lower level information is practically complete.

    Obviously, that does not mean that new proteins have not arisen in the latest stages too, because proteins are a higher level aggregation of information than simple domains.

    Anyway, I would like to know if you renounce explaining the origin of protein domains not only at OOL, but also in the course of successive evolution, or if you believe that the Darwinian causal model is capable of explaining at least that part.

    For me, it is clear that if new domains have appeared throughout the whole history of life on our planet, very rapidly at first, then at a slowing rate, the simplest explanation (Occam, Occam!) is that the same causal model should be able to explain the whole proteome’s appearance, and frankly I can’t see why we should postulate two different mechanisms for the same process (one for OOL, and one for later evolution).

    Regarding the existence of completely new proteins in humans, the answer is obviously not easy. The following paper is interesting, and brings about again a theme which is very dear to me, the role of transposons:

    “Evolution of primate orphan proteins”

    Macarena Toll-Riera, Robert Castelo, Nicolás Bellora and M. Mar Albà

    Abstract

    “Genomes contain a large number of genes that do not have recognizable homologues in other species. These genes, found in only one or a few closely related species, are known as orphan genes. Their limited distribution implies that many of them are probably involved in lineage-specific adaptive processes. One important question that has remained elusive to date is how orphan genes originate. It has been proposed that they might have arisen by gene duplication followed by a period of very rapid sequence divergence, which would have erased any traces of similarity to other evolutionarily related genes. However, this explanation does not seem plausible for genes lacking homologues in very closely related species. In the present article, we review recent efforts to identify the mechanisms of formation of primate orphan genes. These studies reveal an unexpected important role of transposable elements in the formation of novel protein-coding genes in the genomes of primates.”

    And speaking about Behe’s limit and humans, what do you think of the 49 HARs, with 18 nucleotide substitutions only in HAR1? Would you like to explain that, just as a start?

  107. Myself, I want to know: if neo-Darwinism is supposed to be this great powerhouse of innovation and diversity, why in blue blazes should material processes be stymied by their failure to explain the origination of just one single protein?

    The history of the gaps argument is really amusing to the people whose work is filling the gaps. Whether it be missing links or simple, evolving chemical replicators.

    At some point, possibly in the next 20 years, we will see synthetic replicators produce functional proteins.

    By the way, how are you coming in finding some statement from Behe that the protein differences among mammals have no evolvable pathway?

    I understand that you are not obligated to research this for me, but I bet a shiny nickel that if you could find such a statement, you’d be happy to post it.

    Last time I looked, Behe had no argument against mammalian evolution.

    ID seems to have retreated to the pre-Cambrian, where the evidence has been erased over time.

  108. Petrushka:

    A feature the Front Loader apparently forgot to install.

    Well of course. The blind, ever so random, indeterminate “watchmaker” is more clever and purpose-driven than any intelligent being humans are capable of inventing.

  109. petrushka, when I read gpuccio, it seems that gpuccio is bringing up some very hard-hitting points that are totally demolishing your argument, and when I read your post, you do not even answer his points; instead you are merely flailing meaningless words about, trying to find something, anything, to connect to. If I were the referee in a boxing match I would give you a standing count so you could clear your head. So please focus on the task in front of you: how do you answer his point on orphan genes in humans?

    It is true that the rate of appearance of new domains has significantly slowed down, practically stopping at the level of mammals, and especially humans. As I have already commented, that probably shows that the search for lower level information is practically complete.

    And that would be completely consistent with the slower rate of reproduction in mammals.

    Since rapid divergence is a characteristic of a depleted ecosystem (following a mass extinction), one would not expect much invention in the present or near past. Mostly small variations on existing proteins.

    Which leads me to ask you the same question I asked BA77. Are you aware of anything in Behe’s Edge of Evolution? He seems to accept common descent in mammals, and specifically cites evidence from molecular evolution.

    So which is it? Is molecular evolution too fast, or is it too slow? I’m confused about the ID position.

    Suppose, for the sake of argument, I conceded that the origin of protein domains was something akin to a miracle: how does that affect mammalian evolution?

    Do the variations found among proteins in mammals violate the Edge, and if so, can you find this argument in Behe?

  111. And speaking about Behe’s limit and humans, what do you think of the 49 HARs, with 18 nucleotide substitutions only in HAR1? Would you like to explain that, just as a start?

    I’m not a molecular biologist.

    Does HAR1 code for a protein?

  112. how do you answer his point on orphan genes in humans?

    I would recommend giving up on an explanation and calling it a miracle.

    Actually, there are countless research papers on the origin of orphan genes. Here’s one that doesn’t require a subscription:

    http://www.biochemsoctrans.org.....370778.pdf

    I’d like to point out once more that gaps arguments tend to evaporate over time, with research.

    If ID were a football team, it would capitulate every time it got behind. I can’t help but wonder what kind of mindset would actively oppose research.

    All these supposed dreadful problems for evolution are being uncovered by mainstream researchers, and yet they welcome them and look for more.

  113. you know petrushka this quote of yours that you keep bringing up,,,:

    “I’d like to point out once more that gaps arguments tend to evaporate over time, with research.”

    ,,,,seems to be a pure fabrication on your part, from my point of view, because the more I learn of the evidence, the greater, and more impassable, the gaps (canyons?) have become for evolution:

    “Now, after over 120 years of the most extensive and painstaking geological exploration of every continent and ocean bottom, the picture is infinitely more vivid and complete than it was in 1859. Formations have been discovered containing hundreds of billions of fossils and our museums now are filled with over 100 million fossils of 250,000 different species. The availability of this profusion of hard scientific data should permit objective investigators to determine if Darwin was on the right track. What is the picture which the fossils have given us? … The gaps between major groups of organisms have been growing even wider and more undeniable. They can no longer be ignored or rationalized away with appeals to imperfection of the fossil record.” Luther D. Sunderland, Darwin’s Enigma 1988, Fossils and Other Problems, 4th edition, Master Books, p. 9

  114.

    @bornagain77 (#103)

    You wrote:

    veils, why the concern?

    Why the lack of concern? I welcome people calling me out on being ignorant since I do not want to reach conclusions based on things I do not understand. Apparently you don’t care one way or the other.

    You wrote:

    you see the evidence one way, I see it another.

    Not only are you not denying any of my points, you seem to be affirming them.

    If what you mean by the “other way” you “see” evidence is…

    - Making vague goals
    - Quoting from and linking to only the summaries of papers
    - Making comments that reveal you do not have the foundation necessary to interpret these incomplete papers
    - Making vague conclusions
    - Repeating the above
    - Retreating to claims of personal attacks when someone calls you out on it

    … then yes. It would seem we’re in complete agreement. You clearly do see the evidence differently. Which is precisely my point.

    I do not think your reading of the evidence to be coherent, yet you are adamant that it is. This is all fine and well as that is how science progresses.

    Then you should have no problem addressing my points. My guess is that you simply can’t, which you apparently think is just “fine.” Again, this is a surefire way for science *not* to progress.

    Thus, instead of trying to convince you why I see the evidence differently than you do, I am willing to wait until more evidence is available. Why are you not willing to wait?

    Born, again you’re the one who made the claim, not me. I’ve made numerous clarifications to this effect, which you continually ignore. I’m not arguing for the positive. I’m arguing that your ‘opinion’ does not appear to be supported by the papers you’ve quoted and linked to.

    However, if you’re basing this conclusion on some other “evidence”, then please present it.

    why in the world are you so concerned with this fairly esoteric point?
    To me it seems very peculiar that you would completely ignore the fact that we cannot account for the origination of a single protein, which is the main topic of this thread by the way.

    So why make the claim in the first place? Are you suggesting the inability to solve protein folding problems by any means has no bearing on “account[ing] for the origination of a single protein”?

    In fact, since petrushka conceded that he had no clue where proteins came from, I would appreciate it if you’d focus your energy on that.

    Are you conceding the claim that “[you] do not believe protein folding to be amendable [(whatever that means)] to quantum computing” is vague and not supported by the papers you quoted and linked to?

  115. Petrushka:

    among the confused mass of “arguments” you accumulated in your last posts, I will try to clarify some points which could be of interest for those who read (probably not for you).

    Chimps and humans are different. That’s a fact. For instance, the central nervous system of humans is much more complex, and, as all of us should know, it can do many important things that the chimps brain cannot do. Can you agree on that? Or is it a point you disagree upon?

    Let’s pretend you agree. Now, if the genomes of chimp and humans are really so similar, how can we explain the differences that really exist? We simply can’t.

    So, either the genomes are not enough to explain, or they are not really similar. I suppose both things are true.

    The slowing down of the appearance of new protein domains is not, as you say, “completely consistent with the slower rate of reproduction in mammals”. If it were only a question of reproduction rate, how would you explain that half of the existing domains were already present in LUCA? Was LUCA reproducing at an immensely higher rate than bacteria?

    There is a much simpler explanation. To build up the first living cells (LUCA, bacteria, archaea; maybe they are more or less the same thing) a huge quantity of new basic level information was necessary. Hundreds of different proteins were needed where there were none. So, the “process which can find functional proteins” (let’s call it this way, not to be partial to any model) had to find a lot of them in a relatively short time, otherwise life could never have begun on our planet.

    After that, new protein domains were found by the PWCFFP each time it was necessary: IOW, each time a new biochemical function was needed which could not be obtained by the existing protein domains, or by a higher level combination of them.

    The slowing down as evolution goes on has a very simple explanation: most functional protein domains (if not all) have already been found and exploited: the basic proteome is almost complete, and most basic biochemical functions can be accomplished with the existing superfamilies, expanding and specializing the existing folds.

    But then, how can we explain the huge variety and differences that we find among, for instance, metazoa?

    One answer could be that, even if only few new domains have emerged in the higher metazoa, proteins can still vary greatly in function, even while remaining within the existing domains.

    That’s perfectly true, obviously, but I don’t believe that’s the main answer.

    The main answer is that, while basic protein functionality has been mostly found and exploited in the first steps of evolution, higher levels of information and function have been accumulating, and have become increasingly complex.

    I mean, obviously, all the “procedures” which use proteins as their final effectors: IOW, the regulatory network.

    Now you may say: where is it? That’s a good question, because we don’t really know much about that. But it is somewhere. Maybe in epigenetic factors. Maybe in non coding DNA, in introns, in transposons, in pseudogenes, so disparaged by official Darwinism. Maybe somewhere else.

    But, while we don’t really know in detail where that regulatory information is, we certainly know what it does:

    a) It controls the individual transcriptomes of about 10^14 cells (in humans) according to specific plans, sorting out, from 20,000 protein genes and many more possible proteins, those which will be actively transcribed in each cell, their sequence and relative abundance of transcription and translation, their metabolism and catabolism, and so on. The transcriptome is highly specific for each cell type, and varies in the same cell type according to the functional moment, and to specific responses to the environment, to other cells, to general signals (hormones, cytokines), and so on.

    b) It determines and structures and controls the general body plan, and the specific morphology of systems, organs, tissues.

    c) It organizes and directs and controls the development and functioning of very complex interactive systems, aimed to modulate the adaptation to the external world, such as the immune system.

    d) And above all, and especially in humans, it determines, controls and regulates the specific morphology, architecture and functioning of the central nervous system, for a total of 10^11 cells and 10^14 ordered connections, allowing us to implement all the complex functions we know so well (no reason to list them all here), and especially, for the first time in the natural history of our planet, to express our conscious intelligent representations in the form of abundant, rich and creative CSI.

    That’s the scenario. Not being a complete fool, I don’t believe that all that regulatory information is concentrated in 49 HARs, however fascinating they can be, or in small variations of the existing proteome.

    So, just to be clear, the point is not that mammals (or other higher metazoa) are not really new or newly complex simply because most metazoa have a “similar” protein coding genome of about 10,000 – 20,000 genes. C. elegans, that little worm, one of the simplest metazoa, has 20,100 protein coding genes, more or less like us. Amazing, isn’t it?

    Metazoa are vastly different one from the other because they differ in regulatory information. That’s the real answer, the only possible answer.

    HAR1, for instance, is probably an RNA gene, and its function is probably regulatory. But, whatever it is, the point is that it has 18 substitutions vs the chimp gene, and that those substitutions are believed to be very important for us to be human. Maybe true, maybe not. But let’s assume it’s true.

    So I ask, do you think those 18 important mutations (plus all the others which are present in the other HARs) are perfectly explainable as the result of darwinian evolution in primates? That they don’t exceed Behe’s limit?

    I don’t.

  116. Petrushka:

    Does Behe accept common descent of mammals?

    I believe he does. And so do I.

    By the way, I have noticed that you quote in #112 the same paper I quoted in #107. That’s really deep attunement!

  117. veils please address the topic of the thread

  118.

    @bornagain (#90)

    You wrote:

    But you do seem to think yourself expert in this area, whereas I have my reservations as to your expertise,

    Born, take the following analogy.

    Imagine someone claimed “An internal combustion engine cannot get over 200mpg because the spark plug is connected to the crankshaft”

    Clearly, you don’t need to be an automotive engineer to realize this claim is unwarranted.

    First, not all internal combustion engines even use a spark plug; diesel engines are one example. Nor is a spark plug actually “connected” to the crankshaft when one is used. So, despite lacking the knowledge to design a car from the ground up, you can clearly dismiss this claim as being unsupported.

    Note that in no way have you made a positive claim that cars *can* get over 200mpg in the process. It’s unnecessary.

    I’ve essentially done the same thing to your claim that “..I do not see protein folding to be amendable to quantum computing…”

    However, at least the claim above states a specific goal, 200mpg, while you’ve used the term “amendable”, which is very vague.

  119. gpuccio, you are not going to believe the shoddy science this article on orphan genes reveals:

    Human Gene Count Tumbles Again – 2008
    Excerpt: Ironically, the way genes are recognized has triggered much of the confusion over the human gene count. Scientists on the hunt for typical genes — that is, the ones that encode proteins — have traditionally set their sights on so-called open reading frames, which are long stretches of 300 or more nucleotides, or “letters” of DNA, bookended by genetic start and stop signals. This method produced the most recent gene count of roughly 25,000, but the number came under scrutiny after the 2002 publication of the mouse genome revealed that many human genes lacked mouse counterparts and vice versa.

    Such a discrepancy seemed suspicious in part because evolution tends to preserve gene sequences — genes, by virtue of the proteins they encode, usually serve crucial biological roles. But like it or not, the 25,000 DNA sequences were already listed in the catalogs of human protein-coding genes, and skeptics had no systematic way to remove them. “At that point, no one had gone through the gene catalogs with a fine-toothed comb to find evidence that they weren’t valid,” said Michele Clamp, first author of the study and senior computational biologist at the Broad Institute.

    Far from blatant mistakes, non-gene sequences can masquerade as true genes if they are long enough and happen by chance to fall between start and stop signals. Despite having gene-like characteristics, these open reading frames may not encode proteins. Instead, they might have other functions or possibly none at all.

    To distinguish such misidentified genes from true ones, the research team, led by Clamp and Broad Institute director Eric Lander, developed a method that takes advantage of another hallmark of protein-coding genes: conservation by evolution. The researchers considered genes to be valid if and only if similar sequences could be found in other mammals – namely, mouse and dog. Applying this technique to nearly 22,000 genes in the Ensembl gene catalog, the analysis revealed 1,177 “orphan” DNA sequences. These orphans looked like proteins because of their open reading frames, but were not found in either the mouse or dog genomes.

    Although this was strong evidence that the sequences were not true protein-coding genes, it was not quite convincing enough to justify their removal from the human gene catalogs. Two other scenarios could, in fact, explain their absence from other mammalian genomes. For instance, the genes could be unique among primates, new inventions that appeared after the divergence of mouse and dog ancestors from primate ancestors. Alternatively, the genes could have been more ancient creations — present in a common mammalian ancestor — that were lost in mouse and dog lineages yet retained in humans.

    If either of these possibilities were true, then the orphan genes should appear in other primate genomes, in addition to our own. To explore this, the researchers compared the orphan sequences to the DNA of two primate cousins, chimpanzees and macaques. After careful genomic comparisons, the orphan genes were found to be true to their name — they were absent from both primate genomes. This evidence strengthened the case for stripping these orphans of the title, “gene.”

    After extending the analysis to two more gene catalogs and accounting for other misclassified genes, the team’s work invalidated a total of nearly 5,000 DNA sequences that had been incorrectly added to the lists of protein-coding genes, reducing the current estimate to roughly 20,500.
    http://www.sciencedaily.com/re.....161406.htm

    Can you believe that gpuccio? The evidence was literally molded directly by neo-Darwinism with no consideration given to ID at all,,, truly remarkable,, I wonder if Dr. Hunt saw that.

  120.

    @bornagain77 (#117)

    You wrote;

    veils please address the topic of the thread

    Again, are you suggesting the inability of solving protein folding problems by any means has no bearing on “account[ing] for the origination of a single protein?”

    Did you not write?:

    I like this tidbit you had veilsofmaya:

    (perhaps a strange reflection of the possibility that the machinery of DNA itself may actually function using quantum search algorithms [3]).

    It would not surprise me in the least if this were true since I hold the Designer invented/invents quantum mechanics as well.

  121. correction I meant, I wonder if Dr. Hunter has seen this article,,,

  122.

    @bornagain77 (#119)

    Born, what do you mean when you say darwinian theory cannot explain something?

    A. darwinism has failed as an explanation

    Or

    B. Darwinism should be excluded from being used as an explanation because it’s somehow biased against ID?

    I’m asking because the success indicated in the paper you quoted seems to suggest [B]

    Specifically, you quoted:

    If either of these possibilities were true, then the orphan genes should appear in other primate genomes, in addition to our own. To explore this, the researchers compared the orphan sequences to the DNA of two primate cousins, chimpanzees and macaques. After careful genomic comparisons, the orphan genes were found to be true to their name — they were absent from both primate genomes. This evidence strengthened the case for stripping these orphans of the title, “gene.”

    Then you appear to complain that Darwinism was successful but somehow excluded ID.

    The evidence was literally molded directly by neo-Darwinism with no consideration given to ID

  123. veils this is how I wrote it up for future reference:

    This following article shows that over 1,000 “orphan” genes, which are completely unique to humans and not found in any other species, and which very well may directly code for proteins, were stripped from the 20,500 gene count of humans simply because the evolutionary scientists could not find corresponding genes in primates. In other words, evolution of humans from primates was assumed to be true in the first place, and then the genetic evidence was directly molded to fit that unproven assumption. Thus evolution was proven to be true because evolution was first assumed to be true:

    Human Gene Count Tumbles Again – 2008
    Excerpt: Scientists on the hunt for typical genes — that is, the ones that encode proteins — have traditionally set their sights on so-called open reading frames, which are long stretches of 300 or more nucleotides, or “letters” of DNA, bookended by genetic start and stop signals. This method produced the most recent gene count of roughly 25,000, but the number came under scrutiny after the 2002 publication of the mouse genome revealed that many human genes lacked mouse counterparts and vice versa. Such a discrepancy seemed suspicious in part because evolution tends to preserve gene sequences — genes, by virtue of the proteins they encode, usually serve crucial biological roles.,,, To distinguish such misidentified genes from true ones, the research team, led by Clamp and Broad Institute director Eric Lander, developed a method that takes advantage of another hallmark of protein-coding genes: conservation by evolution. The researchers considered genes to be valid if and only if similar sequences could be found in other mammals – namely, mouse and dog. Applying this technique to nearly 22,000 genes in the Ensembl gene catalog, the analysis revealed 1,177 “orphan” DNA sequences. These orphans looked like proteins because of their open reading frames, but were not found in either the mouse or dog genomes. Although this was strong evidence that the sequences were not true protein-coding genes, it was not quite convincing enough to justify their removal from the human gene catalogs. Two other scenarios could, in fact, explain their absence from other mammalian genomes. For instance, the genes could be unique among primates, new inventions that appeared after the divergence of mouse and dog ancestors from primate ancestors. Alternatively, the genes could have been more ancient creations — present in a common mammalian ancestor — that were lost in mouse and dog lineages yet retained in humans. 
If either of these possibilities were true, then the orphan genes should appear in other primate genomes, in addition to our own. To explore this, the researchers compared the orphan sequences to the DNA of two primate cousins, chimpanzees and macaques. After careful genomic comparisons, the orphan genes were found to be true to their name — they were absent from both primate genomes. This evidence strengthened the case for stripping these orphans of the title, “gene.”
    http://www.sciencedaily.com/re.....161406.htm

    The sheer, and blatant, shoddiness of the science of the preceding study should give everyone who reads it severe pause whenever, in the future, someone tells them that genetic studies have proven evolution to be true.

  124. So I ask, do you think those 18 important mutations (plus all the others which are present in the other HARs) are perfectly explainable as the result of Darwinian evolution in primates? That they don’t exceed Behe’s limit?

    A very long post before admitting that the 18 mutations took place in non-coding DNA. In general, which kind of DNA shows the most mutations — protein coding or non-coding?

    I was asking for Behe’s opinion. I haven’t read every word he’s written, but I think he makes a distinction between mutations that are potentially lethal, and mutations that are expressed as viable variations in phenotype.

    I would certainly agree that trying to repeat a specific sequence of 18 mutations would be futile, but no mainstream biologist would claim that you can predict specific sequences of mutations.

    Mutations do not occur because they’re needed or because they’re leading to some cool new structure.

  125. BA:

    Thank you for the very interesting paper. You are really a mighty resource!

    Your point is very good here, and I believe that even veilsofmaya should acknowledge it. Let’s try to recapitulate it:

    a) Darwinists insist that human genome is strictly similar to chimp’s. Even too similar to be really believed, but well, that seems to be a fact.

    b) But you point to a paper revealing that some 1,000 to 5,000 open reading frames were classified as “false genes” because they are not found in other mammals, and especially in other primates

    c) So I think that you, and I, and all sensible persons, have the right to ask: but how are darwinists demonstrating that the human genome is similar to the chimp’s? Just by excluding that the genes which are not in the chimp’s genome are true genes because they are not in the chimp’s genome? That’s circular reasoning, if ever there was one.

    So, veilsofmaya, here it’s not a question of ID or not ID. It’s simply a question of logic and common sense. If 1000 to 5000 ORFs in the human genome seem to be new genes, why classify them as false genes only because they are new? That means that the party line is to exclude that there are new genes in the human genome, at all costs.

    Now, I have no idea if those 1000 or 5000 ORFs are truly new protein coding genes, unique to humans, or not. But I am sure that it is important to understand that, and that such a problem deserves a better methodological approach than the one outlined in that paper.

  126. Petrushka (#124):

    A very long post before admitting that the 18 mutations took place in non-coding DNA. In general, which kind of DNA shows the most mutations — protein coding or non-coding?

    You really seem to miss the point. Those 18 mutations (and all the similar ones in other HARs) are currently believed to be the best candidates to explain, at least in part, why we are different from chimps. And that, not by me, or by ID. But by current darwinist science.

    So, what’s the point in stressing that they are in non coding DNA, and not in protein coding genes? HAR1 is believed to be an RNA gene, coding for a stable RNA structure whose function is not known in detail, but which is supposed to be very important in neural development.

    The point you seem to miss is: HARs are sequences highly conserved in other mammals, except in humans, where they show a great number of mutations. If they are highly conserved in mammals, then they are supposed to be highly functional. And if they change so much in humans, it is believed that this is a sign that they are extremely functional in the modified human version, so much so as to contribute to make humans humans.

    You may say that all this is theory, not yet supported by enough facts. I may agree, but please remember that it’s not my theory: it’s the currently accepted darwinian theory about HARs.

  127.

    @bornagain77 (#123)

    You wrote:

    Thus evolution was proven to be true because evolution was first assumed to be true:

    […]

    The sheer, and blatant, shoddiness of the science of the preceding study should give everyone who reads it severe pause whenever, in the future, someone tells them that genetic studies have proven evolution to be true.

    First, the problem is defined…

    Far from blatant mistakes, non-gene sequences can masquerade as true genes if they are long enough and happen by chance to fall between start and stop signals. Despite having gene-like characteristics, these open reading frames may not encode proteins. Instead, they might have other functions or possibly none at all.

    Please note that not being a “gene” does *not* imply universal-non function.

    Second, you mean the “shoddiness” found here?

    Such a discrepancy seemed suspicious in part because evolution tends to preserve gene sequences – genes, by virtue of the proteins they encode, usually serve crucial biological roles. But like it or not, the 25,000 DNA sequences were already listed in the catalogs of human protein-coding genes, and skeptics had no systematic way to remove them.

    and here…

    Although [the systematic method of conservation by evolution] was strong evidence that the sequences were not true protein-coding genes, it was not quite convincing enough to justify their removal from the human gene catalogs. Two other scenarios could, in fact, explain their absence from other mammalian genomes. For instance, the genes could be unique among primates, new inventions that appeared after the divergence of mouse and dog ancestors from primate ancestors. Alternatively, the genes could have been more ancient creations – present in a common mammalian ancestor – that were lost in mouse and dog lineages yet retained in humans.

    So how did they proceed?

    After careful genomic comparisons, the orphan genes were found to be true to their name — they were absent from both primate genomes. This evidence strengthened the case for stripping these orphans of the title, “gene.”

    Sounds like a far cry from blatant, shoddy science. Instead, it was an attempt to clarify what is or is not a gene.

    Furthermore, it’s not clear how reducing the estimated number of genes in the human genome to roughly 20,500 is proof that evolution is true.

    This simply does not follow. Again, not being classified as a gene does not necessitate universal non-function. You seem to be depending on a common misrepresentation, which has been clarified time and time again.

  128.

    Whoops.. Should have written

    Although [conservation by evolution] was strong evidence that the sequences were not true protein-coding genes,

    As a systematic means of applying conservation by evolution had not yet been defined or implemented. This is why it was “not quite convincing enough to justify their removal from the human gene catalogs”

  129. veils, you absolutely have to be kidding me; they removed the genes precisely because they were not found in the primate genomes. Veils,,, in order to correctly determine whether they are protein coding or not, they should have seen if they in fact coded for proteins, or perhaps even employed a methodology of codon equiprobability like the one described in gpuccio’s reference:

    One way to assess whether a gene is protein-coding or not is by measuring the coding potential of its putative coding sequence. Codon usage scores are calculated as the log likelihood ratio of codon frequencies under two models: a coding model and a codon equiprobability model [19]. In the coding model, the likelihood of each codon is estimated using the relative frequency of that codon in a large dataset of bona fide protein-coding genes. Under the codon equiprobability model, the likelihood of each codon is considered to be 1/64. To be able to compare sequences of different length, the mean codon usage score across all codons in the sequence is computed. Positive values indicate that the sequence has a high probability of being protein coding, whereas negative values indicate that it does not.

    We applied this calculation to the dataset of 270 primate orphan genes and compared it with several coding and non-coding sequence datasets. The average coding potential of orphan genes was 0.063, similar to the value observed for short human genes encoding experimentally validated proteins (0.081), but clearly distinct from the values observed in a collection of non-coding RNA genes (−0.082) or of non-coding frames extracted from the primate orphan genes (−0.064) [12]. These results led us to conclude that the primate orphan genes we had identified mainly comprised protein-coding genes.
    http://www.biochemsoctrans.org.....370778.pdf
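    For readers who want to see what that codon usage score actually computes, here is a minimal Python sketch of the idea. This is my own toy illustration, not code from the quoted paper, and the CODING_FREQS table below is invented purely for the example; a real analysis would estimate frequencies from a large set of bona fide coding genes.

```python
import math

# Toy codon frequency table (invented values for illustration only).
# A real coding model would be estimated from thousands of known genes.
CODING_FREQS = {
    "ATG": 0.022, "GAA": 0.029, "CTG": 0.040, "AAA": 0.024,
}
EQUIPROBABLE = 1.0 / 64  # every codon equally likely under the null model

def codon_usage_score(seq: str) -> float:
    """Mean log likelihood ratio (coding model vs. equiprobability model).

    Positive values suggest the sequence is protein coding, negative
    values suggest it is not, matching the description in the excerpt.
    """
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    scores = []
    for codon in codons:
        # Codons missing from the toy table fall back to 1/64, so they
        # contribute a log-ratio of exactly zero.
        f_coding = CODING_FREQS.get(codon, EQUIPROBABLE)
        scores.append(math.log(f_coding / EQUIPROBABLE))
    return sum(scores) / len(scores)

# A sequence built from codons common in the toy coding table scores positive.
print(codon_usage_score("ATGGAACTGAAA"))
```

    The mean over codons (rather than the sum) is what lets sequences of different lengths be compared, as the excerpt notes.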

  130. veilsofmaya:

    excuse me, the point is that the article says:

    For instance, the genes could be unique among primates, new inventions that appeared after the divergence of mouse and dog ancestors from primate ancestors.

    Our point is simply this: why did they not consider the possibility that “the genes could be unique to humans, new inventions that appeared after the divergence of primate ancestors from human ancestors”? IOW, they assume that genes can be new inventions in primates, but not strictly in humans. Why? What kind of gratuitous assumption is this?

    After all, humans are different from other primates…

    So, that’s not good scientific method, for me.

  131.

    @gpuccio (#125)

    You wrote:

    If 1000 to 5000 ORFs in the human genome seem to be new genes, why classify them as false genes only because they are new?

    gpuccio,

    The article does not suggest that any gene cannot be new merely because it’s not present in other primates. This is not what the article is saying. The subject of the article was whether these ORFs should originally have been classified as “new” just because they “are long enough and happen by chance to fall between start and stop signals.”

    We’ve always had doubts, but lacked a systematic way to analyze them until now. As such, we begrudgingly left them in the library. That is, we only left them in because we had no way to determine otherwise.

    Furthermore, we know of at least 3 new genes not present in other primates, and possibly more.

    Finally, not being a gene does not imply universal non-function, that no function will be found, or that evolution is true or false.

  132.

    To clarify, based on the article alone, you’d have to assume testing an ORF for functionality hadn’t already been done, or wasn’t considered a systematic way to know whether a gene was new. The former is not implied in the paper, and the latter seems very unlikely.

    To quote the article…

    “But like it or not, the 25,000 DNA sequences were already listed in the catalogs of human protein-coding genes, and skeptics had no systematic way to remove them.”

    Since I haven’t read all of the comments or linked papers, can you point to something that suggests these ORFs had not been tested in the past, or that testing for function would not be considered a systematic way of determining whether a gene was new?

    Again, the latter seems highly unlikely, especially since we have identified new human genes that do have function.

    Perhaps you mean, they are not in other primates and do not have a known function? Again, this assumes that merely knowing an ORF has function was the only other systematic way to know whether a gene was a gene (not just that it was a new gene), and that it had not already been applied in the past.

    Neither of which are evident in the article.

  133. gpuccio, even this following method would have been much more effective, and thus much less biased, at recognizing protein coding regions within human orphans, instead of just eliminating the suspected unique orphan protein coding sequences simply because they had no matches in primates or dogs:

    Prediction of protein coding regions by the 3-base periodicity analysis of a DNA sequence:
    Abstract: With the exponential growth of genomic sequences, there is an increasing demand to accurately identify protein coding regions (exons) from genomic sequences. Despite much progress being made in the identification of protein coding regions by computational methods during the last two decades, the performances and efficiencies of the prediction methods still need to be improved. In addition, it is indispensable to develop different prediction methods since combining different methods may greatly improve the prediction accuracy. A new method to predict protein coding regions is developed in this paper based on the fact that most exon sequences have a 3-base periodicity, while intron sequences do not have this unique feature. The method computes the 3-base periodicity and the background noise of the stepwise DNA segments of the target DNA sequences using nucleotide distributions in the three codon positions of the DNA sequences. Exon and intron sequences can be identified from trends of the ratio of the 3-base periodicity to the background noise in the DNA sequences. Case studies on genes from different organisms show that this method is an effective approach for exon prediction. ,,, © 2007 Elsevier Ltd. All rights reserved.
    http://www.nslij-genetics.org/gene/yin07.pdf
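    For those curious what “3-base periodicity” means concretely, the core signal can be roughly sketched in a few lines of Python. This is my own simplification, not the paper’s exact algorithm (which also models background noise over stepwise segments): it just measures the spectral power of the four base-indicator sequences at frequency 1/3, which tends to be strong in exons and weak in introns.

```python
import cmath

def three_base_periodicity(seq: str) -> float:
    """Power at frequency 1/3, summed over the four base-indicator sequences.

    For each base, build the 0/1 indicator sequence (1 where that base
    occurs), take its discrete Fourier component at frequency 1/3, and
    accumulate the squared magnitude, normalized by sequence length.
    """
    total = 0.0
    for base in "ACGT":
        component = sum(cmath.exp(-2j * cmath.pi * n / 3)
                        for n, b in enumerate(seq) if b == base)
        total += abs(component) ** 2
    return total / len(seq)

# A perfectly period-3 toy sequence gives a strong signal;
# a homogeneous run of a single base (full periods) gives essentially none.
print(three_base_periodicity("ATG" * 30))
print(three_base_periodicity("A" * 90))
```

    The paper’s method would then track the ratio of this periodicity to a background-noise estimate along the sequence to call exon versus intron regions.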

    gpuccio it is literally shocking, and unbelievable, that they would practice such a biased method of science as to not use cross checks as I have outlined (and surely other cross checks I am not aware of). It is apparent that the thought did not even begin to cross their minds to say that “Hey, Humans just might be different, and unique, from the animals after all”. ,,,, It is simply unbelievable.

    EMF – Unbelievable – music video
    http://www.youtube.com/watch?v=waacof2saZw

  134. If they are highly conserved in mammals, then they are supposed to be highly functional. And if they change so much in humans, it is believed that this is a sign that they are extremely functional in the modified human version, so much so as to contribute to make humans humans.

    Very interesting. I suppose it’s possible that all the laws of physics are set up to make this specific chain of mutations possible.

    Or it’s possible that only a few of the mutations are critical and the rest just noise.

    Or perhaps something else is at work.

    If it’s evidence against evolution, why do you suppose there’s so much work being done by mainstream biologists? You’d think they would be hiding it.

  135. gpuccio,,, I looked around for evidence of protein transcription of orphans and found this at an evolutionary website,,,,

    “artifactual genes (their name/slur for orphan genes) are supported by EST/cDNA data suggesting that they are transcribed.”

    ,,,, In spite of this transcription evidence for orphans, the evolutionist running the site sided with the biased methodology used to eliminate the orphans, though he clearly understood and cited the same methodology I have mentioned above.

  136. gpuccio, here is an eye opener for you this morning:

    A survey of orphan enzyme activities
    Abstract
    Background

    Using computational database searches, we have demonstrated previously that no gene sequences could be found for at least 36% of enzyme activities that have been assigned an Enzyme Commission number. Here we present a follow-up literature-based survey involving a statistically significant sample of such “orphan” activities. The survey was intended to determine whether sequences for these enzyme activities are truly unknown, or whether these sequences are absent from the public sequence databases but can be found in the literature.
    Results

    We demonstrate that for ~80% of sampled orphans, the absence of sequence data is bona fide. Our analyses further substantiate the notion that many of these enzyme activities play biologically important roles.
    Conclusion

    This survey points toward significant scientific cost of having such a large fraction of characterized enzyme activities disconnected from sequence data. It also suggests that a larger effort, beginning with a comprehensive survey of all putative orphan activities, would resolve nearly 300 artifactual orphans and reconnect a wealth of enzyme research with modern genomics. For these reasons, we propose that a systematic effort to identify the cognate genes of orphan enzymes be undertaken.
    http://www.biomedcentral.com/1471-2105/8/244

  137. #81,84 gpuccio

    “As BA has already pointed out in his post #40: “Having a 1 in 10^12 protein sequence “stick to” the universal energy molecule of ATP is not surprising, in fact I am surprised more sequences do not “stick to” the universal ATP.” ”

    The ATP is not just sticking in some random fashion to the selected protein at all. If you look at figure 4 of the Szostak paper you see that even the change of a single atom (e.g. a hydrogen to a chlorine) leads to a notable loss of binding affinity. Similar observations are made for other groups of the molecule. Most importantly, all other naturally occurring nucleotides do not bind. Thus ATP is bound with both high affinity and selectivity by the selected protein, as is typical for many naturally occurring proteins. It's not just 'sticking' to it.

    “No, except for the fact that it has some 3D structure (probably very gross), and that it binds ATP. That’s not exactly a “function”, at best a biochemical property. (More on that in next post).”

    It has a very well defined structure, as can be seen from its x-ray structure (nature structural biology, 2004, page 382-383), and it unfolds at a temperature of 56°C, which is a lot higher than many naturally occurring proteins (j. mol. biol. 2007, vol. 371, page 501-513, table 1).
    Furthermore, just 'sticking to something' (but of course in a specific way, as also seen in this ATP-binding protein) is the only, but important, function of many proteins in your body – thus this is a real function.

    “IOW, take, protein DX, pack it with its strong ligand, ATP, and some ATP becomes ADP, probably as a consequence of the bent conformation of the binding.”

    How do you think many naturally occurring enzymes work? Take the classic example of lysozyme: it breaks down sugar molecules by forcing them to bind in a bent conformation, making them susceptible to hydrolysis at the bending point.

    ” … at best it explores the presence of random weak binding to ATP in a random library … ”

    Do you think all binding events have to be strong to be biologically relevant? Many functional enzymes bind their substrates with a much lower affinity than the selected ATP-binding proteins. In other important biological systems binding events are also often weak.

    “c) The uncomfortable truth: The demonstration of in vivo negative function”

    They overexpressed the protein from a plasmid. Such overexpression leads to sick cells for many other proteins, sometimes even when you overexpress an E. coli protein in E. coli. The same would probably have happened had they overexpressed any naturally occurring ATP-binding protein in E. coli (the control experiment that is sadly missing from their paper – sloppy science?). And the protein carries out exactly the function it was selected for in cells – binding ATP and thereby, not unexpectedly, draining the ATP pool of the cells. Btw, this shows that it folds correctly even inside a cell.

  138. rna, and exactly how was it demonstrated that the "random" 1 in 10^12 amino acid sequence worked out exactly the proper sequence so as to enable a highly advanced cruise control equation to be implemented throughout its structure?

    On top of the fact that Szostak's work failed to demonstrate any novel biologically relevant proteins that enhanced the functionality of the cell, proteins have now been shown to have a "Cruise Control" mechanism, which works to "self-correct" the integrity of the protein structure against any random mutations imposed on them.

    Proteins with cruise control provide new perspective:
    “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.”
    http://www.princeton.edu/main/...../60/95O56/

    Cruise Control?,, The equations of calculus involved in achieving even a simple process control loop, such as a dynamic cruise control loop, are very complex. In fact it seems readily apparent to me that highly advanced algorithmic information must reside in each individual amino acid used in a protein in order to achieve such control. This fact gives us clear evidence that far more functional information resides in proteins than meets the eye. For a sample of the equations that must be dealt with, to "engineer" even a simple process control loop like cruise control, for a single protein, please see the following site:

    PID controller
    A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal.
    http://en.wikipedia.org/wiki/PID_controller
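    For readers unfamiliar with the control-loop math the Wikipedia entry describes, a minimal discrete PID loop can be sketched as follows (purely illustrative; the gains, setpoint, and toy process model are arbitrary choices, not anything derived from protein behavior):

    ```python
    # Minimal discrete PID controller sketch (illustrative only; gains,
    # setpoint, and the crude first-order process are invented values).
    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = None

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            # proportional + integral + derivative correction
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Drive a toy process (e.g. a "speed") toward the setpoint of 60.
    pid = PID(kp=1.2, ki=0.5, kd=0.05, setpoint=60.0)
    speed = 0.0
    for _ in range(200):
        correction = pid.update(speed, dt=0.1)
        speed += 0.1 * correction   # crude process response to the correction
    print(round(speed, 1))
    ```

    The three terms correct for the present error (proportional), the accumulated error (integral), and the error's trend (derivative); tuning the three gains is where the engineering difficulty the comment alludes to actually lives.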

    It is in realizing the staggering level of engineering that must be dealt with to achieve "cruise control" for each individual protein that it becomes apparent even Axe's 1 in 10^77 estimate for finding specific functional proteins within sequence space may be far too generous. The individual amino acids themselves are clearly embedded with a highly advanced mathematical language in their structures, which adds an additional severe constraint, on top of the 1 in 10^77 constraint, on finding exactly which precise sequences of amino acids in sequence space will perform a specific function.

    rna, you have been shown this before. That you would so willingly overlook this profound mystery of how this feedback equation is embedded throughout the protein structure, preventing variation of shape, all the while ignoring the insurmountable difficulty that the rarity of 1 in 10^12 presents to the RV + NS scenario in the first place, is a testament to your unwarranted bias in this matter. As I asked you before, and you never answered: "Don't you get paid money for believing purely material processes can generate functional information?"

  139. #138 bornagain

    Are you just trying to divert attention from the points I raised in my last post? I just think that some of gpuccio's statements about Szostak's work are not in agreement with basic biochemical facts, and I would like to understand if I got him correctly.
    What any 'cruise control' has to do with that is beyond me. What kind of 'cruise' is Szostak's protein undertaking that needs to be controlled? Or, to cite your last paragraph, how can a 'feedback equation' be 'embedded throughout the protein structure'?
    But maybe I'll just wait for gpuccio to come back with anything specific to what I said.

  140. rna, I just reread gpuccio's posts on Szostak's work,,,,

    http://www.uncommondescent.com.....ent-358394

    gpuccio's comments are careful and measured, and he completely and thoroughly dismantles the work. Then I reread your post at 137, and you did not even address the devastating points gpuccio had brought up, such as the fact that they used many rounds of "intelligent selection". And please tell me exactly how intelligent selection disproves intelligent design, rna? In my opinion you made many spurious "excuses" that totally ignored the "elephant in the living room". What is incredible is that the 1 in 10^12 number is not even functional by your own admission, and that "minimal functionality" was not even achieved until after rounds 18-19 of intelligent selection, as pointed out here by gpuccio,,,,

    http://www.uncommondescent.com.....ent-358401

    ,,,,thus should not, in all fairness, the original 1 in 10^12 number be multiplied by at least 18 or 19, to reflect the number of times the sequence was “adjusted” from the original 1 in 10^12 protein, to get to 1 in 10^216 and 1 in 10^228 to reflect the true rareness of the sequence?

  141. rna, I am not diverting attention at all, as this thread deals directly with the rarity of functional proteins, i.e. I want you to explain to me exactly how mindless processes embedded highly advanced mathematical language, which must reside piecemeal within each amino acid, in exactly the correct sequence so as to achieve cruise control for each functional protein.

  142. # 140 bornagain

    “thus should not, in all fairness, the original 1 in 10^12 number be multiplied by at least 18 or 19, to reflect the number of times the sequence was “adjusted” from the original 1 in 10^12 protein, to get to 1 in 10^216 and 1 in 10^228 to reflect the true rareness of the sequence”

    10^12 multiplied by 19 just yields 1.9 x 10^13 … so it's not fair at all to adjust the number to 10^216

  143. 10^12 multiplied by 19 just yields 1.9 x 10^13 … so it's not fair at all to adjust the number to 10^216

    Forget it. He’s rolling.

  144. no rna, the exponent itself would have to be added against itself, for each "adjustment" of the sequence, to be fair, since in fact you are starting with a protein which was itself residing within the original 1 in 10^12 rarity. i.e. Each time an "intelligent" selection is made (18 or 19 steps), you are in fact saying that, out of the total of the 1 in 10^12 original proteins you started out with, only a fraction of the unique 1 in 10^12 intelligently selected protein itself, which weakly bound to ATP in the first place, will be functional when considered against the whole of the original 1 in 10^12 proteins. You cannot "nonchalantly" erase the starting population from consideration when trying to determine rarity for specific functionality.
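    The disagreement in the last few comments is really about which arithmetic applies. A neutral sketch of the two computations (it deliberately takes no side on which reading matches the experimental protocol):

    ```python
    from fractions import Fraction

    p = Fraction(1, 10**12)   # claimed rarity of a hit in the starting library
    rounds = 19

    # rna's reading: 19 libraries of the same size simply enlarge the sample,
    # so the number of sequences examined scales linearly.
    sequences_examined = rounds * 10**12       # 1.9 x 10^13

    # bornagain77's reading: 19 independent 1-in-10^12 events must all occur,
    # so the probabilities multiply and the exponents add.
    p_all_rounds = p ** rounds                 # 1 in 10^228

    print(sequences_examined)                  # 19000000000000
    print(p_all_rounds == Fraction(1, 10**228))  # True
    ```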

  145. rna, to put it another way, using your reasoning, I should have only had to test 19 different original populations, of weakly binding non-functional proteins, to find the strong binding functionality that was present at the end of step 19. Which is clearly not true.

  146. rna:

    Maybe you didn’t read my posts carefully enough.

    Figure 4 of the paper, and also the structure analysis you cite, do not refer to the original sequences present in the random library, but to protein 18 – 19, which is the final product of a process of engineered and directed evolution. Indeed, when you choose a starting sequence and artificially mutate it, selecting for a pre-defined property, you are implementing directed evolution, and your results no longer have any relationship with the natural occurrence of something in a purely random library.

    I think that should be clear enough in my post. I paste here, for your convenience:

    “- What was present in the initial library?
    The answer is simple: possibly 4 molecules which loosely “stuck” to ATP, due to some favourable limited biochemical part of their sequence. Of those 4, only one is the ancestor of the final 18-19 ATP binding protein.

    - How is the final protein obtained?
    Through traditional protein engineering: rounds of artificial mutation and specific selection, using as seeds the 4 families derived from the 4 original sequences which loosely “stuck” to ATP.

    - What is the result?
    Protein 18-19, a protein with some folding and a strong binding to ATP. Those (both the folding and the strong binding) are the result of the engineering process which “found” the important AA positions which contribute to folding and ATP binding, and which were not present in the original random sequences”

    So, your first points are simply wrong. All your observations, like those from the original paper and those of many other darwinists, refer to the product of directed evolution, and not to the original sequences present in the library (the original B family). The differences between protein 18 – 19 and the original sequences in the B family are well documented in the paper, and are the result of the mutation/selection process. The final result has nothing to do with pure random occurrence.

    Now, let’s go to your remarks about structure and function in the final result (protein 18 – 19), always bearing in mind that anyway that protein was not present in the original library.

    You say:

    It has a very well defined structure, as can be seen from its x-ray structure (nature structural biology, 2004, page 382-383), and it unfolds at a temperature of 56°C, which is a lot higher than many naturally occurring proteins (j. mol. biol. 2007, vol. 371, page 501-513, table 1).

    If that is the case, how do you explain this statement from the paper I referenced in my second post?

    “Unlike many naturally occurring proteins, protein 18-19 requires high concentrations of free ligand in order to remain stably folded and soluble. In an attempt to overcome this limitation and to evolve a non-biological protein toward a folded state that more closely resembles the ligand-independent folded state of many natural proteins, we designed an in vitro selection experiment using mRNA display to isolate variants of protein 18-19 that remained bound to an ATP agarose affinity resin in the presence of increasing concentrations of chemical denaturant.”

    That’s why I defined “gross” the structure of protein 18 – 19, and I suppose that the structure of protein DX could be considered more refined. You may not agree on the words, but the concept is very clear. And protein DX needed further rounds of directed evolution (artificial mutation plus even finer intelligent selection) to be obtained.

    And finally, the function. My point is very simple: simple binding to ATP, even strongly, is not IMO a function which can be used in a biological context, if not to "subtract" ATP (certainly not to "regulate" it). It is certainly not a function which would be selectable in any known living context. It is no more a function than the capacity of EDTA to subtract calcium from blood in a test tube.

    The ability to hydrolyze ATP is something more, obviously, and it is more similar to a function. But my point here was only that such a function is not really present in protein DX in more than an extremely primitive form, and only in extreme conditions, and therefore it does not in any way resemble a true biological selectable function. That’s why the authors of the paper which “discovers” the function comment, in the final discussion:

    "This achievement suggests that relatively undemanding catalytic reactions may not have been that much harder for primordial proteins to attain than ligand binding."

    Emphasis mine.

    With all that, I repeat again that, even if you want to consider protein 18-19 or protein DX as "functional proteins" (your choice), that has nothing to say about the original sequences which were present in the original random library. It is completely wrong to state, as many have done, that protein 18-19 or protein DX are the product of a purely random procedure. That's simply not true.

    And about computations, just bear in mind that, once you implement mutation and artificial selection, you cannot really know how much you are optimizing the search by your search algorithm (see the examples of the various weasels and similar). So, the only mathematical results with which we are left is the one I clearly stated in my post:

    “- What was present in the initial library?

    The answer is simple: possibly 4 molecules which loosely “stuck” to ATP, due to some favourable limited biochemical part of their sequence. Of those 4, only one is the ancestor of the final 18-19 ATP binding protein.”

  147. Your calculation assumes that the 18 mutations must happen in sequence, whereas in a population, every individual is a possible source of mutation. All 18 could happen in one generation.

    You cannot simply multiply the exponent by the number of mutations.

  148. rna, to put it another way, using your reasoning, I should have only had to test 19 different original populations, of weakly binding non-functional proteins, to find the strong binding functionality that was present at the end of step 19. Which is clearly not true.

    You have a citation for your claim that all mutations must be present to confer any advantage?

  149. BA:

    You are obviously right in principle about how intelligent selection shifts the probability calculations by huge orders of magnitude, although I would say that it is impossible to say how much in this specific case, because it depends on many variables which we cannot know in detail.

    We must remember what they did:

    “Mutagenic PCR amplification.

    Mutagenic PCR (11, 12) was used in rounds 10–12, with an average mutagenic rate of 3.7% at the amino-acid level as determined by DNA sequencing. Serial transfer mutagenic PCR was carried out to an average mutagenic extent of 3.7% at the amino-acid level, with aliquots being combined to give a broad range of mutagenic extents.”

    And they did that for three rounds, and each time the results were selected according to ATP binding. That is a very effective algorithm to create and select a simple "function" like ATP binding, especially starting from a seed molecule which has already been selected for a primordial binding to ATP. And the mutation and selection process is obviously responsible also for the acquisition of some folding (be it gross or not).

    But we already know that protein engineering can do that. The subtle but serious misinterpretation here is to “shift” the properties of what has been obtained by intelligent design to what was randomly present in a random library, and to use that false reasoning to give figures of probability which are completely wrong.
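    gpuccio's point, that mutation plus selection is a powerful search algorithm and therefore tells us little about what a purely random library contains, can be illustrated with a weasel-style toy (everything here, from the target string to the population size, is an invented assumption; it models the logic of directed evolution, not the actual Szostak experiment):

    ```python
    import random

    random.seed(1)

    # Toy directed-evolution illustration. Alphabet, target, mutation rate,
    # and population size are all invented for illustration only.
    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino-acid letters
    TARGET = "MKVLITGASGF"              # arbitrary 11-residue "function"

    def mutate(seq, rate=0.037):        # ~3.7% per-residue mutation rate
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in seq)

    def score(seq):                     # selection criterion: matches to target
        return sum(a == b for a, b in zip(seq, TARGET))

    # Start from a random "library" of 200 sequences.
    pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
           for _ in range(200)]
    examined = len(pop)
    rounds = 0
    while max(map(score, pop)) < len(TARGET) and rounds < 10_000:
        best = max(pop, key=score)      # "intelligent" selection step
        pop = [mutate(best) for _ in range(200)]
        examined += len(pop)
        rounds += 1

    # rounds taken, sequences actually examined, and the full space size (20^11)
    print(rounds, examined, 20 ** len(TARGET))
    ```

    The loop reaches the target after examining only a minuscule fraction of the 20^11 sequence space, precisely because each round feeds back the best candidate; a blind draw from the space has no such help.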

  150. petrushka, I have absolutely no reason to believe that a non-functional (useless) protein will be selected for in the first place, as gpuccio has so clearly pointed out:

    Figure 4 of the paper, and also the structure analysis you cite, do not refer to the original sequences present in the random library, but to protein 18 – 19, which is the final product of a process of engineered and directed evolution. Indeed, when you choose a starting sequence and artificially mutate it, selecting for a pre-defined property, you are implementing directed evolution, and your results no longer have any relationship with the natural occurrence of something in a purely random library.

  151. veilsofmaya (#132):

    First of all, here is the link to the original paper, so that you can read it for yourself:

    “Distinguishing protein-coding and noncoding genes in the human genome”

    Michele Clamp*†, Ben Fry*, Mike Kamal*, Xiaohui Xie*, James Cuff*, Michael F. Lin‡, Manolis Kellis*‡,
    Kerstin Lindblad-Toh*, and Eric S. Lander

    PNAS  December 4, 2007  vol. 104  no. 49 

    http://www.pnas.org/content/104/49/19428.full.pdf

    If you read it carefully, you will see that the main argument, if not the only one, to cancel more than 1000 genes from the list of human protein coding genes is the absence of homologues in mammals and primates.

    Indeed, the parameters they use, the RFC score and CSF score, are strictly dependent on the presence of homologues.
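    That dependence can be made concrete with a deliberately crude stand-in (the function below is hypothetical and is in no way the paper's actual RFC/CSF computation; it only illustrates the structural point that a conservation-based score cannot, by construction, validate a sequence that has no homologues):

    ```python
    # Hypothetical stand-in for a conservation-based validity score -- NOT the
    # RFC/CSF metrics of Clamp et al., just an illustration of their shared
    # dependence on homologues being available for comparison.
    def conservation_score(orf_codons, homologue_codons_list):
        """Fraction of codon positions conserved across available homologues."""
        if not homologue_codons_list:    # orphan: nothing to compare against
            return 0.0                   # fails the test regardless of function
        conserved = 0
        for i, codon in enumerate(orf_codons):
            if all(h[i] == codon for h in homologue_codons_list):
                conserved += 1
        return conserved / len(orf_codons)

    gene = ["ATG", "GCT", "AAA"]
    print(conservation_score(gene, [["ATG", "GCT", "AAA"]]))  # 1.0: conserved
    print(conservation_score(gene, []))                       # 0.0: orphan
    ```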

    The darwinist a priori assumptions in their reasoning are especially obvious here:

    “If the orphans represent valid human protein-coding genes, we would have to conclude that the vast majority of the orphans were born after the divergence from chimpanzee. Such a model would require a prodigious rate of gene birth in mammalian lineages and a ferocious rate of gene death erasing the huge number of genes born before the divergence from chimpanzee. We reject such a model as wholly implausible. We thus conclude that the vast majority of orphans are simply randomly occurring ORFs that do not represent protein-coding genes.”

    Emphasis mine. It is obvious that the darwinian model of the origin of new genes is given as incontrovertible fact. There’s absolutely no game.

    To be honest, there is in the paper some later attempt to corroborate the conclusion independently, here:

    "Experimental Evidence of Encoded Proteins. As an independent check on our conclusion, we reviewed the scientific literature for published articles mentioning the orphans to determine whether there was experimental evidence for encoded proteins. Whereas the vast majority of the well studied genes have been directly shown to encode a protein, we found articles reporting experimental evidence of an encoded protein in vivo for only 12 of 1,177 orphans, and some of these reports are equivocal (SI Table 2). The experimental evidence is thus consistent with our conclusion that the vast majority of nonconserved ORFs are not protein-coding. In the handful of cases where experimental evidence exists or is found in the future, the genes can be restored to the catalog on a case-by-case basis."

    Well, that sounds a little bit more reasonable, although scarcely conclusive. Do you really find it strange that "the vast majority of the well studied genes have been directly shown to encode a protein" (emphasis mine), while that could well not be true of genes which have been just discovered (and immediately discredited)?

    But again, all the strength of the argumentation rests on the fact itself that these genes are “new”. QED.

  152. gpuccio I appreciate your work on clearly articulating what is going on “behind the curtain” in order to arrive at a number that was so far out of line with Axe’s and Sauer’s work.

  153. gpuccio, I saw that 12 in 1177 number also last night, and thought the same thing as you did as to the weakness of the assumption, and thus I dug around a little deeper this morning and found this.

    A survey of orphan enzyme activities
    Abstract

    Background
    Using computational database searches, we have demonstrated previously that no gene sequences could be found for at least 36% of enzyme activities that have been assigned an Enzyme Commission number. Here we present a follow-up literature-based survey involving a statistically significant sample of such “orphan” activities. The survey was intended to determine whether sequences for these enzyme activities are truly unknown, or whether these sequences are absent from the public sequence databases but can be found in the literature.

    Results
    We demonstrate that for ~80% of sampled orphans, the absence of sequence data is bona fide. Our analyses further substantiate the notion that many of these enzyme activities play biologically important roles.

    Conclusion
    This survey points toward significant scientific cost of having such a large fraction of characterized enzyme activities disconnected from sequence data. It also suggests that a larger effort, beginning with a comprehensive survey of all putative orphan activities, would resolve nearly 300 artifactual orphans and reconnect a wealth of enzyme research with modern genomics. For these reasons, we propose that a systematic effort to identify the cognate genes of orphan enzymes be undertaken.
    http://www.biomedcentral.com/1471-2105/8/244

    And though this study is not solely a study of human orphans, it surely gives ample reason to believe that they were much, much too hasty in removing the orphans from the gene count.

  154.

    @bornagain77 (#135)

    You wrote:

    …it is literally shocking, and unbelievable, that they would practice such a biased method of science as to not use cross checks as I have outlined (and surely other cross checks I am not aware of).

    Born, what's shocking and unbelievable is that it seems you've done the very same thing yet again, after having had it pointed out to you time and time again.

    If you had actually read the entirety of the original paper behind the article, you’d notice the following…

    Once a putative protein-coding gene has been entered into the human gene catalogs, there has been no principled way to remove it. Experimental evidence is of no utility in this regard. Although one can demonstrate the validity of protein-coding gene by direct mass-spectrometric evidence of the encoded protein, one cannot prove the invalidity of a putative protein-coding gene by failing to detect the putative protein (which might be expressed at low abundance or in different tissues or at different developmental stages).

    However, there is currently no scientific justification for excluding ORFs simply because they fail to show evolutionary conservation; the alternative hypothesis is that these ORFs are valid human genes that reflect gene innovation in the primate lineage or gene loss in other lineages. As a result, the human gene catalog has remained in considerable doubt. The resulting uncertainty hampers biomedical projects, such as systematic sequencing of all human genes to discover those involved in disease.

    Here, we provide strong evidence to show that the vast majority of the nonconserved ORFs are spurious. The analysis begins with a thorough reevaluation of a current gene catalog to identify conserved protein-coding genes and eliminate many putative genes resulting from clear artifacts. We then study the remaining set of nonconserved ORFs. By studying their properties in primates, we show that the vast majority are neither (i) ancestral genes lost in mouse and dog nor (ii) novel genes that arose after divergence from mouse or dog.

    Experimental Evidence of Encoded Proteins. As an independent check on our conclusion, we reviewed the scientific literature for published articles mentioning the orphans to determine whether there was experimental evidence for encoded proteins. Whereas the vast majority of the well studied genes have been directly shown to encode a protein, we found articles reporting experimental evidence of an encoded protein in vivo for only 12 of 1,177 orphans, and some of these reports are equivocal (SI Table 2). The experimental evidence is thus consistent with our conclusion that the vast majority of nonconserved ORFs are not protein-coding. In the handful of cases where experimental evidence exists or is found in the future, the genes can be restored to the catalog on a case-by-case basis.

  155. veilsofmaya (#154):

    I think I had in some way anticipated those points in my post #151.

    While I appreciate your contribution to understanding what the article is really saying, I believe that our main point about a serious cognitive bias remains valid, as I have tried to show. I will be happy to receive any further feedback from you on that.

  156. and veils, did you happen to notice the article I referenced in 153?

    A survey of orphan enzyme activities
    Abstract

    Background
    Using computational database searches, we have demonstrated previously that no gene sequences could be found for at least 36% of enzyme activities that have been assigned an Enzyme Commission number. Here we present a follow-up literature-based survey involving a statistically significant sample of such “orphan” activities. The survey was intended to determine whether sequences for these enzyme activities are truly unknown, or whether these sequences are absent from the public sequence databases but can be found in the literature.

    Results
    We demonstrate that for ~80% of sampled orphans, the absence of sequence data is bona fide. Our analyses further substantiate the notion that many of these enzyme activities play biologically important roles.

    Conclusion
    This survey points toward significant scientific cost of having such a large fraction of characterized enzyme activities disconnected from sequence data. It also suggests that a larger effort, beginning with a comprehensive survey of all putative orphan activities, would resolve nearly 300 artifactual orphans and reconnect a wealth of enzyme research with modern genomics. For these reasons, we propose that a systematic effort to identify the cognate genes of orphan enzymes be undertaken.
    http://www.biomedcentral.com/1471-2105/8/244

    And though this study is not solely a study of human orphans, it surely gives ample reason to believe that they were much, much too hasty in removing the orphans from the gene count, and in fact it gives fairly clear evidence that Darwinists are impeding science with their methodology of excluding sequences from the gene database, sequences that could go a long way toward identifying the large percentage of orphan enzyme activities!!!

  157. BA:

    Interesting paper. You are really untiring!

    So, we have both orphan genes and orphan enzymatic activities. I hope some of those can be matched…

  158.

    @ gpuccio (#151)

    The darwinist a priori assumptions in their reasoning are especially obvious here:

    Gpuccio,

    Again, the question being asked by the paper was: were these genes incorrectly added in the first place? And if so, what is one possible way to help determine false positives now and in the future?

    The experimental evidence is thus consistent with our conclusion that the vast majority of nonconserved ORFs are not protein-coding. In the handful of cases where experimental evidence exists or is found in the future, the genes can be restored to the catalog on a case-by-case basis.

    Note that the section you quoted from was : Orphans Do Not Represent Protein-Coding Genes.

    The misunderstanding is similar to the fact that quantum computing is not known to solve all NP-complete problems in polynomial time, yet can still offer speedups on specific problems.

    If all you knew about a gene was that it was an orphan (and it were long enough and happen by chance to fall between start and stop signals) would we have been justified in adding it to the library? If all orphans were coding, then all 1,177 would have had to have changed from its ancestor, which they reject as implausible.

    Now, you might suggest this is biased, but remember the question is whether ORFs should be added merely because they are orphans, which is itself a designation based on whether they exist in species with a common ancestor.

    We can rephrase this as, "Should an ORF be considered a gene merely because it's not present in a species which shares a common ancestor?" The results of the paper strongly suggest the answer is no.

  159.

    in (#158) I should have written:

    If all you knew about an ORF was that it was an orphan (and it were long enough and happen by chance to fall between start and stop signals) would we have been justified in adding it to the library?

  160.

    @bornagain77 (#156)

    Born,

    Please point out exactly where universal non-function was claimed.

    For example, this was addressed in the article…

    Despite having gene-like characteristics, these open reading frames may not encode proteins. Instead, they might have other functions or possibly none at all.

    Which I noted here

    Again, not being classified as a gene does not necessitate universal non-function.

    Just because an ORF or gene is classified as an orphan does not mean it is universally non-functional. As with "junk DNA", this misrepresentation has been addressed on multiple occasions, yet it continues to be brought up time and time again.

  161. veils, the whole point, as is amply illustrated in the paper I referenced in 157, is that they were not in the least justified in removing orphans from the Gene database based primarily, and overwhelmingly, on unwarranted neo-Darwinian assumptions. They should in fact have conducted as thorough a search of orphan enzyme activity in humans as possible in order to fully validate the orphans’ removal from the Gene database, as the article I referenced clearly stated:

    Results
    We demonstrate that for ~80% of sampled orphans, the absence of sequence data is bona fide. Our analyses further substantiate the notion that many of these enzyme activities play biologically important roles.

    Conclusion
    This survey points toward significant scientific cost of having such a large fraction of characterized enzyme activities disconnected from sequence data. It also suggests that a larger effort, beginning with a comprehensive survey of all putative orphan activities, would resolve nearly 300 artifactual orphans and reconnect a wealth of enzyme research with modern genomics. For these reasons, we propose that a systematic effort to identify the cognate genes of orphan enzymes be undertaken.

  162. veils, if you agree with their extremely biased methodology, a methodology which in fact gave only a passing nod toward a thorough cross-check, you are in fact condoning forcing the evidence to fit a preconceived solution. No wonder Darwinists have been able to get away with such deception for so long; they literally make up the rules of science as they go along! i.e. Why is Darwinism true? Because the evidence says so. Why does the evidence say so? Well, because Darwinism is true of course! As gpuccio clearly said earlier, “it would be hard to find more circular reasoning”.

  163. veilsofmaya:

    “If all you knew about a ORF was that it was an orphan (and it were long enough and happen by chance to fall between start and stop signals) would we have been justified in adding it to library?”

    Yes, adding all ORFs of a certain length to a library is standard procedure. Obviously, anybody can offer further analysis and propose revisions. But the main standard in bioinformatics for hypothesizing that a gene is a protein-coding gene is that it is an open reading frame, whether it has homologues or not. That’s why orphan genes are called orphan genes: they are possible protein-coding genes (ORFs) without homologues (orphans).

    Now, it is not new or surprising that darwinists hate orphan genes. They fit very badly into their causal model, and it’s no accident that Larry Moran has been particularly critical of the concept itself. One of the main purposes of darwinists is to demonstrate that orphan genes do not exist, or that if they exist they are very, very few…

    Couple that with a firm refusal to discuss OOL (a la Petrushka), and you have partially solved that bad problem of having to explain how genes arose.

    Human orphans are even worse than generic orphans. You have read the point in the paper: simply stated, if we find that humans have hundreds of new protein coding genes, which are not shared even by primates, how do we explain that?

    After all, humans are recent, they reproduce slowly, and they are not that many. Some reshuffling and a bundle of mutations in HARs can always be tolerated, provided that we can affirm that the genomes of humans and chimps are 99% identical or something similar. But 1000 new genes?

    And so, the answer is simple:

    “Such a model would require a prodigious rate of gene birth in mammalian lineages and a ferocious rate of gene death erasing the huge number of genes born before the divergence from chimpanzee. We reject such a model as wholly implausible.”

    But such a model is not implausible at all. It is only implausible for darwinists. But if you reflect on the obvious fact that humans and chimps are very different, that humans are practically unique in their capacity for abstract intelligent thought, that they have changed the world they live in in many respects, that they have built varied civilizations and explored reality scientifically and in other ways, then a few hundred new genes in their basic-level proteome information could in some way seem justified…

    And if it is true, like ID believes, that new genes do not come out of nothing, nor do they come out of slow RV + NS, then you can see that the model our friends darwinists reject as “wholly implausible” is not implausible at all.

    That’s what is called a cognitive bias.

  164.

    bornagain77

    they should have in fact conducted as thorough a search of orphan enzyme activity in humans as possible in order to fully validate the orphans removal from the Gene database

    This is in fact quite interesting. Perhaps a happy medium in the meantime would be to classify these “orphans” as “currently unknown” and leave them be for now? After all, nobody is talking about deleting the data itself, just reorganizing it, really.

    In that light, do you still object so fiercely?

  165. gpuccio, though slightly off topic (but not by much), this article, which came up on Crevo, may interest you:

    Do New Fossils Soften the Cambrian Explosion?
    Excerpt: Second, these fossils are of dubious interpretation. They may be nothing more than fairy-ring colonies growing outward like bacteria in a Petri dish. Perhaps the matlike remains were flexible enough to fold on the inside in some cases. There is no indication of a coelum or tissue differentiation. They do not appear transitional to Ediacaran fossils, let alone to Cambrian animals.
    http://www.creationsafaris.com.....#20100705a

  166. BA:

    Yes, I had read of those fossils when Petrushka pointed to them.

    I must say that I am intrigued, but cautious. I am intrigued because I do think that the vast spread of time between OOL and the Ediacaran–Cambrian explosions is really a bit of a mystery, and I would appreciate any new information on what could have happened in those 3 billion years or more. Cautious, because obviously those fossils are too little, and there are many possible interpretations of them.

    Even the appearance of eukaryotes is in itself something of a mystery. I have recently found a paper which states that the original ancestor was more of a eukaryote than a prokaryote.

    So, both OOL and what came after still hold great challenges to our knowledge. It’s no exaggeration to say that the only thing I am really sure of is that biological information is designed :)

    Anyway, I don’t think that discovering new and strange ancient forms of life, like the fossils in question, will change in any measure the powerful meaning of the two metazoan explosions. The Ediacaran and Cambrian events are so strange and amazing in themselves that I doubt any new finding will be able to lessen their cognitive impact. And let’s remember that those explosions are not about the appearance of eukaryotic or multicellular life, but rather about the sudden appearance of multiple, complex, macroscopic body plans, in two successive waves, probably unrelated to each other. That’s something, isn’t it?

  167. # 146 gpuccio

    let’s start with the simple things:

    ” ..That’s why I defined “gross” the structure of protein 18 – 19, and I suppose that the structure of protein DX could be considered more refined. … ”

    The structures of 18-19 and DX are virtually identical, as you can see from the J. Molecular Biology paper I quoted above, as might be expected from their ~80% sequence conservation. Since the proteins from the original B family have similar levels of sequence similarity, they most likely also adopt a very similar structure. There is ample precedent for that in hundreds of other protein families.

    Many naturally occurring proteins need the addition of ligands, other additives or binding partners to be stably folded and to remain functional. (A whole class of functional proteins is designated as ‘intrinsically unfolded’.) If you want to work with such proteins in the lab, you normally look for a family member with more suitable properties, often from a different organism, with some mutations in the sequence.

    ” … It is no more function than the capacity of EDTA to subtract calcium from blood in a test tube…”

    So myoglobin is in your opinion not a functional protein? Go on try living without it. Same goes for ferritin or calbindin or …

    “It is completely wrong to state, as many have done, that protein 18 -19 or protein DX are the product of a purely random procedure.”

    Of course the procedure was not random – it was an experiment. What was random was the starting sequence library. The first question of the experiment was how many proteins of a given function (defined here by the experimenter as the simple function of ATP-binding ability) one can find in a number of random (10^12) sequences. The answer is at least 4 dominating families with this capability after eight rounds of selection. The selection procedure employed in the first eight rounds is just a procedure to find those sequences; it does not modify what is in the original library. One could instead synthesize 10^12 random DNA sequences, translate them into proteins one by one and characterize each individual protein for ATP-binding ability. This is not possible in a normal lab, so you have to use a selection + amplification procedure as your magnifying glass to find the functional sequences. No ‘engineering’ so far. The really amazing number is that 0.1% of all sequences bound to ATP after the first round, i.e. 10^9 of the 10^12 sequences. Thus, function in the form of ATP-binding is very abundant in random sequences.
    This is contrary to the claims, very abundant on this blog, that function is very, very hard to find starting from random sequences. It is even more astonishing when one takes into account that the 10^12 sequences synthesized cover only a tiny and also randomly selected fraction of the overall search space (20^80).

    The additional rounds use ‘random mutations’ to improve the originally selected sequences. To characterize this as ‘protein engineering’ is a bit misleading in my opinion, and it can and does only improve a function that was already present after eight rounds of selection. The only thing that is really designed in this experiment is the ‘fitness landscape’ for the selection – a one-dimensional one with ATP-binding capability as the only parameter. But what choice does the experimenter have if he is looking for ATP-binding molecules?
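    The arithmetic in the comment above can be sketched in a few lines, assuming the figures quoted there (10^12 random 80-mers, 0.1% retained on the ATP column after round one, a 20^80 sequence space); this is only a back-of-the-envelope illustration of those quoted numbers, not a model of the experiment itself.

    ```python
    # Back-of-the-envelope arithmetic for the numbers quoted above.
    # Assumed figures (from the comment, not computed here): a library of
    # 10^12 random 80-residue sequences, 0.1% ATP-binding retention in round 1.
    from math import log10

    library_size = 10**12       # random sequences synthesized
    protein_length = 80         # residues per sequence
    retention_round1 = 0.001    # fraction retained on the ATP column (0.1%)

    # Sequences surviving the first selection round
    survivors = library_size * retention_round1          # 10^9

    # Size of the full sequence space for 80-mers: 20^80, expressed as 10^x
    log_space = protein_length * log10(20)               # ~104, i.e. 20^80 ≈ 10^104

    # Fraction of that space the library actually samples, as 10^x
    log_fraction = log10(library_size) - log_space       # ~ -92

    print(f"survivors after round 1: {survivors:.0e}")
    print(f"sequence space: ~10^{log_space:.0f}")
    print(f"fraction sampled: ~10^{log_fraction:.0f}")
    ```

    The point the numbers make either way: even with 10^9 binders recovered, the library touches only about one part in 10^92 of the full 80-mer space, which is the tension both sides of the thread are arguing over.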

  168. # 144,145 bornagain

    … the original 1 in 10^12 number be MULTIPLIED by at least 18 or 19 …

    your choice of words

  169. sorry for the wrong choice of words, rna, but do you agree the exponent must be added to correctly keep in line with what is going on, at least to a certain extent? If not, please go through only 19 original libraries to prove me wrong. I can easily see the correct approach for determining true probability very quickly falling in line with Sauer’s 1 in 10^64 number and with Axe’s 1 in 10^77 number. As well, rna, it seems to me that it is only for sheer want of any evidence whatsoever to make their case with, even remotely, that evolutionists are so willing to claim this “hoodwinked” 1 in 10^12 result represents anything close to true functionality.

  170. gpuccio, I actually have a little evidence of the timespan between the OOL and the Cambrian explosion that gives some strong clues as to the “designed terra-forming” that was going on during that time. Maybe as soon as we get off this topic I will be able to show you some of it.

  171. rna:

    let’s try to come to a reciprocal understanding:

    a) The structures of 18-19 and DX may be virtually identical, but their folding and functional properties are not, at least according to what the creators of DX state. Anyway, that’s not really important, because both are the product of directed engineering.

    b) Both 18 – 19 and DX differ from the original B family by 16 AAs (20%). That’s a lot, especially if you consider that those AAs are exactly the mutations which were actively selected to confer both folding and function.

    c) I have seen no evidence in the papers that the original B family sequences had the same structure, or any significant function. As far as I know, that’s only your conjecture. The 80% sequence similarity is easily explained by the history of those proteins (after all, they were evolved using the B family as a seed), but there is no evidence that in itself it confers structure or function. The only information we have about function in the original B family is that they stuck to ATP enough to be separated from the other sequences. If you have any other information about structure and function of the original sequences, I would be happy to know it.

    d) I am well aware that the selection procedure used in the first rounds was not protein engineering. If you read my posts, you will see that I have never said anything different. It’s the three rounds of mutational PCR followed by selection which are protein engineering. And it’s those 3 rounds which found the necessary mutations to confer folding and “function” (in the sense of a strong binding to ATP).

    e) I think that something more can be said about the function of myoglobin than about protein 18 – 19. The following is from wikipedia:

    “The binding affinities for oxygen between myoglobin and hemoglobin are important factors for their function. Both myoglobin and hemoglobin binds oxygen well when the concentration of oxygen is really high (Eg. in Lung), however, hemoglobin is more likely to release oxygen in areas of low concentration (Eg. in tissues). Since hemoglobin binds oxygen less tightly than myoglobin in muscle tissues, it can effectively transport oxygen throughout the body and deliver it to the cells. Myoblobin, on the other, would not be as efficient in transferring oxygen. It does not show the cooperative binding of oxygen because it would take up oxygen and only release in extreme conditions. Myoglobin has a strong affinity for oxygen that allows it to store oxygen in muscle effectively. This is important when the body is starve for oxygen, such as during anaerobic excercise. During that time, carbon dioxide level in blood streams is extremely high and lactice acid concentration build up in muscles. Both of these factors cause myoblobin (and hemoglobins) to release oxygen, for protecting the body tissues from getting damaged under harsh conditions. If the concentration of myoglobin is high within the muscle cells, the organism is able to hold the breath for a much longer period of time.
    Myoglobin, an iron-containing protein in muscle, receives oxygen from the red blood cells and transports it to the mitochondria of muscle cells, where the oxygen is used in cellular respiration to produce energy. Each myoglobin molecule has one heme prosthetic group located in the hydrophobic cleft in the protein. The function of myoglobin is notable from Millikan’s review (1) in which he put together an accomplished study to establish that myoglobin is formed adaptively in tissues in response to oxygen needs and that myoglobin contributes to the oxygen supply of these tissues. Oxymyoglobin regulates both oxygen supply and utilization by acting as a scavenger of the bioactive molecule nitric oxide. Nitric oxide is generated continuously in the myocyte. Oxymyoglobin reacts with NO to form harmless nitrates, with concomitant formation of ferric myoglobin, which is recycled through the action of the intracellular enzyme metmyoglobin reductase. Flogel (2) conducted a study that showed how the interaction of NO and oxymyoglobin controls cardiac oxygen utilization.”

    f) The problem with defining function is that, if you are interested in the occurrence of function in random sequences as evidence for the darwinian model, then you should stick to selectable functions which can confer a reproductive advantage on a living being. Do you really believe that protein DX has such a property? Or, even better, one of the original B family sequences? And do you really believe that its incorporation in bacteria was harmful only because it was overexpressed? What scenario would you suggest where a “normally expressed” protein DX would confer a reproductive advantage? Or, even better, one of the original B family sequences?

    g) You ask:

    “But what choice does the experimenter have if he is looking for atp-binding molecules.”

    The answer is simple: the experimenter’s intention was not to look for atp-binding molecules. The experimenter’s declared purpose was to look for naturally occurring functional sequences in a random library.

    To do that, all they had to do was select and expand ATP-binding sequences (as they did in the first rounds) and then study those sequences and prove that they were functional. So, why have they gone on modifying those sequences by designed evolution, if not in order to build some apparent function which obviously was not there in the beginning, so that they could state that they had found “function” in a random library?

    That procedure is completely unnecessary and biased. Its purpose was not to prove what had to be proved, but to give the false impression of having proved it.

    Again, this is not science, but ideologically driven research.

  172.

    @gpuccio (#163)

    You wrote:

    Yes, adding all ORFs of a certain length to a library is standard procedure. Obviously, anybody can give further analysis and propose reviews. But the main standard to hypothesize that a gene is a protein coding gene in bioinformatics is that it is an open reading frame, whether it has homologues or not. That’s why orphan genes are called orphan genes. Because they are possible protein coding genes (ORFs) without homologues (orphan).

    Gpuccio,

    First, it seems you’ve assumed the “standard procedure” currently in place is “non-biased”, has a good track record or has not been made obsolete by recent discoveries about identifying ORFs. On what basis have you made this assumption?

    Second, it seems your argument depends on the end result of actually removing these orphans from the library based on this qualification alone. However…

    - The removal has not yet occurred. This was a test developed to evaluate the prediction of whether orphan genes were likely to be coding.

    - Merely having the status of “orphan” is not recommended as the only criterion for removing an ORF from the library. (It is not that all orphans should be removed regardless of any research, such as the discovery of encoded proteins.)

    - Future ORFs would not be barred if other research suggested they were protein-coding despite being an orphan.

    - Removal is not a claim of universal non-function. Nor does it demand that future search for function will not be performed.

    Now, it is not new or surprising that darwinists hate orphan genes. They fit very badly in their causal model, and it’s not a case that Larry Moran has been particularly critic of the concept itself. One of the main purposes of darwinists is to demonstrate that orphan genes do not exists, or that if they exist they are very, very few…

    I fail to see how this “test” reveals “hate” for orphan genes.

    I’ll pose the same question to you as I have to Bornagain77: Are you suggesting that Darwinism fails at explaining phenomena or that it cannot be used to explain phenomena because it’s somehow biased against or “hates” ID?

    The paper made it quite clear that there is currently no scientific justification for excluding ORFs merely because they fail to show evolutionary conservation. This is because an ORF can be labeled many different ways, including as an orphan, protein-coding, etc. That any particular gene is an orphan does not mean it cannot be protein-coding or is universally non-functional, etc.

    The question being asked is, in the absence of some other systematic method, if the only thing you know about a group of ORFs is that they are all orphans with respect to their closest relatives, are the vast majority likely to be non-coding? A specific test was hypothesized, rigorously applied and evaluated. The result? They indeed found that, based on pre-existing research, only 12 out of 1,177 had reports of protein encoding.

    So, it appears the real complaint here isn’t that the prediction was actually applied, was wrong, or that it would be applied over and above research suggesting any specific orphan actually did code proteins, but that it was based on common descent.

    To reiterate, this is similar to Bornagain77’s claim regarding NP-complete problems. That quantum computing may be unable to solve all NP-complete problems in polynomial time does not mean a specific quantum algorithm could not be used to solve a specific NP-complete problem by exploiting its problem space.

    Suggesting that all orphans are unlikely to be protein coding is not the same as banning any specific orphan from the library even if it were found to encode proteins. Nor would it prevent any specific orphan from being found to have some other function.

  173. veils, to reiterate 161 and 162, since it seems you completely missed them:

    veils, the whole point, as is amply illustrated in the paper I referenced in 157, is that they were not in the least justified in removing orphans from the Gene database based primarily, and overwhelmingly, on unwarranted neo-Darwinian assumptions. They should in fact have conducted as thorough a search of orphan enzyme activity in humans as possible in order to fully validate the orphans’ removal from the Gene database, as the article I referenced clearly stated:

    Results
    We demonstrate that for ~80% of sampled orphans, the absence of sequence data is bona fide. Our analyses further substantiate the notion that many of these enzyme activities play biologically important roles.

    Conclusion
    This survey points toward significant scientific cost of having such a large fraction of characterized enzyme activities disconnected from sequence data. It also suggests that a larger effort, beginning with a comprehensive survey of all putative orphan activities, would resolve nearly 300 artifactual orphans and reconnect a wealth of enzyme research with modern genomics. For these reasons, we propose that a systematic effort to identify the cognate genes of orphan enzymes be undertaken.

    veils, if you agree with their extremely biased methodology, a methodology which in fact gave only a passing nod toward a thorough cross-check, you are in fact condoning forcing the evidence to fit a preconceived solution. No wonder Darwinists have been able to get away with such deception for so long; they literally make up the rules of science as they go along! i.e. Why is Darwinism true? Because the evidence says so. Why does the evidence say so? Well, because Darwinism is true of course! As gpuccio clearly said earlier, “it would be hard to find more circular reasoning”.

  174. As well, to reiterate their primary justification for removing the orphans, as stated in their own words as quoted by gpuccio:

    “And so, the answer is simple:

    “Such a model would require a prodigious rate of gene birth in mammalian lineages and a ferocious rate of gene death erasing the huge number of genes born before the divergence from chimpanzee. We reject such a model as wholly implausible.”

    But such a model is not implausible at all. It is only implausible for darwinists. But if you reflect on the obvious fact that humans and chimps are very different, that humans are practically unique in their capacity for abstract intelligent thought, that they have changed the world they live in in many respects, that they have built varied civilizations and explored reality scientifically and in other ways, then a few hundred new genes in their basic-level proteome information could in some way seem justified…”

  175. gpuccio, you may find this paper interesting for a rough figure of how many orphan enzymes remain unsequenced and therefore unmatched to genetic sequences:

    Orphan enzymes could be an unexplored reservoir of new drug targets
    Excerpt: Despite the immense progress of genomics, and the current availability of several hundreds of thousands of amino acid sequences, >39% of well-defined enzyme activities (as represented by enzyme commission, EC, numbers) are not associated with any sequence. There is an urgent need to explore the 1525 orphan enzymes (enzymes having EC numbers without an associated sequence) to bridge the wide gap that separates knowledge of biochemical function and sequence information. Strikingly, orphan enzymes can even be found among enzymatic activities successfully used as drug targets. Here, knowledge of sequence would help to develop molecular-targeted therapies, suppressing many drug-related side-effects.
    http://www.sciencedirect.com/s.....505925db1a

  176. veilsofmaya (#171):

    Your attempt at defending what cannot be defended has become so generic, non-technical and convoluted that frankly I will not go on discussing it. As I believe that you are really convinced of what you say, I respect your opinions. Just a couple of quick comments on your final sum-up:

    The question being asked is, in the absence of some other systematic method, if the only thing you know about a group of ORFs is that they are all orphans with respect to their closest relatives, are the vast majority likely to be non-coding? A specific test was hypothesized, rigorously applied and evaluated. The result? They indeed found that, based on pre-existing research, only 12 out of 1,177 had reports of protein encoding.

    So, it appears the real complaint here isn’t that the prediction was actually applied, was wrong, or that it would be applied over and above research suggesting any specific orphan actually did code proteins, but that it was based on common descent.

    The only test was whether new genes were new, which is a tautology. I can’t see how that can be seriously “hypothesized, rigorously applied and tested”. The observation about reported proteins was in no way the object of the paper, but only an indirect confirmation which, in itself, was not convincing, as I have already argued. Otherwise, the paper would have been something like: “Let’s see how many human orphan genes, which are obviously new, have some independent demonstration of a corresponding protein in the scientific literature”.

    And my complaint is not that their model and final conclusions are based on common descent. As you should know, I have nothing against common descent.

    My complaint is that their model and final conclusions are based on the blind assumption that new genes must be causally explained by darwinian theory, and therefore need times and modalities compatible with RV and NS to emerge (or at least, compatible with darwinists’ biased evaluation of those times and modalities: as you know, for us IDists no empirical time is compatible with that explanatory model :) ).

    My complaint is that they categorize as implausible what they are observing (1000 new ORFs, potential protein coding genes, in humans) only because they can’t explain how they could have arisen in such a short time by RV and NS.

    I will then answer your explicit question:

    I’ll pose the same question to you as I have to Bornagain77: Are you suggesting that Darwinism fails at explaining phenomena or that it cannot be used to explain phenomena because it’s somehow biased against or “hates” ID?

    It’s very clear for me. I have always “suggested” (indeed, stated and tried to demonstrate in detail) that “Darwinism fails at explaining phenomena”, and that is obviously because it is a theory both internally inconsistent and unsupported by facts. That statement refers only to darwinism as a causal model, not to common descent, just to be clear.

    Its inconsistency rests mainly in the fact that it uses a causal model based on RV + NS, and that the random part (RV), if quantitatively analyzed (which is absolutely necessary in an explanatory model), does not have the probabilistic power to explain what it should explain. That, in itself, makes the explanatory model logically inconsistent.

    It is also unsupported by facts because no empirical evidence exists that such a mechanism can really cause macroevolution.

    I hope my position is clear.

    The problem of the bias, or hate, against ID is a logical consequence of that. If you are part of a majority holding practically all power in the scientific community, and that in the name of a wrong theory, it is very easy to predict your feelings against those in a minority group who are seriously trying to point out the falsity of your theory, and even to give an alternative explanation which is incompatible with your personal beliefs and general views of reality.

    That’s elementary psychology. Calling it “bias” is perhaps a little too refined…

  177. gpuccio, I realized something about the “kicking the orphans out in the street” paper: not only did the authors completely fail to cite the literature that argues strongly for a rigorous effort to match orphan protein sequences to orphan genetic sequences, they also failed to apply their criteria consistently. i.e. They should have gone through the “accepted” gene-coding sequences to eliminate all the ones that have failed to be matched directly to proteins so far (perhaps several thousand). But more importantly, they also failed to mention, as I pointed out earlier in 129 and 133, that there actually are unbiased methods for determining the likelihood of whether a gene is protein-encoding or not that are completely “theory neutral”,,,

    As I referenced in 129,,,

    codon equiprobability of determining protein encoding
    http://www.uncommondescent.com.....ent-358458

    and as I referenced in 133,,,,

    Prediction of protein coding regions by the 3-base periodicity analysis of a DNA sequence:
    http://www.uncommondescent.com.....ent-358463

    Thus the authors truly are without excuse, since they allowed their preconceived ideas to directly dictate what the evidence must say instead of using the unbiased scientific methods for determining the likelihood of protein-encoding capability. Methods that were readily, and easily, available for them to work with; indeed it seems the thought of being unbiased in their study did not even cross their minds, as you pointed out this bias of theirs earlier:

    “Such a model would require a prodigious rate of gene birth in mammalian lineages and a ferocious rate of gene death erasing the huge number of genes born before the divergence from chimpanzee. We reject such a model as wholly implausible.”

    I wonder what Francis Bacon, who popularized the scientific method, would think about that statement?

  178. BA:

    here is another interesting piece of the paper:

    “ORF lengths.
    The orphans have a GC content of 55%, which is much higher than the average for the human genome (39%) and similar to that seen in protein-coding genes with cross-species counterparts (53%). The high-GC content reflects the orphans’ tendency to occur in gene-rich regions.
    We examined the ORF lengths of the orphans, relative to their GC-content.
    The orphans have relatively small ORFs (median 393 bp), and the distribution of ORF lengths closely resembles the mathematical expectation for the longest ORF that would arise by chance in a transcript derived from human genomic DNA with the observed GC-content (SI Fig. 4).”

    IOW, the GC content in human orphans points to them being protein-coding genes, but as they have already decided that they are not, they interpret the fact the other way round: as they are GC-rich, it is more likely that they appear to contain long enough ORFs, because a stop codon is less likely to be observed frequently (stop codons are AT-rich).

    Bias, again. You assume what you want to find.

    From Wikipedia:

    “GC ratios and coding sequence

    Within a long region of genomic sequence, genes are often characterised by having a higher GC-content in contrast to the background GC-content for the entire genome.”
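
    The stop-codon argument above is easy to check numerically. The following is a toy simulation only — bases are drawn independently at a fixed GC fraction, which ignores all real genome structure — but it shows why higher GC content alone stretches the longest chance ORF: stop codons (TAA, TAG, TGA) are AT-rich, so they thin out as GC rises from the genome-wide 39% to the orphans’ 55%:

```python
import random

STOP_CODONS = {"TAA", "TAG", "TGA"}  # note: all three stop codons are AT-rich

def random_sequence(n, gc_content, rng):
    """Random DNA with the given GC fraction (bases drawn independently)."""
    w = [gc_content / 2, gc_content / 2, (1 - gc_content) / 2, (1 - gc_content) / 2]
    return "".join(rng.choices("GCAT", weights=w, k=n))

def longest_orf(seq):
    """Length (bp) of the longest stop-free run of codons in any of the
    three forward reading frames."""
    best = 0
    for frame in range(3):
        run = 0
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] in STOP_CODONS:
                run = 0
            else:
                run += 3
                best = max(best, run)
    return best

def median_longest_orf(gc_content, n=2000, trials=200, seed=0):
    rng = random.Random(seed)
    lengths = sorted(longest_orf(random_sequence(n, gc_content, rng))
                     for _ in range(trials))
    return lengths[trials // 2]

low = median_longest_orf(0.39)   # genome-average GC content
high = median_longest_orf(0.55)  # orphan-like GC content
print(low, high)                 # higher GC -> rarer stops -> longer chance ORFs
```

    Under this toy model the orphans’ high GC is numerically consistent with either reading, which is exactly why the two sides in this thread interpret the same 55% figure in opposite directions.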

  179. So gpuccio, in layman’s terms, what you are trying to tell me is that when they wrote the paper they actually knew that an unbiased reading of GC content directly indicated that the 1,177 orphans, or at least a very high percentage of them, were active protein-encoding regions??? That is truly incredible!!! A more biased example of science would be hard to find if this is true. To quote your quote from the paper:

    “The orphans have a GC content of 55%, which is much higher than the average for the human genome (39%) and similar to that seen in protein-coding genes with cross-species counterparts (53%).”

    gpuccio as far as I can see, that just about seals the deal for Intelligent Design because, as you well know, there simply is no way for material processes, operating within the known laws of physics, especially the second law and Conservation of Information, to account for the generation of even one completely unique gene/protein, much less 1,000. 1,000 completely unique genes in humans is like overkill times 1,000. Sure, the t’s have to be crossed and the i’s dotted, i.e. orphans have to be properly matched (proteins to genes) and their isolation from primates has to be ensured, but other than that task, which is certainly easier said than done, the evidence is certainly VERY ID friendly.

  180. gpuccio, here is an article that just came out involving ATP, which reminded me that neither Szostak, nor rna, nor petrushka so much as offered a “just so” story for how the ATP molecule arose before the ATP enzyme that produces it. It would have been nice for them to at least try to do that, so as to justify using Szostak’s experiment for truly ascertaining the ability of purely material processes to generate proteins (functional or non-functional) in the first place.

    Nanomachines in the Powerhouse of the Cell: Architecture of the Largest Protein Complex of Cellular Respiration Elucidated – July 2010
    Excerpt: The total surface of all mitochondrial membranes in a human body covers about 14,000 square meters. This accounts for a daily production of about 65 kg of ATP (a little over 143 pounds).
    http://www.sciencedaily.com/re.....100414.htm

    Your Inner Locomotive Revealed – July 2010
    http://www.creationsafaris.com.....#20100706a

    further notes:

    Evolution vs ATP Synthase – Molecular Machine – video
    http://www.metacafe.com/watch/4012706

    The ATP Synthase Enzyme – exquisite motor necessary for first life – video
    http://www.youtube.com/watch?v=W3KxU63gcF4

  181.

    @bornagain77 (#172)

    Born,

    Please read carefully…

    You wrote:

    …they were not the least bit justified to remove orphans from the Gene database based primarily, and overwhelmingly…

    Didn’t happen. Nothing was actually removed. Fiction created to generate outrage and appeal to emotion. Evidence of protein coding, among others, would exclude an orphan from being removed.

    …on unwarranted neo-Darwinian assumptions…

    Regardless of what theory the hypothesis was based on, it was rigorously applied, tested and the results matched the prediction.

    In regards to reading frame conservation…

    The RFC score shows virtually no overlap between the well studied genes and the random controls (SI Fig. 5). Only 1% of the random controls exceed the threshold of RFC >90, whereas 98.2% of the well studied genes exceed this threshold. The situation is similar for the full set of 18,752 genes with cross-species counterparts, with 97% exceeding the threshold (Fig. 2 a). The RFC score is slightly lower for more rapidly evolving genes, but the RFC distribution for even the top 1% of rapidly evolving genes is sharply separated from the random controls (SI Fig. 5).

    By contrast, the orphans show a completely different picture. They are essentially indistinguishable from matched random controls (Fig. 2 b) and do not resemble even the most rapidly evolving subset of the 18,752 genes with cross-species counterparts. In short, the set of orphans shows no tendency whatsoever to conserve reading frame.
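
    For illustration, here is a deliberately simplified, hypothetical version of a reading-frame-conservation score (the score quoted above is computed over whole-genome alignments with considerably more machinery; this sketch only captures the core idea that an indel whose length is not a multiple of 3 throws every downstream base out of frame):

```python
def frame_conservation(query_aln, target_aln):
    """Toy RFC-style score for one pairwise alignment: the percentage of
    aligned target bases whose cumulative indel offset (mod 3) still
    matches the query's reading frame."""
    assert len(query_aln) == len(target_aln)
    offset = 0      # net indel length so far, mod 3
    in_frame = 0
    total = 0
    for q, t in zip(query_aln, target_aln):
        if q == "-":            # insertion relative to the query
            offset = (offset + 1) % 3
        elif t == "-":          # deletion relative to the query
            offset = (offset - 1) % 3
        else:
            total += 1
            if offset == 0:
                in_frame += 1
    return 100.0 * in_frame / total if total else 0.0

# A codon-sized (3 bp) deletion preserves the reading frame...
print(frame_conservation("ATGGCTAAA", "ATG---AAA"))   # 100.0
# ...while a 1 bp deletion shifts everything downstream out of frame.
print(frame_conservation("ATGGCTAAA", "ATG-CTAAA"))   # 37.5
```

    Selection on a real protein-coding gene tolerates the first kind of indel far more than the second, which is why genuine coding genes score near 100 and random controls do not.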

    In regards to codon substitution frequency…

    The results again showed strong differentiation between genes with cross-species counterparts and orphans. Among 16,210 genes with simple orthology, 99.2% yielded CSF scores consistent with the expected evolution of protein-coding genes. By contrast, the 1,177 orphans include only two cases whose codon evolution pattern indicated a valid gene. Upon inspection, these two cases were clear errors in the human gene annotation; by translating the sequence in a different frame, a clear cross-species ortholog can be identified.

    Again, you seem to be complaining for no other reason than that it’s based on neo-darwinism.

    However, neo-darwinism also predicts that viruses, cancer cells, etc. would mutate and become resistant to drugs. We know this occurs. Are you suggesting we should *not* take this into account when both administering existing treatments and developing new treatments because it’s based on neo-darwinism?

    From the original paper…

    Our focus has been on excluding putative genes from the human catalogs. We have not explored whether there are additional protein-coding genes that have not yet been included, although it is clear that cross-species analysis can be helpful in identifying such genes. Preliminary analysis from our own group and others suggests that there may be a few hundred additional protein-coding genes to be found but that the final total is likely to remain under ~21,000. The largest open question concerns very short peptides, which may still be seriously underestimated.

    Furthermore, it’s clear one of the goals is to clean up the human gene catalog to improve the future study of genes…

    As a result, the human gene catalog has remained in considerable doubt. The resulting uncertainty hampers biomedical projects, such as systematic sequencing of all human genes to discover those involved in disease.

    And even promote the future study of non-coding transcripts…

    Finally, the creation of more rigorous catalogs of protein-coding genes for human, mouse, and dog will also aid in the creation of catalogs of noncoding transcripts. This should help propel understanding of these fascinating and potentially important RNAs.

    Your assertion that removal would cause “neglect” is yet another crystal clear example of perpetuating the myth that darwinism hampers research. The irony is that a paper that you yourself quoted clearly and explicitly suggests otherwise.

    Finally, in regards to the second paper you referenced, it’s not clear which, if any, of the “orphan enzymes” listed are the same as the orphan genes determined to be non-coding in the first. Specifically, the first paper focused only on orphan genes already in the human genome, while the second paper references orphan enzymes in 287 species. The maximum number of valid orphan enzymes in a single species was 18, which is close to the 12 orphan genes previously identified in human beings.

  182. veilsofmaya:

    Just a couple of comments.

    The RFC score and CSF score are obviously related to the existence of homologues. I can’t see how they could be correctly applied to orphan genes, except to confirm what has already been established: that they are orphans. The GC content, instead, is an independent clue, and it points toward confirming that they are protein-coding genes.

    Whatever you say about removing or not removing and all the other considerations, I think you have a naive idea of how the scientific community works today. While it is true that any new researcher can make new points, change old interpretations and so on (that’s still true, thank God, except maybe if you mention ID), it is equally true that a big and authoritative paper like the one we are discussing is likely to heavily influence the general way of thinking and future research. That’s why researchers must be held responsible for the kind of methodology and reasoning they use when drawing conclusions from their data, because many people, even in the scientific community, will be influenced mainly by their conclusions, and not by their data. So, faulty reasoning can and must be criticized.

    Finally, I think you are right that the paper about orphan enzymatic activities does not directly support the protein-coding nature of human orphan genes, but it is an interesting piece of information anyway.

  183. veils, as gpuccio has clearly pointed out, in his usual clear and well-reasoned way, GC content was an obvious indication that orphans are most likely protein-coding genes. I listed two other unbiased methods in 178:

    codon equiprobability analysis for determining protein encoding:

    and

    Prediction of protein coding regions by the 3-base periodicity analysis of a DNA sequence:
    http://www.uncommondescent.com.....ent-358544

    Unbiased methods that could easily have been implemented, but were not. As well, as has been pointed out earlier, if they were really interested in “cleaning up” the protein-coding catalog, as you insist they were, then they should have removed all of the “accepted genes” that have not yet been properly matched to protein sequences, i.e.

    “They failed to apply their criteria consistently, i.e. they should have gone through their ‘accepted’ gene-coding sequences to eliminate all the ones that have failed to be matched directly to proteins so far (and this could be perhaps several thousand genes, given the >39% of orphan enzymatic activities of humans).”

    That you would use the RFC score and CSF score as the lead-off in trying to defend this extremely poor excuse for a scientific paper is really sad, and says a lot about their, and your, ulterior motives, since those scores in fact, as gpuccio already pointed out earlier, reflect exactly the point they want to make in the end.

    veils, it is abundantly clear that the preconceived bias of the researchers drove these results severely astray of any meaningful point to be made in the research. My question to you is: why in blue blazes are you trying so desperately to defend a paper that is so obviously void of scientific integrity? Of what possible benefit is it for you to sully your integrity by vainly trying to defend their lack thereof? What possible reward is there in defending a bankrupt materialistic theory that promises you nothing but death in the first place?

    Wake up veils!!!!

    Nickelback – Savin’ Me
    http://www.youtube.com/watch?v=jPc-o-4Nsbk

  184. BA:

    I think veilsofmaya is in absolute good faith, and that he is trying to express his own judgements on the matter. I could agree with you that he is probably sidetracked by too much faith in the scientific literature itself, a bias which can easily be excused in those who do not have to deal with it professionally (veils, I apologize if I am wrong, but I don’t think your approach is really “skeptical” enough; if you had the same experience I have with the scientific literature in my own field, maybe you would be a little more suspicious of scientific papers).

    With that, I am not encouraging hyperskepticism: all my remarks about the paper start from the paper itself, assuming good faith in the authors, but freely criticizing the explicit methodology expressed in the paper as implicitly revealing a cognitive bias.

    All that is perfectly legitimate. One may agree or not on the specific points, but it is obvious that scientific papers can and must be scrutinized for their methodology and conclusions.

  185. gpuccio, for me this paper is a very good example of exactly the type of thinking that the scientific method was supposedly set up to remove. Richard Feynman has some interesting quotes relating to this type of thinking in his essay “Cargo Cult Science”:

    The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.

    [...] It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

    Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. [...]

    In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another. [...]

    But this long history of learning how to not fool ourselves–of having utter scientific integrity–is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.

  186. gpuccio: here is a bit of trivia on protein folding:

    Youthful Aging Depends on Proper Protein Folding
    Excerpt: If you were to make small chains consisting of only five amino acids (linked like beads on a string), but using all possible combinations of the 20 different amino acids of which our proteins are composed, then the number of possible three-dimensional molecular configurations arising from all those five-unit chains would be 104,857,600,000! (Ripley, where are you when we need you?)

    The number is that large because: (1) there are 3,200,000 different ways (20 x 20 x 20 x 20 x 20) in which a 5-unit chain can be made from among the 20 amino acids;* and (2) the number of different configurations that are possible when those chains twist and turn and fold in on themselves is 32,768 for each configuration (based on certain molecular-geometric considerations).
    http://www.life-enhancement.co.....sp?id=2015
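
    The arithmetic in that excerpt can be verified directly; note that the 32,768 (= 2^15) configurations-per-chain figure is simply taken on the article’s word, not derived here:

```python
# Check the combinatorics quoted in the Life Enhancement excerpt.
chains = 20 ** 5                 # 5-unit chains from 20 amino acids
configs_per_chain = 32_768       # figure quoted in the article (= 2**15)
total = chains * configs_per_chain
print(chains, total)             # 3200000 104857600000
```
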

  187. veils it is abundantly clear to see the preconceived bias of the researchers drove these results severely astray of any meaningful point to be made in the research.

    Preconceived bias?

    How old is the earth BA?

  188. gpuccio, here is a article that just came out on Protein folding:

    Computer program takes on protein puzzle
    Though the proteins assemble themselves in nature almost instantly, the Rice team’s algorithm took weeks to run the simulation. Still, that was far faster than others have achieved.
    http://www.physorg.com/news197658752.html

  189. # 172 gpuccio

    “a) The structures of 18-19 and DX may be virtually identical, but their folding and functional properties are not …”

    Both proteins bind ATP and fold into the same structure. Their affinities for ATP differ a bit, as does their stability against heat denaturation.
    If you take the same protein from different organisms, let’s say myoglobin from a whale and a human, they will differ in their affinity for oxygen and in their stability. So would you say that they differ in their function and folding?

    “… because both are the product of directed engineering …”

    How could they be the product of directed engineering? The structure was not known at the time and could not have been guessed, since there was no sequence homology to any known protein. It was not known which amino acids in the sequence contributed to ligand binding or folding. The mutagenesis in these three rounds was random for every amino acid position and could have led to an exchange to any other amino acid at every single position of the protein. So it was random mutations coupled to selection that yielded the improvement in protein stability and affinity.

    “b) Both 18 – 19 and DX differ form the original B family for 16 AAs, 20%. That’s a lot, ….”

    Normally, it is possible to predict the 3D structure of a protein correctly when you find another protein with a known structure and ~30% sequence similarity. Thus, 80% sequence IDENTITY is a pretty good hint that their structures are very similar. (Of course, there are cases where a single amino acid change alters the structure completely, but that is maybe the topic of another discussion.) There are many cases where the structures of such pairs of proteins have been determined experimentally to support this notion.

    ” … especially if you consider that those AAs are exactly those mutations whcih were actively selected to confer both folding and function… ”

    If you look at the structure of 18-19 or DX, you see that the binding site for ATP is made up of the two aromatic amino acids phenylalanine and tyrosine and an arginine (at specific positions in the sequence). These amino acids are already the same in the original B-family, supporting the idea that the ATP-binding function was present from the beginning. Importantly, the tyrosine is also involved in ATP cleavage – so even that was there from the beginning.

    “The only information we have about function in the original B family is that they stuck to ATP enough to be separated from the other sequences.”

    I don’t get the fundamental physical difference between ‘sticking to ATP’ in the original B-family and the ATP binding of, e.g., DX. As I pointed out above, the important amino acids binding to ATP via electrostatic and aromatic stacking interactions are already in place in the original B-family.
    In more general terms, if you look at the amino acid exchanges between the B-family and 18-19, many of these exchanges are between very similar amino acids. E.g., lysines are exchanged for arginines, which, as you very well know, have similar biophysical properties. Such changes in other proteins normally go hand in hand with keeping the structure and the function.

    ” … And it’s those 3 rounds which found the necessary mutations to confer folding and “function” (in the sense of a strong binding to ATP …”

    As I pointed out above – that is not true. Furthermore, what is the reason for you to think that only strong binding counts as function? If you look at ATP affinities for natural ATP-binding proteins, e.g. the NBD domains of different ABC transporters, you find that their ATP affinities vary between ~100 nM and >1 mM, the latter being 10,000-fold weaker than protein 18-19. These are functional natural proteins depending on ATP binding …
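
    rna’s fold comparison is simple unit arithmetic — a dissociation constant of 1 mM is 10,000 times weaker (larger Kd) than 100 nM — and can be checked in one short block (the Kd values are the ones quoted in the comment, not independently sourced):

```python
# Dissociation constants in molar units; larger Kd = weaker binding.
nM = 1e-9   # nanomolar
mM = 1e-3   # millimolar
kd_tight = 100 * nM   # tight end quoted for natural ATP-binding domains
kd_weak = 1 * mM      # weak end (">1 mM")
fold_difference = kd_weak / kd_tight
print(round(fold_difference))   # 10000
```
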

    ” e) myoglobin”
    Myoglobin binds oxygen when there is a high enough oxygen concentration, diffuses, and releases oxygen when it moves into an environment with a lower oxygen concentration, thereby helping to regulate oxygen levels.
    DX in a cell binds ATP when there is ATP, diffuses, and releases ATP when ATP concentrations in its environment are low.
    Where is the fundamental difference?

    “And do you really beleieve that its incorporation in bacteria was harmful only because it was overexpressed?”

    Strictly, I cannot answer that, because they did not do the proper control experiment – expressing a natural ATP-binding protein with the same ATP affinity at the same intracellular protein levels.
    From experience, overexpression of a protein often decreases the fitness of the poor cells forced to produce it; it drains a lot of resources. If that protein also binds and thereby depletes the cell of ATP, that’s serious damage.
    Do you think that its harmfulness results from it being ‘unnatural’?

    “The experimenter’s declared purpose was to look for naturally occurring functional sequences in a random library”

    First paragraph of their paper:
    The frequency of occurrence of functional proteins in collections of random sequences is an important constraint on models of the evolution of biological proteins. Here we have experimentally determined this frequency by isolating proteins with a specific function from a large random-sequence library of known size. We selected for proteins that could bind a small molecule target with high affinity and specificity as a way of identifying amino-acid sequences that could form a three-dimensional folded state with a well-defined binding site and therefore exhibit an arbitrary specific function. ATP was chosen as the target for binding to allow comparison with known biological ATP-binding motifs …

    “why have they gone on modifying those sequences by designed evolution, if not in order to build some apparent function which obviously was not there in the beginning …”

    If the ATP-binding ability had not been there in the beginning, the sequences would never have made it through the first rounds of selection. For further arguments see above …
    Why have they gone on modifying? For biophysical experiments such as measuring affinity, determining a structure, etc., you need to produce a certain amount of the protein, normally by overexpression in bacteria, in much larger amounts than what you have in the actual selection experiments (where you produce your protein in smaller amounts by a process called in-vitro translation), and you need to be able to purify it. Thus, 18-19 was the better choice for these experiments because it was easier to handle.

  190. Petrushka, I answered that question yesterday, as to what I think the age of the earth is, on David Tyler’s post, after you had asked me about it.

    http://www.uncommondescent.com.....ent-358566

    As I find it peculiar today, I thought it very peculiar for you to ask that question yesterday, as I had just posted several posts, prior to your question, on his thread pertaining exactly to the extremely long “Intelligently Designed terra-forming of the earth” – posts that should have left no doubt whatsoever that I believe in an ancient age for the earth. But I’m curious as to why you should ask today, since it has nothing to do with the issue at hand. Are you trying to divert attention away from this biased “kick the orphan genes out into the street” paper by attacking what you falsely think to be a young-earth weakness in me, instead of trying to defend whatever scant merits the paper might or might not actually have? Do you in fact agree the paper is without true scientific integrity, as is clearly apparent to both me and gpuccio? Are you now admitting you have no recourse but to “attack the man” by trying to undermine what you falsely perceive to be a lack of integrity in my judgment? Exactly what is your reasoning in employing such a shallow, and I might add intellectually dishonest, ploy to defend a paper that is in fact not worth defending in the first place?

  191. rna, I find your response to gpuccio to be “excuses”, not reasons. The burning question is: why in the world did they even mess with the 1 in 10^12 protein if it was truly functional?

    For the 1 in 10^12 number to retain integrity as a true measure of the rarity of functional proteins, the blatant manipulation of the protein must in fact be factored into the result, or removed from the experiment entirely. Why are you trying so desperately to defend what is clearly a “jerry-rigged” result? Do you really think that if you add enough “excuses” for the manipulation it will legitimize the result? Face it, rna: they were caught with their hand in the cookie jar, and the result is no more legitimate than an insane man’s claim to being the king of America.

  192. rna, as well, I would like you to explain to me exactly how a rarity of 1 in 10^12 (a trillion) for functional vs. useless proteins would even begin to help explain the evolution of larger mammals with smaller populations… let’s say the evolution of the whales within 10 to 50 million years, perhaps?

    Whale Evolution Vs. Population Genetics – Richard Sternberg PhD. in Evolutionary Biology – video
    http://www.metacafe.com/watch/4165203

  193. Petrushka, I answered that question yesterday,

    Sorry.

    I find this site extremely hard to navigate. Threads get pushed down rather quickly, and there’s no indicator of whether a thread of interest is active. I simply lose track of comments.

    It doesn’t help that when viewing long threads with lots of comments, the entire thread has to load. On my connection it can take over a minute for a thread to load.

    Enough complaining. I’ll try to be more careful.

  194. rna (#190):

    First of all, I would like to really thank you for your detailed and very competent comments, and especially for the very serious, respectful and dedicated tone of your discourse. I really appreciate that, and believe me, it’s not so common to experience such a fruitful exchange, even in disagreement.

    That said, I must just the same disagree with you on some important points. I will try again to explain why, as clearly as possible. Then, I leave it to you. We can probably agree to disagree on those points, if you think we have made clear our respective positions. But if you have any further point to make, I will be happy to go on with the discussion.

    So, I will try to follow your arguments in the order they are given.

    a) Both proteins bind ATP and fold into the same structure. Their affinities for ATP differ a bit, as does their stability against heat denaturation. If you take the same protein from different organisms, let’s say myoglobin from a whale and a human, they will differ in their affinity for oxygen and their stability. So would you say that they differ in their function and folding?

    Well, it’s not a very important point for me to define the differences between protein 18 – 19 and protein DX. If you prefer the first, or if you think that they are grossly equivalent, that’s fine with me. I was just quoting the opinion of the researchers who created protein DX, but maybe they are in some way partial to their creature :). It is certain that protein DX was created from protein 18 – 19 through further rounds of “directed evolution” (see later for comments on that), but after all it is not a rule that every exercise in protein engineering should necessarily be very successful. It is true, also, that many of the following “experiments” (hydrolysis, in vivo testing) were carried out with protein DX. But again, this is not an important point.

    b) How could they be the product of directed engineering? The structure was not known at the time and could not have been guessed since there was no sequence homology to any known protein. It was not known which amino acids in the sequence contributed to ligand binding or folding. The mutagenesis in this three rounds was random for every amino acid position and could have lead to an exchange to any other amino acid at every single position of the protein. So it was random mutations coupled to selection that yielded the improvement in protein stability and affinity.

    This is probably the most important point of all, so I will spend some more words on it. You certainly know that protein engineering can be done in two different ways: top-down and bottom-up. In that sense, protein Top7 is a good example of top-down design (the protein was designed starting from current knowledge of protein sequences and protein folding), while our ATP-binding protein is a good example of bottom-up design. It is no accident that those two proteins are listed in SCOP under class 11 (Designed Proteins), as the first two fold classes:

    1. New fold designs (Top7)

    2. In vitro evolution products (our protein; not sure if 18 – 19 or DX, probably the first).

    Fold 2, like fold 1, includes indeed only one protein, our one, under the classification:

    Superfamily: Function-directed selections -> 1. Artificial nucleotide binding protein

    I paste here the beginning of the protein entry:

    HEADER ARTIFICIAL NUCLEOTIDE BINDING PROTEIN 28-JAN-04 1UW1
    TITLE A NOVEL ADP- AND ZINC-BINDING FOLD FROM FUNCTION-DIRECTED
    TITLE 2 IN VITRO EVOLUTION
    COMPND MOL_ID: 1;
    COMPND 2 MOLECULE: ARTIFICIAL NUCLEOTIDE BINDING PROTEIN (ANBP);
    COMPND 3 CHAIN: A;
    COMPND 4 FRAGMENT: NUCLEOTIDE BINDING DOMAIN;
    COMPND 5 ENGINEERED: YES;

    Well, that’s SCOP’s point of view. But I owe you mine.

    You say:

    “The structure was not known at the time and could not have been guessed since there was no sequence homology to any known protein.”

    In a bottom-up process, the structure is never known in advance. That’s why the process is bottom-up. You could object that a bottom-up process can start from the sequence of some known protein, in order to modify it. That’s true. But in this case, the process of engineering willingly starts from random sequences. So, it’s a bottom-up process starting from random sequences. Do we agree on that? The start is random.

    You say:

    “It was not known which amino acids in the sequence contributed to ligand binding or folding.”

    That’s true.

    You say:

    “The mutagenesis in this three rounds was random for every amino acid position and could have lead to an exchange to any other amino acid at every single position of the protein.”

    That’s true.

    You say:

    “So it was random mutations coupled to selection that yielded the improvement in protein stability and affinity.”

    That’s true.

    So, where is the problem? The problem is that “random mutations coupled to selection” is bottom-up protein engineering. It is directed engineering.

    And what is “directed”?

    The mutations are not “directed”, although I could argue that they are in some way “targeted” (a choice is made about how to produce the mutations, in how many rounds, with what rate of mutation, and so on). But that is a minor point. I can agree that the induced mutations can to some degree be considered a simulation of “natural” RV.

    The major point is about the selection.

    This is intelligent selection. Not natural selection.

    Therefore, this is directed engineering, and not spontaneous evolution, nor is it in any way a correct simulation of it.

    I am really amazed that such a striking difference seems so elusive to many intelligent people. I will try to give my explicit definitions, to better make my point:

    1) Natural selection: any system with replicators where, after some replicator has through various means developed some new “property”, that property spreads in the population as a consequence of the spontaneous improvement in replication that the new property confers.

    2) Intelligent selection: any system with replicators where, after some replicator has through various means developed some new “property”, the property is actively recognized, measured, selected and expanded in the population by the system itself, independently of any objective advantage conferred by the property on the replicator.

    Can’t you see the difference? 1 is completely different from 2, and hugely less powerful.

    2 is intelligent selection: the function “does not act”; it is only recognized, because the system has been set up to recognize it. ATP binding in the Szostak system is recognized by a purification system engineered to recognize it, but it confers no advantage in replication (in this case, the PCR replication system).

    Intelligent selection is very efficient in bottom up engineering. It can easily recognize a desired property even in minimal form, and amplify it, and develop it through rounds of mutations and selection. NS can do nothing like that, unless a truly functional property in a truly living context is first built, and unless it is capable of conferring a reproductive advantage.
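    The contrast between definitions 1 and 2 can be made concrete with a toy simulation (illustrative only: the target motif, pool sizes, mutation rate and advantage threshold are all invented parameters, not taken from any real experiment). Intelligent selection amplifies the top scorers every round regardless of replication; the natural-selection model grants a replication advantage only once the property is already strong enough to matter.

```python
import random

random.seed(0)

TARGET = "ACGTACGTACGTACGTACGT"   # hypothetical "property": similarity to this motif
ALPHABET = "ACGT"
POOL, KEEP, ROUNDS, MU = 100, 10, 30, 0.05

def score(seq):
    # how much of the "property" a replicator has: matches to the target motif
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):
    # per-base random mutation, a crude stand-in for mutagenic PCR
    return "".join(random.choice(ALPHABET) if random.random() < MU else c
                   for c in seq)

def random_pool():
    return ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
            for _ in range(POOL)]

def intelligent_selection(pool):
    # definition 2: the system itself recognizes and amplifies the property,
    # regardless of any replication advantage it confers
    for _ in range(ROUNDS):
        best = sorted(pool, key=score, reverse=True)[:KEEP]
        pool = [mutate(s) for s in best for _ in range(POOL // KEEP)]
    return max(score(s) for s in pool)

def natural_selection(pool):
    # definition 1: a replicator gains an advantage only if the property is
    # already strong enough to matter; weak, partial versions confer nothing
    for _ in range(ROUNDS):
        weights = [2.0 if score(s) >= 18 else 1.0 for s in pool]
        pool = [mutate(s) for s in random.choices(pool, weights=weights, k=POOL)]
    return max(score(s) for s in pool)

is_best = intelligent_selection(random_pool())
ns_best = natural_selection(random_pool())
print("intelligent selection best score:", is_best)
print("natural selection best score:   ", ns_best)
```

    In this toy model the intelligent-selection loop climbs close to a perfect score within a few dozen rounds, while the natural-selection pool merely drifts near the random baseline, since no partial match ever reaches the advantage threshold.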

    That’s why both protein 18 – 19 and protein DX are the product of intelligent protein engineering.

    We have in nature a very good model of intelligent protein engineering, realized through a very brilliant algorithm of random search and intelligent selection. I have often pointed to it on this blog. It’s the mechanism of antibody maturation. In it, the low affinity antibody of the primary repertoire, after the first exposure to the antigen, undergoes maturation through a process, still not completely understood, of targeted random mutation and intelligent selection, using the antigen epitopes (probably stored in the antigen presenting cells) as a selecting tool. And the results are brilliant.

    c) Normally, it is possible to predict the 3D structure of a protein correctly when you find another protein with a known structure and ~30% sequence similarity. Thus, 80% sequence IDENTITY is a pretty good hint that their structures are very similar. (Of course, there are cases where a single amino acid change changes the structure completely, but that is maybe the topic of another discussion.) There are many cases where structures of such pairs of proteins have been determined experimentally to support this notion.

    But there is no reason why this should be one of those cases. As I have already argued, these two proteins (I mean the B family ancestor and protein 18 – 19) are 80% similar because the second is derived from the first. Their history, their “common descent”, if you want, explains their similarity. But that tells us nothing about the structure, because the structure was probably selected through the rounds of mutations and intelligent selection. Otherwise, why would the ATP binding property have increased so much through those rounds?

    d) If you look at the structure of 18-19 or DX you see that the binding site for ATP is made up of two aromatic amino acids, phenylalanine and tyrosine, and an arginine (at specific positions in the sequence). These amino acids are already the same in the original B-family, supporting the idea that the ATP-binding function was present from the beginning. Importantly, the tyrosine is also involved in ATP cleavage – so even that was there from the beginning.

    Here I have to definitely disagree. I have never denied that some simple binding to ATP took place in the original sequences. That’s obvious, otherwise they would not have been selected. And the two AAs you cite may well have been responsible for that. But how can you conflate that with the high ATP binding activity which was developed later, and which reasonably depends on the acquired folding, which almost certainly was not present in the beginning? How can you deny what the authors themselves state clearly, that they introduced the mutagenic – selective rounds to increase ATP binding and improve folding?

    One possible explanation for this low level of ATP-binding is conformational heterogeneity, possibly reflecting inefficient folding of these primordial protein sequences. In an effort to increase the proportion of these proteins that fold into an ATP-binding conformation, we mutagenized the library and carried out further rounds of in vitro selection and amplification.

    And here comes the final point: why are we here, forced to speculate about the possible properties of the original sequences, or at least of the B family ancestors?

    It’s simple: because the researchers, instead of doing what was consistent with their initial purpose and with their methodological context, did another thing. Instead of purifying and studying the family B proteins, instead of defining their structure, instead of measuring their ATP binding activity, instead of trying to assess how functional those sequences derived rather directly from their random library were, they chose to change those sequences, to improve their folding and their ATP binding activity.

    That’s the simple truth. If they had acted in a methodologically correct way, we would probably not know the final truth, but at least we would be discussing the properties of a sequence which was really in a random library. Instead, we are wasting our time discussing protein 18 – 19, which is the product of mutation and intelligent selection, and speculating about how similar it could possibly be to its ancestor.

    So, again, why the mutations? Why didn’t they just expand and purify the existing sequences? They knew the exact sequences. Why change them? What is the methodological justification of such a procedure?

    e) I don’t get the fundamental physical difference between ‘sticking to ATP’ in the original B-family and the ATP binding of e.g. DX. As I pointed out above, the important amino acids binding to ATP via electrostatic and aromatic stacking interactions are already in place in the original B-family. In more general terms, if you look at the amino acid exchanges between the B-family and 18-19, many of these exchanges are between very similar amino acids. E.g. lysines are exchanged against arginines, which, as you very well know, have similar biophysical properties. Such changes in other proteins normally go hand in hand with keeping the structure and the function.

    I can agree with you on one thing: we are certainly not sure that all 16 AA changes are functionally important. Some of them could be just neutral mutations. But many, certainly, are not.
    I have already stated my answer to the other point: the binding to ATP was already present in some form, but not the folding, and the special conformation which made possible the higher binding affinity, the molecule stability, and probably even the primordial hydrolytic activity.

    f) As I pointed out above – that is not true. Furthermore, what is the reason for you to think that only strong binding counts as function? If you look at ATP affinities for natural ATP-binding proteins, e.g. the NBD domains of different ABC transporters, you find that their ATP affinities vary between ~100 nM and >1 mM, the weakest being ~10,000-fold weaker than protein 18-19. These are functional natural proteins depending on ATP binding …

    What is not true? You are certainly not implying that those three rounds did not accomplish anything. Then why were they performed? Just to spend time?

    Regarding the problem of ATP affinity, your example (abc transporters), which you certainly know much better than I can, is a very good demonstration of the difference between a true function and simple biochemical binding of a molecule. Just a couple of hints from Wikipedia for those who are reading (I am sure you don’t need them):

    “The common feature of all ABC transporters is that they consist of two distinct domains, the transmembrane domain (TMD) and the nucleotide-binding domain (NBD).”

    “The structural architecture of ABC transporters consists minimally of two TMDs and two ABCs. “

    “The ABC domain consists of two domains, the catalytic core domain similar to RecA-like motor ATPases and a smaller, structurally diverse α-helical subdomain that is unique to ABC transporters. The larger domain typically consists of two β-sheets and six α-helices, where the catalytic Walker A motif (GXXGXGKS/T where X is any amino acid) or P-loop and Walker B motif (hhhhD, of which h is a hydrophobic residue) is situated. The helical domain consists of three or four helices and the ABC signature motif, also known as LSGGQ motif, linker peptide or C motif. The ABC domain also has a glutamine residue residing in a flexible loop called Q loop, lid or γ-phosphate switch, that connects the TMD and ABC. The Q loop is presumed to be involved in the interaction of the NBD and TMD, particularly in the coupling of nucleotide hydrolysis to the conformational changes of the TMD during substrate translocation. The H motif or switch region contains a highly conserved histidine residue that is also important in the interaction of the ABC domain with ATP. The name ATP-binding cassette is derived from the diagnostic arrangement of the folds or motifs of this class of proteins upon formation of the ATP sandwich and ATP hydrolysis.”
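    As an aside, the Walker A consensus quoted above (GXXGXGKS/T) translates directly into a pattern search over a protein sequence; a minimal sketch (the example fragment is invented for illustration):

```python
import re

# Walker A / P-loop consensus from the quote above: G-x-x-G-x-G-K-(S/T),
# where x stands for any amino acid
WALKER_A = re.compile(r"G..G.GK[ST]")

fragment = "MILLRAGPSGSGKSTLLNALA"   # hypothetical fragment containing a P-loop
hit = WALKER_A.search(fragment)
print(hit.group() if hit else "no Walker A motif found")
```

    With this fragment the search returns GPSGSGKS; a sequence lacking the spaced glycine/lysine pattern returns no match.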

    “Dimer formation of the two ABC domains of transporters requires ATP binding.”

    “Nucleotide binding is required to ensure the electrostatic and/or structural integrity of the active site and contribute to the formation of an active NBD dimer.[35] Binding of ATP is stabilized by the following interactions: (1) ring-stacking interaction of a conserved aromatic residue preceding the Walker A motif and the adenosine ring of ATP,[36][37] (2) hydrogen bonds between a conserved lysine residue in the Walker A motif and the oxygen atoms of the β- and γ-phosphates of ATP and coordination of these phosphates and some residues in the Walker A motif with the Mg2+ ion,[24][28] and (3) γ-phosphate coordination with the side chain of serine and backbone amide groups of glycine residues in the LSGGQ motif.[38] In addition, a residue that suggests the tight coupling of ATP binding and dimerization is the conserved histidine in the H-loop. This histidine contacts residues across the dimer interface in the Walker A motif and the D loop, a conserved sequence following the Walker B motif.”

    And, especially, this final point:

    “ABC transporters are active transporters, that is, they require energy in the form of adenosine triphosphate (ATP) to translocate substrates across cell membranes. These proteins harness the energy of ATP binding and/or hydrolysis to drive conformational changes in the transmembrane domain (TMD) and consequently transport molecules.”

    So, the real problem is not how strongly you bind ATP, but rather what you do as a consequence of that binding.

    g) Do you think that its harmfulness results from it being ‘unnatural’?

    No, I think that its harmfulness derives from its being simple and gross, and lacking a true function. If its binding to ATP were lower, or if it were simply “less expressed”, it would just be useless.

    h) Why have they gone on modifying? For biophysical experiments such as measuring affinity, determining a structure, etc., you need to produce a certain amount of the protein, normally by overexpression in bacteria, in much larger amounts than what you have in the actual selection experiments (where you produce your protein by a process called in-vitro translation, in smaller amounts), and you need to be able to purify it. Thus, 18-19 was the better choice for these experiments because it was easier to handle.

    Purifying and expressing the protein could have been done without intentionally modifying it through rounds of mutational PCR. They could just have used simple PCR. Whatever you can say, the introduction of mutations before selection was a deliberate choice, an intentional act of engineering, contrary to the aims of the experiment, and totally unjustified.

    And I agree that protein 18 – 19 was “easier to handle”. That’s exactly my point. It should not have been.

    In the end, I want again to thank you for your part in this discussion. Any possible “strength” in my words is in no way directed against you, but simply motivated by my sincere convictions about the subject.

  195. The major point is about the selection.

    This is intelligent selection. Not natural selection.

    Therefore, this is directed engineering, and not spontaneous evolution, nor is it in any way a correct simulation of it.

    I am really amazed that such a striking difference seems so elusive to many intelligent people.

    Apparently it eluded Darwin, since artificial selection gave him the idea for natural selection.

    So, again, why the mutations? Why didn’t they just expand and purify the existing sequences? They knew the exact sequences. Why change them? What is the methodological justification of such a procedure?

    Why not? What is the justification for excluding mutation and selection?

  196. Petrushka:

    Why not? What is the justification for excluding mutation and selection?

    Obviously, because the aim of the study was to analyze how many functional sequences could be found in a random library, not how many functional sequences could be found after a process of mutation and selection.

  197. Petrushka:

    Apparently it eluded Darwin, since artificial selection gave him the idea for natural selection.

    And inference by analogy, it seems… Just like ID! :)

  198. Obviously, because the aim of the study was to analyze how many functional sequences could be found in a random library, not how many functional sequences could be found after a process of mutation and selection.

    I see no reason why a study can’t accomplish several related goals.

    Finding random functional sequences would have purely academic interest, but finding random sequences with minimal function that could be enhanced through mutation and selection would be a much richer and more evocative finding.

    It’s something that would be an obvious follow-up study under any circumstances.

    It speaks directly to the question of whether protein functionality is a gradient that can be traversed through mutation and selection.

  199. Petrushka:

    A study can accomplish several related goals, provided that the different goals are clearly stated and that the conclusions are kept well separated. That is not the case in that study.

    And I believe you can traverse any space, gradient or not, continuous or not, through the right quantity and quality of intelligent engineering (including RV + intelligent selection). Otherwise, intelligent agents, including humans, could never design proteins.

    The important thing is not to attribute the results of intelligent engineering to randomness, or to RV + NS. That’s simply wrong.

    IOW, research must be honest and clear, in its aims, in its procedures, in its methodology, in its epistemology, and in its conclusions.

  200. The important thing is not to attribute the results of intelligent engineering to randomness, or to RV + NS. That’s simply wrong.

    Are you suggesting that the induced variation was not random?

  201. Petrushka:

    Now, stop kidding. We have better things to do.

    If you want, and if you can, you will find all my arguments very clearly in my #195 (or in the other parts of my exchange with rna).

    As you perfectly know.

  202.

    @gpuccio (#177)

    You wrote:

    The only test was if new genes were new, which is a tautology

    This is a highly simplistic interpretation which ignores much of the research performed.

    01. The entire catalog was filtered from scratch, beginning with the assumption that none of the genes were orphans. A new protocol was developed whose criteria focused on the human, mouse and dog genomes due to the high quality of sequence data available. Development of this specific process was a key part of the study as…

    A. It would be part of methodology for evaluating future proposed additions to the human gene catalog.

    B. It identified pseudogenes that slipped into the Ensembl catalog.

    C. It identified numerous errors in human genome annotations.

    D. It identified 36 putative genes as valid genes, including 10 primate specific.

    E. It allowed for more detailed analysis and comparison of the properties of the resulting orphans with ortholog and random controls, rather than merely determining they were orphans.

    From the paper….

    Finally, we note that the careful filtering of the human gene catalog above was essential to the analysis above, because it eliminated pseudogenes and artifacts that would have prevented accurate analysis of the properties of the orphans.

    02. When this process was applied to the Ensembl (v38) catalog, an additional 598 genes were found due to more accurate identification of cross-species counterparts. This was due to the use of the more accurate dog and mouse sequence data. The net loss from the filtering process was due to attrition, not merely removal of orphans previously classified as such using earlier methods.

    [If it wasn't a tautology], the paper would have been something like: “Let’s see how many of human orphan genes, which are obviously new, have some independent demonstration of a corresponding protein in scientific literature”.

    It’s no longer obvious that they are new. This is because what we know about genes has changed significantly since they were added. That any of these ORFs are new genes is the goal of some other research project. In fact, as indicated above, the filtering process would be helpful starting point for such a project.

    My complaint is that they categorize as implausible what they are observing (1000 new ORFs, potential protein coding genes, in humans) only because they can’t explain how they could have arisen in such a short time by RV and NS.

    Having the potential to code proteins is not the same as positive experimental evidence that a majority of them code proteins.

    Other researchers observed 12 of the orphans identified in the study had been previously identified as coding. This is not implausible.

    Nor is the fact that over 1,000 human ORFs which are not present in other specific mammals had been entered into a human gene catalog before specific recent discoveries were made. While each specific ORF is a potential gene, as a group they exhibit specific properties that make them unlikely to be actually protein coding.

    Specifically…

    Recent studies have made clear that the human genome encodes an abundance of non-protein-coding transcripts (1–3). Simply by chance, noncoding transcripts may contain long ORFs. This is particularly so because noncoding transcripts are often GC-rich, whereas stop codons are AT-rich. Indeed, a random GC-rich sequence (60% GC) of 2 kb has a ~50% chance of harboring an ORF ≥400 bases long
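    The effect described in the quote is easy to check with a Monte Carlo sketch (illustrative only: it uses a simple ORF definition of in-frame ATG-to-stop, scans only the three forward frames, and the trial count and the 60% vs 40% GC compositions are arbitrary choices, so the numbers will only roughly track the paper's claim). Because the three stop codons are AT-rich, stop codons become rarer as GC content rises, so long stop-free stretches, and hence long chance ORFs, become far more common:

```python
import random

random.seed(1)
STOPS = {"TAA", "TAG", "TGA"}

def random_seq(n, gc):
    # per-base composition: G = C = gc/2, A = T = (1 - gc)/2
    w = [(1 - gc) / 2, (1 - gc) / 2, gc / 2, gc / 2]
    return "".join(random.choices("ATGC", weights=w, k=n))

def longest_orf(seq):
    # longest in-frame ATG...stop stretch (nt), forward frames only
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS:
                if start is not None:
                    best = max(best, i + 3 - start)
                start = None
    return best

def p_long_orf(gc, trials=300, length=2000, cutoff=400):
    # fraction of random sequences harboring an ORF >= cutoff bases
    return sum(longest_orf(random_seq(length, gc)) >= cutoff
               for _ in range(trials)) / trials

p_gc = p_long_orf(0.60)   # GC-rich, as in the quoted paper
p_at = p_long_orf(0.40)   # more AT-rich, for contrast
print("P(ORF >= 400 nt) at 60% GC:", p_gc)
print("P(ORF >= 400 nt) at 40% GC:", p_at)
```

    Under this stricter definition the GC-rich probability comes out somewhat below the quoted ~50%, but the qualitative point survives: long ORFs arise by chance at an appreciable rate in GC-rich sequence and almost never in AT-rich sequence.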

    Furthermore, out of these 1000 ORFs, the paper is *not* suggesting that any one in particular does not have the potential to be a new gene. Instead, they are suggesting that, in the absence of any other positive experimental evidence, it’s unlikely that a majority of them would be protein coding. Nor is this solely based on addition and deletion rates specific to darwinism. The recommendation is that, unless research suggests otherwise, they should be reclassified as non-coding, just as ORFs external to the catalog would not be added given what we now know.

    Nor would this be a barrier to re-entry for ORFs should other experimental evidence be found in the future.

    Finally, reclassification does not exclude an ORF from further study. In fact, it seems quite the opposite. To quote the paper…

    Finally, the creation of more rigorous catalogs of protein-coding genes for human, mouse, and dog will also aid in the creation of catalogs of noncoding transcripts. This should help propel understanding of these fascinating and potentially important RNAs.

    That you see the resulting net attrition as “hate” appears to be caused by constant misrepresentations which have been addressed time and time again.

  203. veilsofmaya:

    I disagree with your reading, but I am afraid we cannot go on forever on the same points.

    I want to specify that I have nothing to object to in your points 01 and 02. My objections were only relative to the discussion about the 1000 or so orphans: the previous filtering operations are OK for me. I never said anything critical about those passages, so I don’t understand why you mention that here: just to show that the authors are good guys?

    It’s no longer obvious that they are new. This is because what we know about genes has changed significantly since they were added. That any of these ORFs are new genes is the goal of some other research project. In fact, as indicated above, the filtering process would be helpful starting point for such a project.

    I am afraid I cannot even start to understand what you are saying here. Why “it’s no longer obvious that they are new”? They are new. They have no known homologue. That’s why they are called orphans. Nobody, not even the paper’s authors, denies that.

    And in what sense “what we know about genes has changed significantly since they were added”? I am not aware of any such change.

    “That any of these ORFs are new genes is the goal of some other research project”. What does that mean? ORFs are potentially considered genes, until differently proven. That has not changed. The (correct) filtering process in points 1 and 2 has called into question some human ORFs as protein coding genes on the basis of reasonable arguments. That’s not the same for the last part, regarding the orphans. There, the simple argument of their being orphans in just the human species, not allowing enough darwinian time for their evolution, has been considered sufficient. Which is the reason for my criticism.

    Having the potential to code proteins is not the same as positive experimental evidence that a majority of them code proteins.

    But positive experimental evidence of protein coding is lacking for a lot of ORFs, human and not human. If researchers had to follow the criteria you suggest, the databases of genes should be drastically reduced!

    And so on…

    I am sorry, but I cannot go on forever with that. You are free to consider that paper as methodologically correct. I don’t.

  204. Now, stop kidding. We have better things to do.

    The alternative is that you consider artificial selection somehow tainted. I suppose that’s to maintain the assertion that the result is intelligently designed.

  205. Petrushka:

    What do you mean? Please, be less cryptic and sometimes spend a few more words to clarify your thoughts!

    The random variation induced by mutagenic PCR is random (targeted, but random).

    The intelligent selection is intelligent selection.

    RV + IS = Intelligent protein engineering. (What do you mean by “tainted”?)

    I had already stated all that, and you had already read it.

    I paste it here again, in case you are too lazy to go back to my post:

    “So, where is the problem? The problem is that “random mutations coupled to selection” is bottom up protein engineering. It is directed engineering.

    And what is “directed”?

    The mutations are not “directed”, although I could argue that they are in some way “targeted” (a choice is made about how to produce the mutations, in how many rounds, with what rate of mutation, and so on). But that is a minor point. I can agree that the induced mutations can to some degree be considered a simulation of “natural” RV.

    The major point is about the selection.

    This is intelligent selection. Not natural selection.

    Therefore, this is directed engineering, and not spontaneous evolution, nor is it in any way a correct simulation of it.

    I am really amazed that such a striking difference seems so elusive to many intelligent people. I will try to give my explicit definitions, to better make my point:

    1) Natural selection: any system with replicators where, after some replicator has developed some new “property” through various means, such property is expanded in the population as a consequence of the spontaneous improvement in replication that the new property confers

    2) Intelligent selection: any system with replicators where, after some replicator has developed some new “property” through various means, the property is actively recognized, measured, selected and expanded in the population by the system itself, independently of any objective advantage conferred by the property on the replicator

    Can’t you see the difference? 1 is completely different from 2, and hugely less powerful.

    2 is intelligent selection: the function “does not act”; it is only recognized, because the system has been set up to recognize it. ATP binding in the Szostak system is recognized by a purification system engineered to recognize it, but it does not confer any advantage in replication (in this case, the PCR replication system).

    Intelligent selection is very efficient in bottom up engineering. It can easily recognize a desired property even in minimal form, and amplify it, and develop it through rounds of mutations and selection. NS can do nothing like that, unless a truly functional property in a truly living context is first built, and unless it is capable of conferring a reproductive advantage.

    That’s why both protein 18 – 19 and protein DX are the product of intelligent protein engineering.”

  206.

    @gpuccio (#204)

    The point I’ve been making on this entire thread is that the options or conclusions presented are not supported by the papers cited.

    If you have an opinion, that’s fine. However, as I’ve illustrated, “hate” for orphan ORFs, the claim that genes were actually removed using this particular methodology alone, and the claim that removal would prevent further study are clearly not evident in this paper.

    Furthermore, papers have been cited as providing support despite not actually being directly related. That they are “interesting” does not mean they support your position.

    You wrote:

    I never said anything critical about those passages,

    You implied the research paper was merely a tautology, which it’s clearly not.

    Again to quote the paper…

    Specifically, it suggests that nonconserved ORFs should be added to the human gene catalog only if there is clear evidence of an encoded protein. It also provides a principled methodology for evaluating future proposed additions to the human gene catalog. Finally, the results indicate that there has been relatively little true innovation in mammalian protein-coding genes.

    These passages of the study showed the specific properties of non-conserved ORFs identified using this particular methodology matched those of random controls. This was in strong contrast to conserved ORFs.

    I am afraid I cannot even start to understand what you are saying here. Why “it’s no longer obvious that they are new”? They are new. They have no known homologue. That’s why they are called orphans. Nobody, not even the paper’s authors, denies that.

    I’m afraid that you’re not trying very hard. They are obviously new ORFs, but not necessarily genes that code new proteins, which is the subject of the entire paper.

    There, the simple argument of their being orphans in just the human species, not allowing enough darwinian time for their evolution, has been considered sufficient. Which is the reason for my criticism.

    But what is this criticism based on? Are you assuming they are new protein-encoding genes because you’re assuming they were “designed”? This is not evident. The paper is not saying these orphans cannot be found to be protein coding in some explicit way. Again, this is the topic of other research projects.

    And in what sense “what we know about genes has changed significantly since they were added”? I am not aware of any such change.

    See references (1-3) from the paper.

    “That any of these ORFs are new genes is the goal of some other research project”. What does that mean? ORFs are potentially considered genes, until differently proven.

    Other research projects are how 12 of the 1,000 or so ORFs in question were determined to be protein coding. The goal of this project was to set a baseline on which other research could build. Nor would the application of this methodology exclude such discoveries.

    But positive experimental evidence of protein coding is lacking for a lot of ORFs, human and not human.

    Which is precisely the point of the paper. Again, you seem to suggest that we should assume these orphans code proteins because you believe they are “designed” rather than based on experimental evidence. Merely being GC-rich is not a clear indicator of coding proteins.

    If researchers had to follow the criteria you suggest, the databases of genes should be drastically reduced!

    If the methodology were followed, the number of genes would be

    A. Reduced due to attrition, not merely removal of orphans. The database would be more accurate.

    B. Experimental evidence that showed an orphan was protein coding would immediately cause it to be reinstated. Again, this is setting a baseline by which all positive data would then be applied.

    Instead, you seem to imply it would somehow permanently ban these orphans from being added, which is clearly not the case.

  207. veilsofmaya:

    excuse me, but I believe your confusion is huge. I cannot follow you any more on that line. I apologize.

    Just one example:

    “But positive experimental evidence of protein coding is lacking for a lot of ORFs, human and not human.”

    Which is precisely the point of the paper. Again, you seem to suggest that we should assume these orphans code proteins because you believe they are “designed” rather than based on experimental evidence. Merely being GC-rich is not a clear indicator of coding proteins.

    The point of the paper?? I am afraid you are confusing ORFs with orphans. I am saying that “positive experimental evidence of protein coding is lacking for a lot of ORFs, human and not human”, and you answer that I “seem to suggest that we should assume these orphans code proteins” because I believe they are “designed”!!!

    This is a complete non sequitur.

    All the best.

  208. Can’t you see the difference? 1 is completely different from 2, and hugely less powerful.

    That’s the question I’d address.

    In actual living things, natural selection addresses all changes to all parts of the genomes of all individuals in parallel. Artificial selection essentially waits for some specific advantage to appear.

    It’s the difference between a targeted search, which evolution isn’t, and exploitation of fitness gradients, without targets.

  209. gpuccio, here is an excellent article on protein folding (chaperonins) that just came out on Crevo:

    Proteins Fold Who Knows How
    Excerpt: New work published in Cell shows that this “chaperone” device speeds up the proper folding of the polypeptide when it otherwise might get stuck on a “kinetic trap.” A German team likened the assistance to narrowing the entropic funnel. “The capacity to rescue proteins from such folding traps may explain the uniquely essential role of chaperonin cages within the cellular chaperone network,” they said. GroEL+GroES therefore “rescues” protein that otherwise might misfold and cause damage to the cell.
    The GroEL barrel and its GroES cap spend 7 ATP energy molecules opening and closing. The process can work in reverse, taking a misfolded protein and unfolding it as well. It might take several rounds for a complex protein to reach its native fold. These chaperonins operate in bacteria as well as higher organisms – and they are not the only chaperones. “Bacterial cells generally contain multiple, partly redundant chaperone systems that function in preventing the aggregation of newly synthesized and stress-denatured proteins,” the authors said. “In contrast to all other components of this chaperone network, the chaperonin, GroEL, and its cofactor, GroES, are uniquely essential, forming a specialized nano-compartment for single protein molecules to fold in isolation.”
    http://www.creationsafaris.com.....#20100709a

    So gpuccio, on top of multiple layers of error correction that prevent “random changes” from occurring in the DNA and amino acid sequences in the first place, we now have redundant layers of folding machines preventing the proteins from exploring “random” structures as well.

    And just how is “the fact” of evolution supposed to occur if it is prevented from occurring in the first place?

  210. Petrushka you state:

    “It’s the difference between a targeted search, which evolution isn’t, and exploitation of fitness gradients, without targets.”

    That’s the theory, yet the evidence says that even “without targets” evolution goes downhill.

    Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness – May 2010
    Excerpt: Despite the theoretical existence of this short adaptive path to high fitness, multiple independent lines grown in tryptophan-limiting liquid culture failed to take it. Instead, cells consistently acquired mutations that reduced expression of the double-mutant trpA gene. Our results show that competition between reductive and constructive paths may significantly decrease the likelihood that a particular constructive path will be taken.
    http://bio-complexity.org/ojs/.....O-C.2010.2

    Testing Evolution in the Lab With Biologic Institute’s Ann Gauger – audio
    http://www.idthefuture.com/201.....lab_w.html

  211. I’ve been following this discussion along these last few days. Szostak’s article appears relatively old—2000, I think.

    Here’s a proposal to the NIH for a project involving ORFans. It was submitted this year:

    DESCRIPTION (provided by applicant): The majority of genes in bacterial genomes, even in species for which extensive experimental evidence is available, are of hypothetical or unknown functions. The aims of this proposal are to investigate this enigmatic class of genes by elucidating the source and functions of “ORFans”, i.e., sequences within a genome that encode proteins having no homology (and often no structural similarity) to proteins in any other genome. Moreover, the uniqueness of ORFan genes prohibits use of any of homology-based methods that have traditionally been employed to establish gene function. Thus, these genes present a major challenge to discovering their roles in bacterial genomes. In many respects, these genes constitute the most intriguing portion of bacterial genomes because they give clues to how new genes originate, and likely contribute to the remarkable diversification and adaptation of bacteria. Although it has been hypothesized that ORFans might represent non-coding regions rather than actual genes, we have recently established that the vast majority of ORFans present in the E. coli genome are under selective constraints and encode functional proteins. By combining experimental and bioinformatic approaches, the present proposal will analyze the origins, functions and structural properties of ORFans, and how they have assumed key roles in cellular function.

    I think this rather seals the deal for Darwin’s demise.

  212. The person submitting the project was Howard Ochman, at the University of Arizona.

  213. PaV:

    Thank you indeed for this very pertinent contribution, which very clearly confirms the points BA and I were trying to make.

    I hope veilsofmaya may appreciate it too :)

  214.

    This thread, and its implications, has been over the top. And now, with this latest post from PaV, it’s simply astounding. Thanks GP and BA (and Petrush and Veils) for talking it through.

    Very illuminating.

  215. PaV thanks for finding the paper directly implicating functionality for ORFans. Amazing what a little light can do. 8)

  216. Here’s another paper:

    Origin of Primate Orphan Genes: A Comparative Genomics Approach

    http://mbe.oxfordjournals.org/cgi/reprint/26/3/603

    The Wikipedia site on orphan genes has almost nothing: two small entries, maybe five sentences altogether.

    Reading through this thread, I get the feeling that this is the dirty little secret that Darwinists don’t want to talk about, so devastating to their theory is it.

    The above paper talks about TEs and gene duplications as possible mechanisms. But there remain {at least} about 15 genes which appear to have developed de novo. Well, there’s this other paper: Relaxed Purifying Selection and Possibly High Rate of Adaptation in Primate Lineage-Specific Genes [http://gbe.oxfordjournals.org/cgi/reprint/2/0/393] that wants to find some kind of answer to how these orphan genes arose de novo: was it diminished purifying NS—that is, NS leaving those nasty, or not so nasty, mutations alone? But this doesn’t appear to be the case. They rule it out. So, Option Two: positive NS. They ran two tests for positive NS: one showed no positive NS taking place, and the other showed a little—but there were questionable elements to it.

    So, basically, they’re up a creek without a paddle. Here are these de novo genes, and there’s only a hint that NS is acting differently in the origination process. And even if it were shown that positive selection is involved, it would have to be taking place at a rate 4 times normal—how then would they explain that? (And, of course, there’s then Nachman’s Paradox and Haldane’s Dilemma to contend with on a gargantuan scale.) So they are definitely in trouble here, and I don’t think they want to talk about it publicly. Again, it’s their dirty little secret.

    Finally, as a general commentary, this appearance of distinct species-specific gene structure reminds me of the rise of the Neutral Theory, Kimura’s attempt to deal with the immense polymorphism found by the electrophoresis studies of the ’60s, none of which was “predicted” by neo-Darwinism. As we know, the Neutral Theory really ended up being “non-Darwinian”, and was attacked as such. Here we go again. Molecular biology providing us with information that is completely in discord with dominant neo-Darwinian theory—AND, the “gene duplication with neutral drift” scenario that is so comfortable to the boys—and this time around they are carefully trying to shove it under the rug.

    But then there’s Doug Axe, and his devastating analysis. One would hope that some time soon a bold group of biologists would say, “Enough is enough!” Alas.

  217. One final note, full disclosure: as I was looking around at papers, I noticed that Howard Ochman first made the claim about E. coli ORFan function in a 2004 paper—although the part about their still being under selective constraint was not in it.

  218. PaV:

    About how much a previous paper can influence future research:

    From the paper you cite at #217:

    “It has recently been argued that most of the annotated human orphan proteins are likely to be spurious ORFs that are not functional (Clamp et al. 2007). Here we only considered human gene products that showed significant similarity to putative macaque and chimpanzee proteins and, with this data set, we reached quite different conclusions regarding the possible functionality of orphan genes.”

    IOW, in this paper the 1000+ purely human orphans were not considered.

    So, the Clamp paper has already influenced the research approach of others, and will continue to do so.

  219. gpuccio:

    As I say, it’s their dirty little secret. Wouldn’t the sensible thing to do be to test these human orphan genes for function? They don’t seem to be inclined to do anything like that.

  220.

    @gpuccio (#208)

    gpuccio,

    I apologize if I’ve put words in your mouth.

    As I understand it, ORFs have the potential to be protein coding. Orphans are ORFs that are assumed to code proteins or have experimental evidence of coding proteins, but are not found in other species.

    The question is, should specific ORFs be considered part of the human genome, given we now know that being GC-rich alone is not necessarily a good indication of protein coding? In other words, are they really orphan genes, or ORFs that do not code proteins?

    Now, to the quote I referenced, expanded to clarify.

    But positive experimental evidence of protein coding is lacking for a lot of ORFs, human and not human. If researchers had to follow the criteria you suggest, the databases of genes should be drastically reduced!

    As I’ve mentioned in this thread, I’m not making a positive argument. Instead, I’m suggesting that the claims made are NOT supported by the specific paper cited. So, when I’m referring to ORFs that lack experimental evidence, I’m referring to orphans which are non-conserved, because this is what the paper specifically indicates.

    Perhaps you’re suggesting that allowing conserved ORFs that do not have positive experimental evidence to remain is the “problem”? But this seems unlikely, given that the paper is clear regarding this issue. So what other reason do you have?

    To quote the paper….

    However, there is currently no scientific justification for excluding ORFs simply because they fail to show evolutionary conservation; the alternative hypothesis is that these ORFs are valid human genes that reflect gene innovation in the primate lineage or gene loss in other lineages.

    While orphans were studied as a group, it’s the specific characteristics they exhibit which form the basis for the proposed methodology. If you only read the summary, I can see how you might conclude orphans are being singled out for merely being orphans. However, in the case of this paper, the use of the term ‘orphans’ is referring to a group of ORFs with specific properties.

    Remember, my objection was to Bornagain77’s conclusions based on the science daily article and additional articles which were not related. This is precisely the kind of assumption I’ve been referring to, which is visible on this thread and others.

    In regard to the birth and death rate between chimpanzees and humans, this was a hypothesis which appears to be supported by other experimental evidence in published literature. Nor would the resulting methodology prevent new experimental evidence from including these ORFs in the future.

    Finally, the paper clearly indicates that removal would help create a catalog of non-coding ORFs for further study.

    Again, no positive argument here. I’m only noting that the papers in question do not support the claims made. In fact, they suggest otherwise.

  221.

    @pav (#212)

    Pav,

    You’ve only quoted the summary of the NIH project on orphans. This leaves several open questions.

    - What methodology was used to identify these ORFs as orphans?
    - Was there any overlap of the orphans in this project and the paper behind the article Bornagain77 referenced?
    - What properties did these specific orphans exhibit which indicated they were likely to code proteins and are they present in all orphans, including those in the human genome?

    This information is absent from the summary of the project.

    As such, it’s unclear if the project has any bearing on the conclusions reached by the “Distinguishing protein-coding and noncoding genes in the human genome” paper, or is a representation of orphans as a whole. For example, are the rates of evolution in bacteria the same as in human beings or other species?

    In other words, even if a vast majority of orphans in bacteria have been determined to be protein coding (by some means absent from the summary), this does not mean that the vast majority of orphans as a whole are protein coding.

    I think this rather seals the deal for Darwin’s demise.

    While I can see how you might assume this is the case, it appears to be just that: an assumption. Of course, despite being just a summary, this doesn’t seem to have prevented you from referencing it anyway.

  222.

    @Pav (#217)

    Here, it looks like you’ve found something closer to the actual paper we’ve been discussing (in the realm of primates, rather than bacteria) which you think supports the demise of Darwinism.

    You wrote:

    Reading through this thread, I get the feeling that this is the dirty little secret that Darwinists don’t want to talk about, so devastating to their theory is it.

    The above paper talks about TE’s and gene duplications as possible mechanisms. But there remain {at least} about 15 genes which appear to have developed de novo.

    First, mutation at a higher rate than predicted is not “devastating” for Darwinism, as our predictions are based on the specific mechanisms we currently know of and our specific understanding of how they are applied.

    What would have been “devastating” was the discovery that each organism had its own form of DNA based on completely different molecules. This is because DNA, and the role it plays in evolution, had yet to be discovered when Darwinism was formulated.

    Second, would this “secret” include the experimental evidence of protein coding for 12 of the orphans I’ve repeatedly mentioned several times in this thread? Is repeating or publishing findings what you’d expect someone to do if they wanted to keep them a “secret?”

    Finally, as for the paper you referenced, the information it contains represents knowledge we had yet to gain. Specifically, at some point in the past, we did not know younger genes seem to mutate faster than older genes.

    However, the very same process you claim has supposedly put Darwinism “up a creek without a paddle” could turn around and provide Darwinism paddles in spades. This is the nature of discovery.

    You must assume that we have the ability to discover newer genes mutate faster yet lack the ability to explain them. In other words, you must assume explanations do not exist or they cannot be discovered.

    On what basis have you reached this conclusion?

  223.

    @PaV (#220)

    You wrote:

    Wouldn’t the sensible thing to do be to test these human orphan genes for function? They don’t seem to be inclined to do anything like that.

    PaV,

    Wouldn’t that be the sensible thing for ID to do as well? Yet nearly all of the research cited here is published by scientists who are supposedly hiding things. This includes the experimental evidence for protein coding in 15 known orphan genes in the papers cited. How is this possible if they are not “inclined to do anything like that?”

    Furthermore, when will ID explain why the designer chose to mutate genes at just the specific rate we observe? After all, if the designer actually chooses a particular rate, any rate could have been selected, including changing tens of thousands of genes all at once. Why would the level of mutation be even remotely close to what Darwinism predicts?

    How does ID explain this?

    When will ID explain the method the designer used to determine which genes to change and the specific order to change them? Clearly, such information would be incredibly useful in a wide range of applications, from designing organisms to clean up oil spills, create new energy sources, synthesize drugs, etc.

    Also, how did the designer change just the right genes while leaving the rest completely unchanged? Surely, such information would be incredibly useful in gene therapy, DNA repair, targeting specific kinds of diseases such as cancer, etc.

    Wouldn’t explaining these observations be the sensible things for ID to do? However, ID “doesn’t seem to be inclined to do anything like that.”

    In fact, as I’ve claimed before, this lack of explanation is why ID is a convoluted elaboration of Darwinism.

  224. Dear Veils:

    Neo-Darwinism is meaningless, passe, done with.

    I’ve been reading up on ORFans, and this gets you into what’s called RNA editing, which occurs on the pre-mRNA strands. With RNA editing in play, protein sequences are to a degree now disconnected from the genomic sequence we observe. Coupling this with the tremendous level of variation, intraspecifically, that genome wide studies have shown, there is no longer room for what we call “population genetics”. It’s become meaningless.

    So, if you want to ask about “rates”, well that’s just a game that molecular biologists play using the assumption that they really understand what’s happening at the genomic level. This just isn’t so. Older, younger genes; faster, slower rates. All this means is that if you take all the mutations found in some genomic line, and divide it by the time since it split off from its last known ancestor, it turns out to be higher/lower than average. Garbage in; garbage out.
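
    For concreteness, the “rate” being dismissed here is, mechanically, just a quotient. A minimal sketch of that calculation (the function name and numbers are invented for illustration, not taken from any study):

```python
def substitution_rate(mutations: int, divergence_time_my: float) -> float:
    """Crude per-lineage rate: mutations observed in a genomic line,
    divided by the time (in millions of years) since it split off
    from its last known ancestor. Hypothetical illustration only."""
    return mutations / divergence_time_my

# A hypothetical lineage with 120 substitutions over 6 million years:
print(substitution_rate(120, 6.0))  # 20.0 substitutions per million years
```

    Whether averaging over a whole lineage like this is meaningful is exactly what is being disputed above; the arithmetic itself is this simple.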

    In fifty years, supposing the world is still here, they will look back at the articles written over the last twenty years, and they’ll be rolling on the floor laughing so silly will the thinking appear to them.

    When you have RNA editing that allows insertions and deletions, which can give rise to reading frame shifts, and that can convert certain bases into others—and, at select spots—and when this editing is extremely important (as in the proper functioning of the human brain), then we’re looking at a functional system that surpasses our ability to grasp—at least for now.

    As I say, population genetics, and with it, neo-Darwinism, is dead. Right now it’s no more than an amusing pastime.

    As to ID “explaining these observations”, you seem to imply that neo-Darwinism can explain them. Well it can’t, just like ID can’t explain it. But ID can point future research in the right direction much more efficiently than neo-D can, and that’s its importance.

  225. veilsofmaya (#221):

    I am happy that you took the time to clarify better your reading of the paper.

    As I understand it, ORFs have the potential to be protein coding. Orphans are ORFs that are assumed to code proteins or have experimental evidence of coding proteins, but are not found in other species.

    That’s correct.

    The question is, should specific ORFs be considered part of the human genome given we now know being GC-rich alone is not necessary a good indication of protein coding. In other words, are they really orphan genes or ORFs that do not code proteins.

    The question is legitimate, but it needs legitimate answers.

    GC content is a very good clue to the protein coding nature of an ORF. Obviously, it is not in itself conclusive (nothing in itself is conclusive except for the real demonstration of the protein, but as I said that is available only for part of the ORFs in the various databases, and still the other ORFs are retained as ORFs, and counted as genes, until some specific contrary evidence is found). And it is not true that “now we know”: knowledge about the meaning of GC content has not changed, as far as I know. The authors of the paper have simply unilaterally chosen not to consider it as a valid point against their assumptions, not even at the level of discussion. That’s serious methodological bias.

    Instead, they analyze in detail the results of two scores which are obviously calculated in relation to the presence of homologues in other species, and which are therefore not applicable to a set of ORFs which by definition are human orphans. That was one of my points, and it is made very clearly in the project description quoted by PaV:

    “Moreover, the uniqueness of ORFan genes prohibits use of any of homology-based methods that have traditionally been employed to establish gene function.”

    Can you understand that verb: prohibits? That’s not because of any authority, but because the basic principles of methodology prohibit using a score which is not appropriate to the subject we are studying: in this case, the two scores applied were bound to give a negative result for truly orphan genes, simply because they are orphans. This is another serious methodological bias. In comparison, GC content can be considered an unbiased estimator of the protein coding nature of an orphan ORF, because it has nothing to do with homologues in other species.

    However, there is currently no scientific justification for excluding ORFs simply because they fail to show evolutionary conservation; the alternative hypothesis is that these ORFs are valid human genes that reflect gene innovation in the primate lineage or gene loss in other lineages.

    You keep quoting this paragraph from the paper without really understanding what they are saying. The key concept here is in the primate lineage. IOW, they are happy to admit that an ORF is a valid human gene provided that it has at least some homologues in the primate lineage, while they reject as “wholly implausible” that new genes may have arisen in the human species alone. But the only correct, unbiased statement should be the following:

    “However, there is currently no scientific justification for excluding ORFs simply because they fail to show evolutionary conservation in any species; lack of evolutionary conservation, even in primates is not justification for excluding ORFs, unless one assumes as absolute truth one’s own expectation from one’s own theory about how genes arise.”

    While orphans were studied as a group, it’s the specific characteristics they exhibit which form the basis for the proposed methodology. If you only read the summary, I can see how you might conclude orphans are being singled out for merely being orphans. However, in the case of this paper, the use of the term ‘orphans’ is referring to a group of ORFs with specific properties.

    That’s simply not true. I have carefully read the whole paper, and I can’t see any “specific property” of this set of human orphans which is presented as justification for excluding them from the list of ORFs which are retained as possible protein coding genes, except for:

    a) two scores which, as previously said, are not appropriate to the question

    b) the simple fact that they have no homologues in primates

    c) the indirect fact that proteins are not known, which is true of many of the ORFs retained in the list, and which is to be expected for genes which have been found only recently, and about which no literature has accumulated.

    d) The GC content, which is in favour of their protein coding nature.

    So, to which “specific properties” are you referring?

    Finally, the paper clearly indicates that removal would help create a catalog of non-coding ORFs for further study.

    That’s simply hypocritical. As I have shown in my post #219, the immediate result of the paper is that those ORFs will, as a rule, no longer be considered in the general research about human protein coding genes.

  226.

    @gpuccio (#226)

    you wrote:

    And it is not true that “now we know”: knowledge about the meaning of GC content has not changed, as far as I know.

    gpuccio,

    While GC-rich sequences represent a possibility for protein coding, there are a number of other factors present. As our understanding of these factors changes, so does the resulting probability that any GC-rich sequence may be protein coding. So, even though the meaning of GC content may not have “changed”, its presence does not necessarily indicate that a sequence is protein-coding.

    The authors of the paper have simply unilaterally chosen not to consider it as a valid point against their assumptions, not even at the level of discussion. That’s serious methodological bias.

    So, why would they acknowledge that GC-rich sequences have a 50% chance of coding proteins? Why would they take the time to discover the orphans in question have a GC content of 55%, which is higher than the average for the human genome (39%) and similar to protein coding genes in cross-species counterparts (53%)?

    Clearly, this is part of the discussion and was considered while developing the methodology. However, the aspect being discussed is that merely being GC-rich is considered insufficient to remain in the catalog.
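
    For readers tracking the numbers in this exchange: GC content is simply the fraction of G and C bases in a sequence. A minimal sketch (the sequence below is invented to match the 55% orphan average under discussion; it is not from the paper):

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# An invented 20-base fragment with 11 G/C bases (55%):
fragment = "GCGCGCGCGCGATATATATA"
print(gc_content(fragment))  # 0.55
```

    By itself, of course, such a fraction says nothing definitive about whether a given ORF actually codes for a protein, which is the very point being argued.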

    Instead, they analyze in detail the results of two scores which are obviously calculated in relation to the presence of homologues in other species, and which are therefore not applicable to a set of ORFs which by definition are human orphans.

    To quote the paper…

    Characterizing the Orphans. We characterized the properties of the orphans to see whether they resemble those seen for protein-coding genes or expected for random ORFs arising in noncoding transcripts.

    In addition to conserved properties, RFC scores and codon substitution frequency, ORF lengths were examined, which took into consideration GC content.

    Human orphans had the opportunity to match the characteristics of well studied human genes with orthologs of the dog and mouse, macaque and chimpanzee, but did not. It’s unclear how this failed opportunity is not applicable to human orphans or how failure was somehow guaranteed.

    Can you understand that verb: prohibits? That’s not because of any authority, but because the basic principles of methodology prohibit using a score which is not appropriate to the subject we are studying: in this case, the two scores applied were bound to give a negative result for truly orphan genes, simply because they are orphans.

    Obviously, their uniqueness prohibits the use of homology-based methods directly on human orphans themselves because, well, they are human orphans. However, this in no way prohibits comparing the properties of human orphans to the properties of well studied human genes that do have homology in multiple orthologs. This in no way guarantees failure.

    In addition to mouse and dog, macaque and chimpanzee, if we included human beings as a potential fourth ortholog for comparison with human orphans, they showed strong differentiation in three out of four cases. However, clearly, no such comparison could be made between human orphans and human genes that have orthologs in humans, as there is no such thing.

    Given that no other method was available in this fourth case, a different metric was used, based on the estimated rate of possible deletions and additions necessary for the majority of these orphans to be protein coding from the previous ortholog: chimpanzees. This metric was verified as part of the independent check for published articles. Out of the 1,177 orphans, only 12 were found to have experimental evidence of protein coding.

    Even if this fourth case was discarded, human orphans still showed strong differentiation across multiple factors in three out of four cases.

  227.

    @Pav (#225)

    You wrote:

    With RNA editing in play, protein sequences are to a degree now disconnected from the genomic sequence we observe. Coupling this with the tremendous level of variation, intraspecifically, that genome wide studies have shown, there is no longer room for what we call “population genetics”. It’s become meaningless.

    PaV, that population genetics is only part of the picture is non-controversial. Nor does its incompleteness imply it is “meaningless” any more than the incompleteness of quantum gravity implies that quantum mechanics is “meaningless.”

    Also, could you be referring to the study referenced in this passage from the “Distinguishing protein-coding and noncoding genes in the human genome” paper?

    Finally, the creation of more rigorous catalogs of protein-coding genes for human, mouse, and dog will also aid in the creation of catalogs of noncoding transcripts. This should help propel understanding of these fascinating and potentially important RNAs.

    This strongly suggests one of the benefits of the methodology is to determine which ORFs are not directly related to protein coding so their indirect influence can be studied further. As such, it’s unclear how laying the foundation for this sort of study is “meaningless.”

    So, if you want to ask about “rates”, well that’s just a game that molecular biologists play using the assumption that they really understand what’s happening at the genomic level.

    That we have obvious gaps in our understanding of the human genome is non-controversial. No one suggests otherwise. Nor is this the question I asked.

    Again, unless you’re suggesting it all happened instantaneously, there is a rate at which change occurred. And if the paper is correct, then older genes actually did change slower than newer genes. Nor would it be impossible for a designer to intentionally decide to change or create newer genes faster than older genes.

    So, again, why would we observe a rate that is even remotely close to what neo-Darwinism predicts, rather than, say, 10,000+ changes all at once? If there are no constraints, how do you explain this particular rate? You simply have no answer, other than “That’s what the designer happened to have chosen,” which is a non-answer.

    In fifty years, supposing the world is still here, they will look back at the articles written over the last twenty years, and they’ll be rolling on the floor laughing so silly will the thinking appear to them.

    And you will somehow be immune from such future review?

    Again, with the exception of dramatic effect, this is non-controversial. Nor is it unexpected given that the problem space is enormous and the limitations of our current technical abilities. However, this doesn’t mean that neo-Darwinism is dead. Instead, it may mean we gain a better understanding of the mechanisms behind it.

    As I say, population genetics, and with it, neo-Darwinism, is dead. Right now it’s no more than an amusing pastime.

    I’m not making a positive argument here. Instead, the question I’m asking on this thread is: are the claims made actually supported by the papers you cited? I don’t see it. It’s not clear that population genetics is “dead”, whatever that means. Nor is it clear that this would be the death of neo-Darwinism. This simply doesn’t follow.

    As to ID “explaining these observations”, you seem to imply that neo-Darwinism can explain them. Well it can’t…..

    You’re the one making the assumption by making the claim.

    Again, for any particular recent or forthcoming discovery to kill Darwinism it would also require the absence of corresponding recent or forthcoming discoveries that explain them. Your claim clearly suggests you have some specific reason to think the former will appear while the latter will not. What is this reason?

    just like ID can’t explain it. But ID can point future research in the right direction much more efficiently than neo-D can, and that’s its importance.

    It’s unclear how the knowledge that an intelligent designer “did it” will get us anywhere more “efficiently”, since it doesn’t answer any of the questions I posed. In fact, positing a designer effectively draws a line that claims we cannot hope to understand how the designer did it. It’s a non-explanation.

    Since ID refuses to address these questions, the fact that we too are designers does not give us any special insight. We still have to figure out the answers to these questions for ourselves. Essentially, we’re in the same boat as if there were no designer, because we can’t know anything about him/her other than the supposed act of design.

    Finally, an agent could use a process that happens to closely match what neo-Darwinism predicts, with the exception that it was supposedly chosen rather than naturally occurring.

    Again, this is why I’m suggesting that ID is a convoluted elaboration of neo-Darwinism. It attempts to explain away neo-Darwinism, rather than explain what we observe.

  228. veilsofmaya:

    Again, this is why I’m suggesting that ID is a convoluted elaboration of neo-Darwinism. It attempts to explain away neo-Darwinism, rather than explain what we observe.

    First, this isn’t a logical statement. If ID is trying to explain away neo-Darwinism, then it’s anti-neo-Darwinist, not a form of neo-Darwinism.

    Second, ID tries to explain what we see. My earlier point about population genetics/neo-Darwinism being dead addresses the fact that we are now seeing genetic mechanisms at work that are so sophisticated, with higher levels of interplay than ever suspected, that PG/ND just can’t begin to deal with them.

    Kimura—BTW, I’ve read his book on Neutral Theory, have you?—makes it clear that good old fashioned PG can’t really cope with with the discoveries of the 60′s, and nothing I’ve ever read (and I’ve read Fisher’s work, I’ve read portions of Sewell Wright’s works, Kimura, etc) can come up with any satisfactory explanations of what organisms/cells do, other than, of course, the kinds of adaptive strategies that cells have and which, like bacteria cells switching from lactose to sucrose metabolism, have been documented. But these are almost trivial examples of what cells can actually do. (And, BTW, in a less than year-old study, for bacterial colonies that “switch” their metabolism, contary to population genetic dogma, the colonies don’t become “fixed”; that is, they don’t 100% switch over—a small fraction retains the original function. This, again, shows just how little PG can really explain). We’re dealing with sophisticated machinery driven by a “software system” that is mind-boggling in its complexity. For example, I talked earlier about RNA editing. It’s entirely possible that long stretches of DNA are so constructed that depending on ‘where’ the editing takes place, entirely different protein/regulatory RNA stretches are produced. This means that its not the ‘codon’ structure that’s important, but each, individual bp. In this case, then, very high conservation of nucleotide bases would be required. And, in the case of ORF’s, this is what we see. Now it is the position of Darwinists that this mind-boggling complex machinery came about simply by chance—please don’t bother to announce that because of NS the whole Darwinist project is not random, since we know that the replicative demands for the building up of tiny changes to the genome are generally beyond anything that RV+NS can deal with, which is, of course, the whole point of Behe’s Edge of Evolution. 
That ID, in the face of these elaborately complex cellular mechanisms and machines, says that this is the product of design, not of chance, seems to me to be rather sensible. Don’t you agree? And this seems to be a much better “[explanation of] what we observe” than neo-Darwinism could ever be.

    Now you may disagree, but for me, at a personal level, I’m completely convinced that every kind of explanation that neo-Darwinism has given in the past—to which the musings of the evo-devo people will always be firmly attached—is an utter failure, other than, of course, the trivial kinds of adaptations that we see taking place (and I mean ‘trivial’ in the sense of involving only simple, basic pathways). To me, all of this is now beyond argumentation. And, for me, it’s simply a matter of waiting for Darwinists to “throw in the towel”. And that’s why I say that fifty years hence, they’ll be rolling on the floor laughing at what was once considered ‘true science’.

    As an analogy, all of this is like watching ‘scientists’ examining the remains of a crashed, alien space vessel and claiming the whole time that what they’re looking at and investigating can really be explained by natural processes alone. Well, please excuse me if I say: “No, it can’t.”

  229. Pav,

    Second, ID tries to explain what we see

    So far, I have searched in vain for that explanation. What has ID contributed to our understanding beyond the level of Genesis?

    To me it looks like the ancients said “This is beyond comprehension, it must be the work of God.”

    Some thousands of years have got us to “This is too complex; it must be the work of a designer.”

    In what way is ID different than classical creationism?

  230.

    Cabal, ever since your appearance here, your arguments have always been the cheapest of the cheap.

    I don’t mean cheap in the sense of a “cheap shot”; I mean cheap in the sense of having the least amount of intellectual integrity.

    There have been numerous times where people have tried “in vain” to get you to get off your God complex long enough to hear the evidence for design instead; long enough for you to come face-to-face with the actual issues regarding what is observed through science.

    It has been futile.

    Like one of those wind-up robots that mindlessly walks into a wall, then spins around with an unchanged expression and starts walking off in another direction, you never seem to understand or engage a damn thing.

    You are therefore left to constantly demean and misrepresent the ID argument. After all, that is so much easier than trying to grapple with the fact that chemicals do not form encoded abstractions of themselves.

  231. chemicals do not form encoded abstractions of themselves.

    They do, however, replicate themselves, with occasional errors.

    Words and phrases like “information” and “encoded abstractions” are equivocations, an attempt to prove something by surreptitiously changing definitions.

    If you wish to argue what chemicals can do or cannot do, demonstrate it in the language of chemistry.

    What physical process required by evolution violates any established laws of physics or chemistry?

    Try responding without invoking metaphorical language.

  232. Petro-

    George Orwell called; he said he wants you to read his book 1984 and do a 5-page book report about thought control and language.

  233. veilsofmaya:

    So, even though the meaning of GC content may not have “changed”, its presence does not necessarily indicate that a sequence is protein-coding.

    It never has. But it is a good clue.

    So, why would they acknowledge that GC-rich sequences have a 50% chance of coding proteins? Why would they take the time to discover the orphans in question have a GC content of 55%, which is higher than the average for the human genome (39%) and similar to protein coding genes in cross-species counterparts (53%)?

    Here is what they say:

    “The orphans have a GC content of 55%, which is much higher than the average for the human genome (39%) and similar to that seen in protein-coding genes with cross-species counterparts (53%). The high-GC content reflects the orphans’ tendency to occur in gene-rich regions. We examined the ORF lengths of the orphans, relative to their GC-content. The orphans have relatively small ORFs (median 393 bp), and the distribution of ORF lengths closely resembles the mathematical expectation for the longest ORF that would arise by chance in a transcript derived from human genomic DNA with the observed GC-content.”

    IOW, they never acknowledge that the high GC content can simply be a sign that these are protein-coding genes. On the contrary, they unilaterally interpret the data to mean that these are ORFs which arise by chance “in gene-rich regions”. That interpretation is the only one given, and there is no discussion in that paragraph of the other interpretation: that they could really be protein-coding genes.
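    [Ed. note] The two quantities this exchange turns on — GC content and the length of the longest ORF — are easy to compute directly. A minimal Python sketch follows; the function names and the toy sequence are illustrative only (forward strand, no reverse complement), not the paper’s actual pipeline:

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def longest_orf(seq):
    """Length in bp of the longest forward-strand open reading frame,
    running from an ATG start codon through an in-frame stop codon."""
    seq = seq.upper()
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                        # first in-frame start
            elif codon in stops and start is not None:
                best = max(best, i + 3 - start)  # count includes the stop codon
                start = None
    return best

toy = "ATGGCGTGCCCGTAA"  # illustrative 15 bp sequence, not real data
```

    Since the three stop codons are AT-rich, raising GC content lowers the chance of hitting a stop, which lengthens the ORFs expected by chance — which is why the paper compares observed ORF lengths against a GC-dependent mathematical expectation rather than a fixed cutoff.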

    merely being GC-rich is considered insufficient to remain in the catalog.

    The fact is that merely being ORFs should be enough to remain in that catalog, unless valid contrary arguments are provided (and they were not).

    Obviously, their uniqueness prohibits the use of homology-based methods directly on human orphans themselves because, well, they are human orphans.

    That’s exactly the point.

    However, this in no way prohibits comparing the properties of human orphans to the properties of well-studied human genes that do have homology in multiple orthologs.

    Again, which properties?

    This in no way guarantees failure.

    Yes, if you look at the “properties” they considered (see later).

    In addition to mice and dog, macaque and chimpanzee, If we included human beings as a potential fourth ortholog for comparison with human orphans, they showed strong differentiation in three out of four cases. However, clearly, no such comparison could be made between human orphans vs human genes that have orthologs in humans as there is no such thing.

    ???? What does that mean? Please, explain. I cannot comment on something which I simply cannot understand.

    Just to be clear, I again maintain that the properties they checked (scores, indels, etc.) are all dependent on a comparison between orthologues in different species. Therefore, they cannot tell us anything about genes which, we already know, have no orthologues in other species.

    I quote here from the supplementary material of the paper:

    “SI Figure 6
    Fig. 6. RFC score and indel patterns. (a) Illustration showing how RFC score is calculated for a pairwise alignment. Species 1 shows a human putative gene sequence in which translation starts in reading frame 0 (that is, codons are read from the first base). Each human base can be assigned as being in codon position 0, 1, or 2. Species 2 shows the orthologous DNA sequence in the mouse genome, aligned to the human sequence with gaps indicated by dashes.”

    Emphasis mine.

    Given that no other method was available in this fourth case, a different metric was used, based on the estimated rate of possible deletions and additions necessary for the majority of these orphans to be protein coding from the previous ortholog: chimpanzees. This metric was verified as part of the independent check for published articles. Out of the 1,177 orphans, only 12 were found to have experimental evidence of protein coding.

    ????? Is it possible to be more obscure?

    Even if this fourth case was discarded, human orphans still showed strong differentiation across multiple factors in three out of four cases.

    Yes, it is…

  234.

    Petrushka,

    Replication with heredity REQUIRES an abstraction of the parent.

  235. In the language of physics and chemistry, what exactly is an “error?”

  236. Replication with heredity REQUIRES an abstraction of the parent.

    Replication is chemistry. Copy errors are chemistry.

    If not, point to the place in the process that is not chemistry. And if it’s not chemistry, what exactly is it?

  237. In the language of physics and chemistry, what exactly is an “error?”

    A near, but not exact replica.

  238. It is possible to describe crystal formation in the language of physics and chemistry, without reference to the concept of information.

    It is also possible to describe imperfect crystal formation without reference to the concept of information.

    It is possible to describe DNA replication without reference to information. DNA can be replicated without reference to its “meaning.”

    So what I’m asking is what part of the replication or imperfect replication requires reference to the concept of information.

  239. Cabal: Several thousand years ago Lucretius (following Epicurus) said nature evolved into being through a slow process of accretion and adaptation.

    To me it looks like some of the ancients said “I don’t see God, so God couldn’t have created nature.”

    Some thousands of years have got us to “nature did it by a mysterious process called Natural Selection with the assistance of deep time.”

    In what way is Darwinism different from Classical materialism?

  240. In what way is Darwinism different from Classical materialism?

    It’s the difference between a conjecture and a theory having fifteen decades of accumulated observation and experiment that are consistent and consilient.

  241.

    Consilient?

  242. veilsofmaya:

    Just to take a momentary rest from molecular biology, here are some comments to some points you raised with PaV:

    If there is no constraints, how do you explain this particular rate? You simply have no answer, other than “That’s what the designer happened to have chosen.”, which is a non-answer.

    I have never thought that there are no constraints for the designer. The designer acts in a context and according to laws, and that context and those laws are rich in constraints.

    It’s unclear how the knowledge that intelligent designer “did it” will get us anywhere more “efficiently” since it doesn’t answer any of these questions I posed. In fact, positing a designer effectively draws a line that claims we cannot hope to understand how the designer did it. It’s a non-explanation

    I absolutely don’t agree. Recognizing that active functional information was introduced by a designer at specific moments in natural history, or continuously, or with any other modality (which is an issue completely open to empirical analysis) can certainly help us understand how the designer did it and why, at least to a certain degree. And it can certainly help us understand the nature of the design, and its workings.

    On the contrary, going on believing that biological information was generated according to a model which is completely wrong can only help us to remain in confusion.

    Since ID refuses to address these questions, the fact that we too are designers does not give us any special insight. We still have to figure the answers to these questions for ourselves. Essentially, we’re in the same boat as if there was no designer because we can’t know anything about him/her other than the supposed act of design.

    I absolutely disagree that “ID refuses to address these questions”. What ID says is that the identity of the designer and the knowledge of the modalities of implementation of the design are not necessary for design detection. And that is absolutely true.

    But that does not mean that, once the design detection and the ID scenario have been achieved, we cannot go on with further questions: who is the designer, how did he implement design, with which modalities in time and space, and so on.

    While personal ideas may vary about how fully we can answer those questions, there is no doubt that they can be scientifically addressed. And I have always been very clear that I believe that many detailed answers can and will be found.

    Finally, an agent could use a process that happens to closely match what neo-Darwinism predicts, with the exception that it was supposedly chosen rather than naturally occurring.

    I have always been very clear about what I think of that argument: it is rubbish.

    If the empirical facts are shown to “closely match what neo-Darwinism predicts”, then for me ID is falsified. In that case, I would promptly admit that neo-Darwinism is the best explanation, and that no designer hypothesis is necessary.

    Again, this is why I’m suggesting that ID is a convoluted elaboration of neo-Darwinism. It attempts to explain away neo-Darwinism, rather than explain what we observe.

    Simply not true, also in the light of what I have said before.

  243. Recognizing that active functional information was introduced by a designer at specific moments in natural history, or continuously, or with any other modality (which is an issue completely open to empirical analysis) can certainly help us understand how the designer did it and why, at least to a certain degree.

    So the basic entailment of ID is that some unspecified agency did some unspecified thing or things at unspecified times and places using unspecified methods for unspecified purposes, and that has made all the difference?

  244. Petrushka:

    Sometimes (but not always) your comments fall below any level of credibility:

    It is possible to describe crystal formation in the language of physics and chemistry, without reference to the concept of information.

    It is also possible to describe imperfect crystal formation without reference to the concept of information.

    It is possible to describe DNA replication without reference to information. DNA can be replicated without reference to its “meaning.”

    So what I’m asking is what part of the replication or imperfect replication requires reference to the concept of information

    Information is what is replicated. If information were not there, how could you replicate it? So the answer to your question:

    “what part of the replication or imperfect replication requires reference to the concept of information”

    is very simple: the information which is replicated. Even a child would easily understand that.

  245.

    It occurred to me that the entire worldview of atheist evolutionists can be summarized as the telephone game. The telephone game (with which I assume everyone is familiar) is used by atheists as an argument against the oral tradition of religion; an oral tradition is bound to make copying mistakes as the tradition is passed down to future generations (so they contend), and will eventually become something else, something different, from what was originally believed and passed down, and thus religion should not be believed. The atheist evolutionist also believes that the telephone game (minor changes over time) is the real story in biological evolution. The animal, through reproduction, is bound to make copying mistakes as its progeny is passed down to future generations (so they contend), and will eventually become something else, something different, from what the animal was originally, and thus evolution should be believed. The telephone game is the atheist evolutionist’s mantra. The telephone game, for goodness’ sake. You would think that it would be based on something more substantial; but it’s not. Belief in evolution is based on a belief in the telephone game. Skepticism of an oral tradition is based on a belief in the telephone game.

    I could have sworn it is molecules being replicated, and not disembodied abstractions.

    But I will borrow a question from another poster on an unrelated forum:

    If “information” interacts with matter, does it form mutual potential energy wells with other information or with matter? Can it scatter off matter? Can particles of information form orbits around matter, or vice versa?

    In other words, how does information influence or determine the spatial arrangements of matter?

  247.

    “Replication is chemistry. Copy errors are chemistry.

    If not, point to the place in the process that is not chemistry. And if it’s not chemistry, what exactly is it?”

    The arrangement of adenine-cytosine-adenine only results in threonine in the context of DNA transcription (and only in that context). One thing means (stands for, is mapped to) another, but is not the physical product of it. There is nothing you can chemically do to adenine and cytocine to end in threonine (outside of the semiotic context instantiated within DNA transcription).

    In other words, chemistry alone cannot provide an explaination for it; it requires the semiotic context (rules) within transcription in order to operate.

    The semiotic context is not chemistry. That is the point.

    As far as relying on the idea that “its all chemistry” as a way to ignore the larger point, I would just remind you that a red plasic ball is all chemistry – but there is nothing in the plastic that tells it to form a sphere and dye itself red. That little part of the red plastic ball is not in the chemistry.

    You take too much for granted.
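    [Ed. note] The mapping being disputed above can at least be made concrete: in bioinformatics software, the standard codon-to-amino-acid assignment is stored as a lookup table rather than computed from the chemistry of the bases. A minimal Python sketch (only a fragment of the 64-entry standard table is shown; the function name is illustrative):

```python
# A fragment of the standard genetic code, represented as a lookup table.
# The full standard table has 64 codon entries; a few suffice for illustration.
GENETIC_CODE = {
    "ACA": "Thr", "ACC": "Thr", "ACG": "Thr", "ACT": "Thr",  # threonine
    "ATG": "Met",                                            # methionine / start
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",             # stop codons
}

def translate_codon(codon):
    """Map a DNA codon to its amino acid by table lookup."""
    return GENETIC_CODE.get(codon.upper())
```

    Whether one draws semiotic conclusions from it or not, the assignment itself is data in a table: nothing in the code above derives “Thr” from the physics of A and C.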

    The animal, through reproduction, is bound to make copying mistakes as its progeny is passed down to future generations (so they contend), and will eventually become something else, something different, from what the animal was originally, and thus, evolution should be believed.

    Analogies and metaphors always have limited domains, but I’ll play along for a moment.

    Have you ever actually played telephone? Does the message turn to random gibberish, or does it morph into something different, but still a coherent message?
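    [Ed. note] The question of what iterated copying with occasional errors actually does to a “message” can be played with in a toy simulation. The sketch below is illustrative only; the alphabet, sequence length, error rate, and seed are arbitrary choices, not a model of any real genome or oral tradition:

```python
import random

def copy_with_errors(seq, error_rate, alphabet="ACGT", rng=None):
    """Return a copy of seq in which each symbol is independently
    replaced by a different symbol with probability error_rate."""
    rng = rng or random.Random()
    out = []
    for ch in seq:
        if rng.random() < error_rate:
            out.append(rng.choice([a for a in alphabet if a != ch]))
        else:
            out.append(ch)
    return "".join(out)

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

# Copy the same "message" down 100 generations at a 1% per-symbol error rate.
rng = random.Random(42)          # fixed seed so the run is repeatable
original = "ACGT" * 10
current = original
for _ in range(100):
    current = copy_with_errors(current, 0.01, rng=rng)

# Length is preserved; only the symbols drift away from the original.
drift = hamming(original, current)
```

    Note what this toy deliberately leaves out: there is no selection step here, so nothing distinguishes one copy from another — which is precisely the point at issue between the two sides above.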

  249. Petrushka:

    So the basic entailment of ID is that some unspecified agency did some unspecified thing or things at unspecified times and places using unspecified methods for unspecified purposes, and that has made all the difference?

    Let’s say that:

    the basic entailment of ID is that some unspecified agency (about which we can certainly build scientific theories) did some very specified thing (input functional information into the genomes of living beings) at unspecified times and places (not more unspecified than they are for Darwinists: only the empirical knowledge and correct evaluation of genomes and of their evolution can give us the right scenario about times and places), using unspecified methods (about which we can certainly build scientific theories, and certainly not more unspecified than those through which our consciousness interacts with our brains) for unspecified purposes (not really completely unspecified: the immediate purpose of many designed functions is obvious, and for higher level purposes reasonable inferences can be made, and possibly tested), and that has certainly made all the difference.

  250. The arrangement of adenine-cytosine-adenine only results in threonine in the context of DNA transcription …

    We can’t have a discussion if you try to change the topic. I asked about replication and imperfect replication.

    You are trying to change the subject to transcription.

    But in any case, what part of the machinery of the cell is not chemistry? If part is information, what are the rules of interaction between disembodied information and atoms?

  251. Petrushka:

    Have you ever actually played telephone? Does the message turn to random gibberish, or does it morph into something different, but still a coherent message?

    That’s really easy!

    Who plays telephone?

    Answer: conscious intelligent beings.

    So I ask: who “morphs the message into something different, but still a coherent message”?

    Try to answer yourself…

  252.

    Petrushka,

    Have you ever actually played telephone? Does the message turn to random gibberish, or does it morph into something different, but still a coherent message?

    It’s a message, in the respect that it’s usually still words, but it isn’t coherent.

  253. Petrushka:

    In other words, how does information influence or determine the spatial arrangements of matter?

    and:

    But in any case, what part of the machinery of the cell is not chemistry? If part is information, what are the rules of interaction between disembodied information and atoms?

    Wrong, wrong, wrong! It’s not information which interacts with atoms. Where do you take those ideas from?

    It’s the consciousness and intelligence of the designer which interacts with matter, and confers to matter a special order which corresponds to his mental representations governed by intelligent information. Information is just “written” in the higher level order (semantic order) imparted by the designer’s consciousness.

    We can’t have a discussion if you try to change the topic.

    Look who’s talking :)

  254.

    @PaV (#229)

    You wrote:

    First, this isn’t a logical statement. If ID is trying to explain away neo-Darwinism, then it’s anti-neo-Darwinist, not a form of neo-Darwinism.

    Pav,

    Take solipsism, which is clearly anti-external reality. The solipsist experiences everything you and I accept as external to ourselves, but claims it is somehow internal to himself. Since solipsism predicts exactly the same empirical observations we observe, this means every discovery in technology, medicine and particle physics also “supports” solipsism. They just happen to be internal to the solipsist, rather than external.

    Furthermore, it’s designed to explain away reality, not actually explain what we observe.

    Solipsism suggests there are dream-like aspects of myself that act like autonomous conscious beings which surprise me, have different personalities and even disagree with me on solipsism. And there are object-like facets of myself that obey the physics-like facets even though, as a non-physicist, I can’t do the math that describes their behavior. Not to mention that these supposed people-like facets of myself discover new things about myself (physics-like facets) all the time, of which I wasn’t previously aware.

    However, Solipsism makes no attempt to explain *why* object-like facets of one’s self would obey laws of physics-like facets of one’s self, etc. No explanation is presented at all. Instead, the claim is based on a supposed limitation that we cannot know anything exists outside of ourselves.

    In other words, solipsism consists of the theory of reality with the added exception of it all being elaborate facets of the internal self. It merely attempts to explain away the currently tenable theory of reality. As such, solipsism is a convoluted elaboration of reality, despite being anti-reality, which can be discarded.

    We can make a similar comparison with biological ID.

    It’s essentially the same as neo-Darwinism except that, at a point depending on which variant of ID you happen to support, a designer caused/selected a change rather than a natural process. The empirical observations are the same, except a mysterious designer causes all of the positive changes, or orchestrated a specific series of positive and negative changes that resulted in a specific desired outcome, etc. (I’m obviously simplifying here to keep things short.)

    Now, you might suggest the implications of this detail represent a massive difference between ID and neo-Darwinism, so it’s something completely different. However, as I’ve illustrated above, the implications between solipsism and realism are massive as well, despite solipsism being an obvious convoluted elaboration of reality.

    Nor does ID explain why the changes in biology we observe even remotely match the predictions of neo-Darwinism, but are caused by an intelligent designer instead. The particular rate of change we observe is simply the rate the designer happened to have chosen, etc. It’s a non-explanation. Since it asserts that we can’t explain what we observe, ID claims a designer must have done it. This is similar to the solipsist’s claim that we cannot know anything outside of ourselves, therefore everything must be internal to himself and external reality does not exist.

    Despite the fact that such knowledge would be incredibly useful, we simply cannot know the method the designer used to determine which changes to make, when to make them and how to make only the desired changes in just the right organisms without changing others. Furthermore, the reason why ID claims such information cannot be known is obvious: you “know” exactly who the designer supposedly is, and such knowledge is impossible by definition.

    As such, biological ID appears to be a convoluted elaboration of neo-Darwinism.

    My earlier point about population genetics/neo-Darwinism being dead addresses the fact that we are now seeing genetic mechanisms at work that are so sophisticated, with such higher levels of interplay than ever suspected, that PG/ND just can’t begin to deal with them.

    Again, the incompleteness of PG is non-controversial, just as the incompleteness of quantum gravity is non-controversial. No biologist claims we know everything. This appears to be hand waving.

    We’re dealing with sophisticated machinery driven by a “software system” that is mind-boggling in its complexity.

    Which is why it’s no surprise that our current understanding is incomplete. Nor may we ever understand it completely. However, this would in no way necessitate that a designer must have done it. Nor does positing a designer actually provide an explanation. You’re merely accounting for what we observe, not explaining it.

    That ID, in the face of this elaborately complex cellular mechanisms and machines, says that this is the product of design, not of chance, seems to me to be rather sensible. Don’t you agree?

    If you’re asking my opinion, the answer is no. I do not think it’s sensible. I’m not a solipsist or an ID supporter, because they both represent convoluted elaborations of reality and neo-Darwinism, respectively. As such, I think they are bad explanations we can discard.

    Should ID start providing explanations, this could change. However, it seems clear that explanations will not be forthcoming, for reasons that are obvious.

    As an analogy, all of this is like watching ’scientists’ examining the remains of a crashed, alien space vessel and claiming the whole time that what they’re looking at and investigating can really be explained by natural processes alone. Well, please excuse me if I say: “No, it can’t.”

    You “can’t” because you presuppose who the designer is. And we all know it’s NOT some alien civilization. By very definition, the designer’s ways cannot be understood, studied, etc. As such, you presuppose no explanation is possible or will be forthcoming.

  255. about which we can certainly build scientific theories

    I’m sure a lot of people are eagerly waiting for such theories of agency.

    In the meantime, observation and experimentation provide us with rates of change which, when taken with the observation of nested hierarchies in genomes, tell us approximately when certain changes occurred and whether the changes were point mutations, replications, or any of a dozen other kinds of mutations.

    A rather good discussion of this was entered as evidence at the Dover trial and was not rebutted by the defence.

    http://medicine.tums.ac.ir/FA/.....0genes.pdf

  256.

    It’s interesting to note that, on one hand, the information orally passed down to progeny is bound to contain copying mistakes, according to these telephone-game-philosophers, and on the other hand, the progeny itself is bound to be a copying mistake. So, according to the telephone-game-philosophers, there is an ever-morphing story and an ever-morphing storyteller. Of course, they resort to special pleading with regard to the timeless truth of the telephone-game-philosophy and the changeless truth of the telephone-game-philosophers.

    On the one hand, the telephone-game-philosophy (with respect to an oral tradition) would have us believe that there is a loss of sophistication and complexity as the oral tradition is passed from one generation to the other. But the telephone-game-philosophy (with respect to evolution) would have us believe that all of the animal’s future generations’ copying mistakes create animals that are more sophisticated and more complex. To summarize, atheist evolutionists, who hold to the telephone-game-philosophy, believe that the telephone game degrades its subject (oral tradition) and also improves its subject (evolving animals) through the exact same mechanism of “copying errors”. Inconsistent? Yes. Very much so.

  257. Petrushka-

    “If “information” interacts with matter. Does it form mutual potential energy wells with other information or with matter? Can it scatter off matter? Can particles of information form orbits around matter, or vice-versa?

    In other words, how does information influence or determine the spatial arrangements of matter?”

    From this and from other things you’ve recently posted it seems to me that your fundamental problem with ID is that you simply do not get it. You don’t.

  258. Petrushka:

    In the meantime, observation and experimentation provides us with rates of change which, when taken with the observation of nested hierarchies in genomes, tell us approximately when certain changes occurred and whether the changes were point mutations, replications, or any of a dozen other kinds of mutations.

    That’s very fine for me. And when will they also tell us how all those changes were generated by the RV + NS model?

    Haven’t you understood yet that the whole problem is about causation? If a gene derives from another gene, how did it change? Can RV and NS account for that?

    And if a gene did not derive from another gene, how did it originate? Let’s pretend you answer: “OK, a completely different gene was duplicated, and after it underwent changes which modified 80% of its primary structure, and created a new sequence that happened to fold orderly and have a new function, which was then integrated into what already existed”. Is that even the beginning of an answer?

    No, it isn’t. You have to show a credible model of why that would happen, and why RV and NS could generate that new gene in the assumed time, and so on.

  259. It’s the consciousness and intelligence of the designer which interacts with matter, and confers to matter a special order which corresponds to his mental representations governed by intelligent information.

    So is it part of the research program of ID to work out the entailments of this theory?

    For example, an archaeologist, studying sharp pieces of flint, might attempt to replicate them to see if the patterns on the found objects match the patterns on objects known to be made by humans.

    In other words, is it part of the ID research program to test theories of how design is implemented by teasing out the way information is front-loaded, if it is?

    Or how the designer anticipates all the myriad interactions of the ecosystem to ensure that every living thing occupies a survivable niche?

    Or how the designer sometimes seamlessly transitions objects like jawbones into middle ear bones?

  260. No, it isn’t. You have to show a credible model of why that would happen, and why RV and NS could generate that new gene in the assumed time, and so on.

    Perhaps you mistake science for some other enterprise. Science doesn’t do “why.” It does “how.”

    Show me the violation of the laws of physics and chemistry.

    I don’t need to show a specific sequence arose for the simple reason that biology doesn’t anticipate or strive for specific outcomes.

    The fact that most known species are extinct should make that clear.

    Evolution doesn’t solve problems, search for solutions or exhibit any intelligent anticipation. Species that do not have the right alleles available at a time of significant ecological or environmental change simply go extinct.

  261. Petrushka:

    In other words, is it part of the ID research program to test theories of how design is implemented by teasing out the way information is front loaded, if it is?

    Or how the designer anticipates all the myriad interactions of the ecosystem to ensure that every living thing occupies a survivable niche?

    Or how the designer sometimes seamlessly transitions objects like jawbones into middle ear bones?

    Yes.

  262. And if a gene did not derive from another gene, how did it originate?

    Did you read the pdf I linked?

    Can you explain why this paper remains unrebutted, five years after Dover? What are the technical errors?

    You could make a name for yourself if you are smarter than Behe and Meyers, who testified at Dover.

    As I mentioned before, ID has a journal that’s practically begging for a great paper.

  263. Petrushka:

    Perhaps you mistake science for some other enterprise. Science doesn’t do “why.” It does “how.”

    No, I mean “why it happened that way” in the sense of “what was the cause of what happened”. Don’t equivocate on words.

    Biochemical laws only explain why existing biological structures work. They can’t explain why (or, if you prefer, how) they originated. A cause must be appropriate to explain what we observe.

    Show me the violation of the laws of physics and chemistry.

    No violation is needed for this post to appear. And yet it can be traced to a designer.

    I don’t need to show a specific sequence arose for the simple reason that biology doesn’t anticipate or strive for specific outcomes.

    Your old mantra. But the truth is that biology needs functional structures. Functional proteins. Functional biochemical machines. And so on. You have no need to anticipate all those things because you see them already in place. But how did they come into place?

    The fact that most known species are extinct should make that clear.

    ????? Why?

    Evolution doesn’t solve problems, search for solutions or exhibit any intelligent anticipation.

    That is maybe true of darwinists…

    Species that do not have the right alleles available at a time of significant ecological or environmental change simply go extinct.

    Again, true. And so? What has that to do with all the rest? Are you implying that designed things never go extinct? Do you want a list?

  264. Petrushka:

    I still can’t understand what we should rebut in the paper you quote. Or what it really “explains”. Let’s take some random examples:

    “Todd et al.63 investigated 31 diverse structural enzyme superfamilies for which structural data were available, and found that
    almost all have functional diversity among their members that is generated by domain shuffling as well as sequence changes.”

    Emphasis mine. And so? What is that telling us? That an enzyme has to change to change?

    “Mobile elements. Makalowski et al.14 were the first to describe the integration of an Alu element into the coding portion of the human decay-accelerating factor (DAF) gene. They found that mobile element-derived diversity was not limited to the human genome or to the Alu family15 (TABLE 1). Further analyses of human genome sequences16 and vertebrate genes17 have shown that the integration of mobile elements into nuclear genes to generate new functions is a general phenomenon.”

    I absolutely agree. I am a big fan of transposons as a design implementation tool.

    A more straightforward conjecture is that adaptive evolution might have had a principal role throughout the creation and subsequent evolution of new genes — we call this the ‘immediate model’, because it requires no waiting time for the evolution of a new function. Several case studies and theoretical works (for example, see REFS 29,30) have shown that the evolution of recently created genes involves accelerated changes in both protein-coding sequences and gene structures from the onset, which supports the immediate model. An important role of positive Darwinian selection has been detected in these processes and these studies have uncovered some interesting results. For example, the initial functions of new genes are rudimentary and further improvement under selection might be crucial. So, new gene functions that are created by altering a sequence that encodes one or a few amino acids might be special cases rather than the general situation. Also, the rapid changes in well-defined new genes with new functions could help to explain a past conjecture in molecular evolution studies: that rapid sequence evolution in many old genes might reflect a diverged function under selection31.

    Very clear indeed! How shall we ever be able to rebut such a straightforward, immediate, enlightening explanation!

    Petrushka, perhaps if you condescended to specify what is to be rebutted, we poor IDists could spare some of our idle time.

  265. I absolutely agree. I am a big fan of transposons as a design implementation tool.

    You seem willing to accept all known facts from mainstream biology, but seem to add some invisible guy pushing the atoms around at just the right times and places.

    As an observation from history, Newton used just such a conjecture to explain the stability of the solar system, when his calculations suggested it was unstable.

    Gaps theories evaporate with knowledge.

  266. From Petrushka’s paper…

    “De novo origination. Although the true de novo origination of new genes from previously non-coding sequences is rare, there are genes with a portion of coding-region sequence that has originated de novo. For example, in the Drosophila sperm-specific dynein intermediate chain gene Sdic, a previously intronic sequence has been converted into a coding exon.”

    Could this actually be evidence of more functionality within the genome rather than “de novo” gene creation? That would make much more sense to me.

  267. Cabal [230]:

    So far, I have searched in vain for that explanation. What has ID contributed to our understanding beyond the level of Genesis?

    You seem full of yourself.

    I wrote that ID tries to explain what we see. Can Darwinism really explain what we see? Using Darwinism to explain biological complexity is like trying to pull an 18-wheeler out of the mud using a 125cc Honda motorcycle. It will get you absolutely nowhere.

    You seem to get tripped up on the question of what it is that we see. We see self-replicating, self-constructing, nanoscale machines operating via sophisticated energy transfer schemes and using quantum level computing and communication which, working in tandem, give rise to incredible macro-level organisms capable of flight, navigation using magnetic orientation, sonar, and star formations, and running 4-minute miles. And you want to say that this all happened by chance.

    Oh yeah, and a tornado passing through a junkyard can fashion a Boeing 747!

    The rather sensible explanation is that all of this was designed.

    veils of maya [250]:

    You think Darwinism is real, and ID is nothing more than a convoluted elaboration of what is real. However, what if Darwinism is not real? Then, maybe, Darwinism is nothing more than a convoluted elaboration of ID. But, of course, Darwinists love tautologies.

    I don’t really know why you think Darwinism has much of anything to do with reality. Oh, yes, there is this built in ability of organisms to adapt to their environments and to one another, and this ability does employ stochastic methods (or, at least what at this point look like strictly stochastic methods). But maybe we should just tip our hat to the wonderful Designer who thought of such an ability. (As did Asa Gray, a defender of Darwin)

    But I think your great faith in Darwinism comes from the sense you seem to have that Darwinism can actually predict things. The fact is that if Darwinism could explain anything halfway well, then it actually could make predictions. But it can’t—oh, well, it can to the degree that it is right about the adaptational abilities that organisms have. But this is really only trivial.

    But almost all of the really important predictions that Darwinism has tried to make have turned out to be wrong. Doesn’t this raise any doubts in your mind, then, about Darwinism?

    In the early 60′s, neo-Darwinism predicted that gel electrophoresis would demonstrate rather homogeneous families of proteins, that is, they would be mostly homozygous at virtually all of their lengths. WRONG. Along comes Kimura and the Neutral Theory.

    Since Darwinism sees its putative genetic mechanisms as the source of all phenotypic change, Darwinism predicts (dismisses) all non-coding DNA as junk. WRONG!

    When genome-wide sequencing promised to open up the secrets of the human genome, Darwinism was sure that there would be little genetic variation intraspecies. WRONG!

    OTOH, whereas ID would be somewhat neutral as to polymorphisms, it CORRECTLY predicted the role—the very important role—of non-coding DNA, and, because of its emphasis on regulatory mechanisms, is fully consistent with the great level of intraspecific variation we find in humans.

    Switching to another of your themes, you again aren’t being logical. You want to denigrate ID by supposing that the only answer it can give, or that it is interested in, is that God did it. So you fault it because you think it won’t give us answers. And then you say that Darwinism, just like quantum mechanics fails to explain gravity, can’t give us, AND MAY NEVER GIVE US, all the answers. But I thought you wanted answers???? So you like Darwinism even though it might not ever give you the answers you’re looking for, but you don’t like ID because, in your mind, it isn’t interested in the answers that you’re looking for.

    The reason for ID being superior to Darwinism has to do with its greater explanatory powers. ID would say that the crashed space vessel was designed by the aliens that crashed in them. Darwinism would say, no, this heap of metal was formed by natural forces. You know, like fashioning a 747 by having a tornado pass through a junkyard!

  268. The reason for ID being superior to Darwinism has to do with its greater explanatory powers.

    1. When confronted with an historical mystery or a phenomenon not yet understood, attribute it to an unseen entity having unspecified powers, acting at unspecified times and places, using unspecified methods, for unspecified reasons.
    2. See Number one.
    3. See number two.

  269. Petrushka [266]:

    You seem willing to accept all known facts from mainstream biology, but seem to add some invisible guy pushing the atoms around at just the right times and places.

    You mean you’ve never heard of Maxwell’s Demon?! ;)

    Phaedros quoting Petrushka’s paper:

    “For example, in the Drosophila sperm-specific dynein intermediate chain gene Sdic, a previously intronic sequence has been converted into a coding exon.”

    This sounds like the mechanism I was referring to earlier on when speaking of ORF’s. Using RNA editing, a “start” codon could be placed at the beginning of the intron, thus leading to its translation as a protein. Quite some software, eh?!
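
    The mechanism described above (a start codon placed upstream turning a non-coding stretch into a translatable frame) can be sketched as a toy computation. This is an illustrative sketch only: the sequence is invented, the scan uses a single fixed reading frame, and it is not the actual RNA-editing machinery or any real gene.

```python
# Illustrative sketch only: a toy single-frame scan showing how placing
# a start codon (ATG) in front of a sequence can open a new reading
# frame (ORF).  The "intron" below is a made-up sequence, not from any
# real genome.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def first_orf(seq):
    """Return the first ATG-to-stop reading frame in seq, or None."""
    start = seq.find("ATG")
    if start == -1:
        return None                      # no start codon, nothing is read
    for i in range(start, len(seq) - 2, 3):
        if seq[i:i + 3] in STOP_CODONS:
            return seq[start:i + 3]      # start codon through stop codon
    return None

intron = "CCTGGTTGGAAACCTTGA"            # contains no ATG: unreadable
edited = "ATG" + intron                  # an editing event adds a start

print(first_orf(intron))                 # None
print(first_orf(edited))                 # now an in-frame stop (TGA) closes an ORF
```

    The point of the sketch is only that a tiny upstream change flips the same downstream sequence from "invisible" to "translated", which is what the comment is describing.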

  270. This sounds like the mechanism I was referring to earlier on when speaking of ORF’s. Using RNA editing, a “start” codon could be placed at the beginning of the intron, thus leading to its translation as a protein. Quite some software, eh?!

    Too bad such sophisticated software allows its carrier to go extinct on a frequent basis. You’d think such sophisticated software could anticipate need rather than relying on disease, competition and predation to weed out the bugs.

    Just a quick question: when a disease like AIDS employs sophisticated software to wear down a human immune system — let’s say in a child who acquired the disease in a blood transfusion — is that a score for front loading, or a score for genetic entropy?

    How does the ID research program distinguish between the sophisticated products of front loading, and the degeneration implied by genetic entropy?

    Can you provide an example of each in operation, and the characteristics that identify which process is in effect?

  271. All right. Let’s try again. We leave out references to “Darwinism” as well as ID hype, ok? We go straight to the explanatory power of ID.

    I wrote that ID tries to explain what we see.

    Sounds promising. But what do we get?

    You seem to get tripped up on the question of what it is that we see. We see self-replicating, self-constructing, nanoscale machines operating via sophisticated energy transfer schemes and using quantum level computing and communication …

    That is, according to you, what we see. But wait a minute, is that really what we see? Could it be, let’s for a minute consider the alternative, that that is just your, and maybe other design proponents’, artistic rendering of what we see? Isn’t it, basically, all just expressions of chemistry at work? We know that from the very beginning of, say, a human being as a single cell, it grows to a complete, fully functional body over just nine months. And as far as we know and are able to determine, nothing mysterious is going on, just chemistry.

    Now that, in a nutshell, is an explanation, and AFAIK, a correct interpretation of what we see in terms of chemistry. Is anything besides chemistry involved? It is of course also open to poets to expand upon, and the images Lennart Nilsson has ‘painted’ for us are of course awesome.

    But putting what we see into words is not by itself an explanation. Would you like to try again, explaining what it is that we see? What you have said so far is just that “yes, we see lots of things going on”, to which I may add “Yes indeed, but what is the explanation for what we see?” (My personal thought on that is that what we see in biology, as well as in all other aspects of the universe, are expressions of nature, the so-called natural forces at work.)

    From what you write, may I suggest that what you are trying to say is that the explanation is that what we see is unnatural, there is magic or something like that at work here?

  272. Petrushka:

    You seem willing to accept all known facts from mainstream biology

    I always accept facts. It’s on interpretations that I often disagree with mainstream biology. And again, it’s not a problem of gaps, but of good and bad theories.

  273. Cabal:

    And as far as we know and are able to determine, nothing mysterious is going on, just chemistry.

    That’s one of the most arrogant and foolish statements I have ever read. I suppose that when you use your computer, nothing more than electromagnetic events is taking place? And when Shakespeare wrote Hamlet, nothing more than mechanical neuronal activity?

    You know, the development of a multicellular being from the zygote is certainly mysterious, at least in the sense that we cannot explain it at all.

    Understanding the laws of physics and biochemistry does not imply explanation of the facts of biology, any more than understanding the physical laws implies explanation of how a piece of software works.

    From what you write, may I suggest that what you are trying to say is that the explanation is that what we see is unnatural, there is magic or something like that at work here?

    I don’t know if you consider functional information, consciousness, intelligence and design unnatural. It’s just a question of words. Your choice.

    Let’s just say that the explanation of complex functional information implies conscious intelligent beings and design.

  274. Cabal:

    Pardon, but you have presented a classic example (familiar to me from watching Marxism implode in the late 80′s – early 90′s) of how a failed paradigm distorts one’s view.

    Observe just above how you speak of “ID hype” and “magic” etc.

    Do you not see how warped, prejudicial and loaded that choice of language is? Do you not see that it reflects an emotional intensity that is very likely to be blinding?

    Then, pause and — bearing in mind the weak argument correctives above right on every UD page — reflect:

    1 –> We exist as intelligent creatures in a real world that we share.

    2 –> We know that we often do things that are artificial, which is a credible opposite to “natural.” So, we need not resort to the false dichotomy “natural/supernatural” or in its latest guises, extra-natural/magical.

    3 –> Indeed, as has been repeatedly pointed out at UD, the analysis in terms of nature [phusis] vs art [techne] goes back to the very first design analysis on record, i.e. Plato’s The Laws, Bk X. So, it would be reasonable to expect that a responsible analysis would reflect that fact of 2,300 years’ standing.

    4 –> Now, too, we know that we often analyse based on inference to best current explanation. Candidate explanations for a given phenomenon are tabled, analysed on factual adequacy, coherence and explanatory power, the best being selected on a provisional basis and put up for a programme of further testing. Explanations that turn out to be best and robust across time are generally seen as good ones.

    5 –> Indeed, this is the basis of scientific methods, as we can see from say Newton’s classic 1705 summary in Opticks, Query 31. (Our classic school level descriptions of science and its generic methods look to me to be pretty much a simplification of Newton’s remarks on analysis and composition in the context of empirical investigation.)

    6 –> Coming back, we personally are intelligent, and live in a technologically based world that is filled with artifacts of intelligence. One common pattern that marks out such artifacts and allows them to stand out from the phenomena of chance and mechanical necessity of nature is: complex, functional organisation and associated information (FSCI).

    7 –> Consistently, we directly observe and experience how such FSCI is a hallmark of design in action. Starting with posts in this thread, and going on to case after case after case. (NONE of the above posts were produced by monkeys banging away at keyboards at random, or by faulty keyboards forcing a keystroke sequence.)

    8 –> Using insights from mathematics, information theory and statistical mechanics, we can see that once a functional entity depends on a sufficiently complex pattern [500 - 1,000 bits is a useful threshold], the functional configurations will as a rule be so deeply isolated in the space of possible configurations, that a random walk from an initially arbitrary config, even with selection for function, will be well beyond the search capacity of the observed cosmos. (Onlookers, cf here for my latest summary on such, in context.)

    9 –> So, we see an analytical reason for why we routinely observe that intelligence produces FSCI, but blind chance + necessity will not credibly do so.

    10 –> Actually, this is not controversial, even Dawkins admits it. What evolutionary materialists try to do is to suggest that very simple entities are sufficiently functional that some form of favourable natural selection will reward their accidental coming into existence. Then, further favourable, small-step accidents and selection based on success in environments gets us first to life then to body plan level biodiversity.

    10 –> But to do that, predictably, they duck the issue of threshold of complex organisation to achieve function. As a major current example, they often invite us to shut our eyes to the obviously designed, purposeful context and unwarrantedly simplistic threshold of function in genetic algorithms and other simulations, starting with Dawkins’ notorious Weasel, and going up to Avida etc.

    11 –> But in fact, to get to the first successful organism, we have to fulfill the von Neumann requisites for a self-replicating automaton [where the entity has an independent function, i.e. autocatalysis of molecules will not do]:

    (i) an underlying storable code to record the required information to create not only (a) the primary functional machine [e.g. a metabolising entity that interacts with its environment] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;

    (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

    (iii) a tape reader [called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:

    (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by

    (v) either:

    (1) a pre-existing reservoir of required parts and energy sources, or

    (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

    12 –> Such an entity is well past the 1,000 bit limit. And, we routinely observe examples, starting with unicellular life forms, including the so-called simplest cases. These we know start out with DNA in the 100 – 1,000 k bit class [where each 4-state base is 2 bits].

    13 –> As the video top right every UD page shows, such entities are in fact exactly the sort of entity von Neumann envisioned.

    14 –> So, we have very good reason to view the living cell in its organism precisely as PaV described it:

    We see self-replicating, self-constructing, nanoscale machines operating via sophisticated energy transfer schemes and using quantum level computing and communication which, working in tandem, give rise to incredible macro-level organisms capable of flight, navigation using magnetic orientation, sonar, and star formations, and running 4-minute miles.

    15 –> Maybe Denton, circa 1985 will add details:

    _____________

    >> To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.

    We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . .

    Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell’s manufacturing capability is entirely self-regulated . . . . [Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327-331.] >>
    _______________

    Cabal, that is what evolutionary materialists need to credibly account for in a factually adequate, coherent, explanatorily powerful way. After many decades, on both the origin of life and the origin of body plan level biodiversity fronts, the failure is getting more and more obvious. (Don’t even mention the origin of mind, morality and man!)

    But, we know how to produce code, machines to implement it, and even nanomachines. It has been sixty years since von Neumann published his analysis of self-replicating automata, though we have yet to build a 3-d implementation. (Computer simulations don’t count.)

    More shockingly, in Ch II of his book, Paley looked at the hypothetical example of a self-replicating watch and made the case that its additional capacity to replicate itself — additional to time-telling is key — ADDS to the case for its intelligent design. In short, right from the beginning of the school of thought, Darwin willfully begged the key questions, as he was quite familiar with Paley. He simply assumed the can opener to open the can, and that there was a smooth tree of life path from amoeba to man.

    We know much more than that today, and in that light Paley sounds a lot closer to the truth than Darwin. (And, I am making no inferences to the supernatural on biological complex functional organisation and information, only to the intelligent. It is the complex functional organisation of the cosmos that points beyond it to an extracosmic, extremely powerful intelligence. Of course such an entity sounds a lot like what theists have discussed for thousands of years, and is a good candidate for the originator of biologically functional complexity. But, that is to say that the theistic worldview in the Judaeo-Christian frame is exposed to serious empirical tests and is plainly passing them, despite the pretensions of the evolutionary materialistic magisterium that dominates science in our day. Heavens declaring the glory of God, check. Our world without and inner life point to an Author, check. Our conscience and moral intuitions point to a moral law and lawgiver that transcends opinions and politics or rhetoric of the day, check. Time to take God seriously again. And, to get over the glorified teenager rebellion that rejects God as the supreme Father figure.)

    In short, evolutionary materialism is plainly falling apart before our eyes.

    G’day

    GEM of TKI
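
    The 500–1,000 bit threshold, the 10^150 events figure, and the 2-bits-per-base claim traded in the comment above are straightforward arithmetic, and can be checked in a few lines. This is a minimal sketch of the numbers only, taking the thresholds and the 20-letter, 300-residue protein from the discussion; it takes no position on the inference drawn from them.

```python
# Back-of-envelope check of the configuration-space numbers used in the
# discussion: 2^bits configurations for a bit pattern, 20^300 sequences
# for a 300-residue protein, and log2(4) = 2 bits of state per DNA base.

import math

def order_of_magnitude(base, exponent):
    """Power of ten of base**exponent, i.e. floor(exponent * log10(base))."""
    return math.floor(exponent * math.log10(base))

# The 500- and 1,000-bit functional-complexity thresholds:
for bits in (500, 1000):
    print(f"2^{bits} is about 10^{order_of_magnitude(2, bits)}")

# A 300 amino acid protein over a 20-letter alphabet:
print(f"20^300 is about 10^{order_of_magnitude(20, 300)}")

# Each 4-state DNA base carries log2(4) bits:
print(f"bits per base: {math.log2(4):.0f}")
```

    Running this reproduces the figures quoted in the thread: 2^500 is about 10^150, 2^1000 about 10^301, and 20^300 about 10^390, which is where the comparison with the number of physical events comes from.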

  275. Darwin more substantiated by science than Lucretius? Darwin claimed that nature was capable of producing the highly differentiated species of its own accord. He also claimed that natural processes were capable of producing ameliorative change. Where is the hard scientific proof for either of these claims?

    Just as Lucretius used a seemingly scientific theory to eliminate the need for God and first causes and advocate a philosophy based on rational pleasure, so Darwin used a scientific theory to discredit religion and replace it with a philosophy based on beauty and the love of intellect. The “fittest” to him were cultivated upper-crust gentlemen like himself—the enlightened caste in a dark century beset by the rise of merchant arrivistes.

    Like Lucretius, Darwin was a philosopher first. Science was a means for him, not an end. His theory became the reigning paradigm because it was timely—he was the one the cultural elite were waiting for—and because he was a master tale-spinner. Nihilism was in the air in the 1850s, and Darwin gave it the scientific rationale it needed to overthrow the Transcendental Aesthetic.

    The fact remains, however, that his big claims have never been substantiated. Natural Selection leading to new species has never been observed. Natural Selection leading to permanent beneficial changes has never been observed. What the Darwinists call “evidence” is inference. The paradigm does not develop naturally from the facts. The facts are made to fit the paradigm.

    Two worldviews are in conflict. In the Biblical worldview, God created the heavens and the earth, but man corrupted his own existence through sin. In Darwinism, nature made nature, and man need only use his mind to find the happiness that has hitherto eluded him. Does hard science support Darwinism more than Genesis? Not really. If anything, the basic biological sciences are quietly revealing the glory of God and restoring a sense of awe and wonder.

    Nothing has changed since the time of Lucretius. Men still believe what they want to believe. In the late, lamented century, we were told that we were not serious people if we believed in a creator God. The self-appointed vanguard spun a cozy little cocoon of Darwinism for themselves in the larger culture. Now basic science is making belief respectable again. Nature is indeed “very good,” just as the Bible says.

    Choose your narrative, gentlemen. Isn’t that what “postmodernism” is all about? The freedom to choose?

  276. Petrushka [271]:

    Just a quick question: when a disease like AIDS employs sophisticated software to wear down a human immune system — let’s say in a child who acquired the disease in a blood transfusion — is that a score for front loading, or a score for genetic entropy?

    Your basic question is this: how do you explain evil in the world? This is not a scientific question—unless, like Darwin, you want to use the presence of evil and imperfection in our world as an argument against life being designed. But, of course, this is a religious question, as Dr. Cornelius Hunter points out in his books. You might want to peruse them.

    How does the ID research program distinguish between the sophisticated products of front loading, and the degeneration implied by genetic entropy?

    Well, it would appear that NS doesn’t operate efficiently enough. Read Fred Hoyle’s book, The Mathematics of Evolution, where he demonstrates what Darwinists already know: that NS is basically ‘negative selection’; that is, it roots out errors and defects. That is, NS is a ‘conservative’, not a ‘creative’, force.

    Can you provide an example of each in operation, and the characteristics that identify which process in in effect?

    Well, when you find genes needed for the formation of digits in the genome of a sea anemone, that certainly suggests front-loading.

    As to genetic entropy, most species, right before they go extinct, show signs of gigantism. This is possibly and plausibly caused by genetic defects having built up over time.

  277. Cabal:

    From what you write, may I suggest that what you are trying to say is that the explanation is that what we see is unnatural, there is magic or something like that at work here?

    iPhones seem “magical” in what they do; but they’re not. They’re just incredibly well-designed machines. Do you get the point yet?

  278. Your basic question is this: how do you explain evil in the world?

    You may have seen that argument presented many times, and I admit my argument suggests that, but it was not my intended argument.

    My question does not hinge on “disease=evil” and “adaptation=good.”

    My question was not about good and evil, pleasure and pain, or anything that might be connected to theosophy or morality.

    It is about physical history.

    I believe you asserted that genomes have a wonderful software program capable of producing new genetic products. I interpret this as a version of front loading. The genome itself (or the cell machinery) computes and produces changes to the genetic code.

    So I am curious about what this means and implies. Do the changes to the genomes extend beyond epigenetic changes? Do organisms compute and produce new complex structures?
    Do the computations anticipate need when the environment changes or when the ecosystem changes?

    How does this comport with genetic entropy? Does the same wonderful software that produces intelligent changes also produce degeneration? Or does the operating system itself degenerate because the designer forgot to include redundancy and polling?

    I’m curious about observed examples of this software in action. Would the Lenski experiment provide an example of adaptive change computed by the organism?

  279. As to genetic entropy, most species, right before they go extinct, show signs of gigantism. This is plausibly caused by genetic defects that have built up over time.

    Hundreds of species have gone extinct in the last hundred years. Can you cite an example of any exhibiting this phenomenon, or any you can attribute to genetic entropy rather than to changes in the environment or ecosystem?

  280. Well, when you find genes needed for the formation of digits in the genome of a sea anemone, that certainly suggests front-loading.

    Are they useful in the anemone?

  281. Petrushka:

    Can differential death, that is, natural selection, bring about life?

    You answer that, and then I’ll answer your questions.

  282. Petrushka:

    PaV is right, your question as posed is at least in part about the debate on good vs evil, at minimum at the “read between the lines” rhetorical level.

    For the moment, though, we can ignore the philosophy involved and point out that a virus is a hijack program, which demonstrates that in the cell we are dealing with discrete state, code-bearing complex algorithmic information. Also with machines that read and execute that information.

    So, the far more relevant question is, what is the credible source of codes, algorithms, object code programs [and remember how hard it is to de-compile a program to get the original source code from the equivalent of machine language], and matched, organised execution machinery?

    Based on our experience and understanding of what is required to develop such entities, starting with the codes and algorithms, what is the most credible explanation of such? Why?

    And, going to the other sort of viruses what does the existence of malware suggest about the world?

    Let’s see:

    1 –> It confirms the existence of designs and systems that have a normal function that can be hijacked.

    2 –> It points to the best explanation of complex information systems: designers who have both intelligence and intent.

    3 –> It suggests that something has gone twisty with the world. (I am old enough to remember the days before malware was an issue . . . I even recall the Sci Am core war article that so unwisely sowed some very bad ideas.)

    So, the central issue is not genetic entropy vs front loading, but the significance of the sort of programmed information based technologies we are seeing.

    GEM of TKI

  283. @allanius

    -”Choose your narrative, gentlemen. Isn’t that what “postmodernism” is all about? The freedom to choose?”

    Funny that you mention post-modernism. I was just reading about relativism and the inherently self-refuting claims it makes. The fact that people actually subscribe to such a degenerate belief is beyond me.

    I do agree with you about Darwin, his underlying nihilistic philosophy, and how he was first and foremost a religious materialist. The fact that he was a scientist was merely a means to an end.

    However, when all is said and done we need to remember that the man was motivated by a very significant personal experience, the death of his little girl. That played a vital role in the direction he took in his life and research and we need to take it into account. He was after all just a man dealing with tragedy in his life. I’m not saying this to discredit his work or brand him as dishonest. I’m referencing this to indicate his humanity and fallibility, something that is often forgotten as he has become an idol for materialists/atheists.

  284. So, the far more relevant question is, what is the credible source of codes, algorithms, object code programs [and remember how hard it is to de-compile a program to get the original source code from the equivalent of machine language], and matched, organised execution machinery?

    Computers and computer languages are not a very good analogy for living systems. We certainly don’t have any computer systems that work like living things. We have models of bits and pieces, but no mechanical organisms that can replicate themselves from raw materials.

    But if you are going to force the analogy anyway, DNA is more like parameters to a program than a program. Back in the days of the Trash-80 computer, someone pointed out that BASIC programs were just data to the real program, which was the BASIC interpreter.

    Computer scientists may or may not agree with this, but I will push this analogy just a bit further.

    In any computer made by humans, there are inner rings of code that are seldom, if ever, modified. Call them BIOS or OS or whatever, it seldom changes, and it protects itself against attempts by outer ring programs to modify it.

    By analogy, the cellular machinery is the operating system, and DNA is data. The cellular machinery changes much less and much more slowly than DNA. There are fewer differences in cell machinery between organisms than there are differences in genes.

    Changes to DNA can certainly be fatal, but we can easily observe that many changes are not fatal. Most humans have a few mutations that have not killed them. It’s fairly obvious that there are lots of differences between people.

    When you look at the kinds of mutations that are non-fatal and which may be mildly beneficial or mildly detrimental, you can find classes of alleles that modify things like hair color, skin color, bone length and so forth. Things that make one individual look slightly different from other individuals.

    You would not expect to find as many survivable mutations that modify low level biochemistry. One possible exception would be the allele that enables lactose metabolism. Other non-fatal mutations could involve the number of copies of genes, such as those that produce digestive enzymes.

    Back to your question: the answer I offer is that evolution doesn’t typically produce new executable code. It mostly tweaks parameters. Changes to the operating system are most likely to be fatal.

    But living systems do have a way of dealing with fatal changes. In microbes, the death of a defective individual is insignificant.

    In sexually reproducing species, the cellular machinery is also overproduced in the form of egg cells. Those that have serious metabolic defects will never see the light of day. Sperm cells are produced in the hundreds of millions, and those that have serious metabolic defects never participate in fertilization.
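    The “BASIC programs were just data to the real program” point above can be made concrete with a toy sketch. This is my own illustrative Python, not anything from the thread: the “program” below is an inert data structure (nested tuples in a list), and only the interpreter is executable code.

    ```python
    def run(program, env=None):
        """Interpret a tiny language whose programs are plain data.

        Supported instructions (each a tuple):
          ("set", name, value)       -- bind a constant to a name
          ("add", target, a, b)      -- target = a + b (both looked up in env)
        """
        env = {} if env is None else env
        for op, *args in program:
            if op == "set":
                env[args[0]] = args[1]
            elif op == "add":
                env[args[0]] = env[args[1]] + env[args[2]]
            else:
                raise ValueError(f"unknown instruction: {op!r}")
        return env

    # The 'program' is just a list of tuples -- data, not code.
    prog = [("set", "x", 2), ("set", "y", 3), ("add", "z", "x", "y")]
    result = run(prog)
    ```

    The same interpreter runs any program expressed in that data format, which is the sense in which the interpreter (the “operating system” of the analogy) is stable while the data it consumes varies freely.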

  285.

    allanius,

    Like Lucretius, Darwin was a philosopher first. Science was a means for him, not an end. His theory became the reigning paradigm because it was timely—he was the one the cultural elite were waiting for—and because he was a master tale-spinner. Nihilism was in the air in the ‘50s, and Darwin gave it the scientific rationale it needed to overthrow the Transcendental Aesthetic.

    C. S. Lewis agrees with you. In his essay “The Funeral of a Great Myth” he wrote:

    “That, then, is the first proof that popular Evolution is a Myth. In making it Imagination runs ahead of scientific evidence. ‘The prophetic soul of the big world’ was already pregnant with the Myth: if science has not met the imaginative need, science would not have been so popular. But probably every age gets, within certain limits, the science it desires.”

    http://books.google.com/books?.....38;f=false

  286. but the significance of the sort of programmed information based technologies we are seeing.

    But I’m wondering what the entailments of such a conjecture are.

    Several posters here have suggested that certain kinds of mutations look like they could be intelligent modifications, as opposed to random.

    So would the ID research program expect that some of the 20 or so kinds of mutations are more likely to result in beneficial change, compared to other kinds?

    Could you devise an experimental protocol that would induce organisms to exhibit some intelligent or front loaded change?

    Other than epigenetic change?

  287. Petrushka:

    I’ve been waiting all day for your answer to my query.

  288.

    @PaV (#268)

    You think Darwinism is real, and ID is nothing more than a convoluted elaboration of what is real. However, what if Darwinism is not real? Then, maybe, Darwinism is nothing more than a convoluted elaboration of ID. But, of course, Darwinists love tautologies.

    Pav,

    As long as consciousness remains a first-person-only experience and induction is unreliable, no empirical evidence can show the theory of Darwinism is “real” while the theory of solipsism is “not real.” Welcome to the incompleteness of empiricism and the problem of induction.

    However, what we can do is criticize theories and discard those that are bad explanations. By definition, both ID and solipsism appear to be convoluted elaborations of other theories, so I discard them.

    How did I determine this? When we take the theory of ID seriously with respect to biological change, it becomes clear that it represents a convoluted elaboration of neo-Darwinism. What do I mean by taking ID seriously? I’ve illustrated this in my earlier comment regarding solipsism.

    The key point of solipsism is the claim that one cannot know if an external reality exists, because it’s only possible to know that one’s self exists. This is the explicit theory, which is based on the real problem of what we can and cannot know.

    However, if we take solipsism seriously, this problem is only part of the picture, as the solipsist also experiences the same day-to-day phenomena that you and I do as realists. The solipsist is surprised by the actions of conscious-being-like facets of himself. Object-like facets of himself obey law-of-physics-like facets of himself. And so on. Despite the fact that solipsism does not address these phenomena directly, it still presents an implied theory about them nonetheless. This is because of the solipsist’s claim that nothing exists outside of himself. If they are not external, then, by the process of elimination, they must be internal. And if they are internal, then there must be some reason why all of these internal facets behave in the specific way we observe.

    For example, realism says these conscious-being-like facets surprise the solipsist because they really are external conscious beings. Realism says object-like facets obey law-of-physics-like facets because they really are external objects affected by uniform external physical laws. On the other hand, if these phenomena are internal to the solipsist, there is no reason to expect objects to behave the way they do. Especially since it’s unlikely the solipsist could do the complex math necessary to determine uniformly how these objects should behave when these laws are applied, let alone why any particular laws should be uniformly applied in the first place. As such, solipsism provides absolutely no explanation for why the objects we observe would act as if they really were external objects affected by external laws of physics, etc., but are not.

    In claiming they are internal, not only does solipsism invalidate all of the explanations provided by realism, but it provides none of its own to replace them. As such, it merely explains away realism. So, when we take solipsism seriously, its failure to explain its own implied theory reveals it as a convoluted elaboration of reality.

    How does this apply to Intelligent Design? The key point of biological ID is the claim that the specific complexity we observe in biological organisms cannot currently be explained by Darwinism. This is the explicit theory presented by biological ID.

    However, just as with solipsism, if we take ID seriously, this claim represents an incomplete picture, as the IDist observes the same specific rates of change, the same biological features, the same variations between species, etc. After speciation occurred at a significantly high rate during the Cambrian explosion, many species became extinct at a significantly high rate shortly after. We observe eyes across multiple species that show significant variations and appear to have been organically optimized over time. Despite the fact that ID does not address these observations directly, it still presents an implicit theory about them nonetheless. This is because the IDist claims that a designer was responsible. If each of these specific changes is not explained by Darwinism, then the designer must have explicitly interceded (or intentionally taken no action) to bring about each specific change at each specific time.

    For example, neo-Darwinism says the specific rates, traits and structures we observe are due to a complex combination of chance, natural laws, environment, geographical features, etc. These factors ultimately result in one particular set of DNA, RNA and other genetic structures being replicated, rather than some other particular set. On the other hand, if, in reality, these specific rates, traits and structures were the result of intentional choices by some mysterious intelligent agent, there is no particular reason to choose one particular set of rates, traits and structures over some other set.

    Note: this is similar to the question of why object-like facets would follow law-of-physics-like facets if they were really internal to the solipsist, rather than external.

    For example, why do all species share the exact same four DNA bases? Why do organisms so clearly lend themselves to being put into a tree structure? Why did over 95% of all species that ever existed go extinct, rather than 50%, 10% or 1%? I could go on, but I think you get my point.

    In claiming a designer did it, not only does ID invalidate the explanation provided by Darwinism, but it provides none of its own to replace it. That’s just what the designer happened to have chosen. As such, it merely explains away Darwinism.

    So, when we take biological ID seriously, its failure to explain its own implied theory reveals it as a convoluted elaboration of Darwinism.

  289. Petrushka:

    Wow! I find your #285 and 287 very interesting, and I agree on many of the things you say. What a change!

    By analogy, the cellular machinery is the operating system, and DNA is data. The cellular machinery changes much less and much slower than DNA. there are fewer differences in cell machinery between organisms than there are differences in genes.

    I agree. But it is also true that the cell machinery is built from those data. I would rather say that the protein-coding genes are the raw data, and that all the rest (which we really don’t know well, but which certainly includes non-coding DNA, epigenetic factors, and who knows what else) is the real code, the “procedures”, the part which builds the cell machinery, individualizes the transcriptomes, controls body plans and organ plans and tissue plans, and so on.

    When you look at the kinds of mutations that are non-fatal and which may be mildly beneficial or mildly detrimental, you can find classes of alleles that modify things like hair color, skin color, bone length and so forth. Things that make one individual look slightly different from other individuals.

    That’s perfectly true.

    You would not expect to find as many survivable mutations that modify low level biochemistry. One possible exception would be the allele that enables lactose metabolism. Other non-fatal mutations could involve the number of copies of genes, such as those that produce digestive enzymes.

    OK. And one possible exception would be designed changes.

    Several posters here have suggested that certain kinds of mutations look like they could be intelligent modifications, as opposed to random.

    Yes, that’s exactly the point.

    So would the ID research program expect that some of the 20 or so kinds of mutations are more likely to result in beneficial change, compared to other kinds?

    They are more likely to do that if they are intelligently guided, or if the result is intelligently selected and fixed.

    Both guided, non random mutation, and intelligent selection, are good scenarios for intelligent design of biological information.

    Could you devise an experimental protocol that would induce organisms to exhibit some intelligent or front loaded change? Other than epigenetic change?

    In principle, it’s possible. But first we have to understand many things which at present are not clear. For instance, while I am not a fan of front-loading, and my favourite model is that of an intelligent intervention of the designer on living beings, still I can certainly admit that at least part of the building of biological information could be the result of intelligent adaptation through algorithms already implanted in living beings. That would be the Lamarckian part of evolution. In an old post, I have already expressed my tentative “model” for the various possible causal mechanisms:

    a) neo-Darwinian evolution: very limited power, usually restricted to microevolutionary changes (one or two coordinated mutations), as in antibiotic resistance.

    b) neo-lamarckian mechanisms: limited adaptive power, probably through complex controlled mechanisms like HGT and controlled tweaking of existing information. Could explain events like the emergence of nylonase from penicillinase, and similar modifications, probably involving a few coordinated mutations.

    c) design: direct intervention of a designer on existing biological information, to build new functions at any possible level, respecting the constraints determined by what already exists and by the environment. Creative power: very high (but not infinite). Through guided variation and/or intelligent selection of targeted random variation, or both, the designer can achieve new genes, new functions, new regulations and, more dramatically, new body plans. Favourite possible tool for intelligent modeling of the genome: transposons.

    That’s just a proposal, in order to stimulate the discussion.

    Many of these ideas are potentially open to research and analysis. For instance, a deeper understanding of models like antibody maturation, or the emergence of nylonase, or a serious analysis of the emergence of new genes in natural history, could certainly help to shape our understanding of how biological information can be intelligently built.

  290. veilsofmaya (#289):

    I have read with interest your explanations of your point of view. But I think there is always something you miss about ID.

    First of all, I agree with your analysis of solipsism. But I find that, through such an analysis, you tend to avoid a major empirical problem: the empirical nature of our perception of our own consciousness and of its processes.

    We cannot artificially create an all-or-nothing model of reality. Our model must include all known facts. Consciousness is a fact, and it is true that it is the “fact of facts”, the one through which all other facts are known. But that does not mean that what we perceive through our consciousness (the outer world, other human beings, physical events, etc.) is not a fact too. Or that our legitimate inferences about the outer world are not useful and good, even if they are not “absolute” in a metaphysical way.

    Our model of reality must be flexible and humble, and make the most of all the information we have. But, certainly, not addressing, or denying, one of the most notable parts of that information, that is, consciousness and its phenomena, the “fact of facts”, is arrogant folly.

    About ID, you miss the most important point: ID observes objective facts which are empirically connected, by analogy and quantitative analysis, to another empirical fact, which is intelligent design as we know it in humans, in ourselves. The point which no Darwinist has ever answered, and which I have made many times, is that the analogy between human design and the hypothesis of a conscious intelligent designer for biological information is not based “only” on the explanatory filter, or on the concept of CSI: those are just important tools to detect design. The fundamental concept is that we observe ourselves in the act of designing things, of producing CSI, and we know that our designed result is the product of specific conscious representations, including a purpose, a mental map of causal relations, our powers of deduction and inference, our will, and so on. That’s why we infer a similar process, and a similar agent, when we observe the marks of design in biological information.

    And “a similar agent” does not necessarily mean, as Darwinists like to think, “a human agent”: it just means “an agent who is conscious and intelligent, an agent who can experience the same kind of mental representations as we do”: purpose, deduction, inference, cognition, will. That is the origin of design: it doesn’t matter if the conscious intelligent agent is a human being, or an alien, or a god, or anything else, provided that it is a conscious intelligent being. That’s all that is necessary to implement design, and to produce designed things.

  291. Can differential death, that is, natural selection, bring about life?

    That question?

    I don’t understand the question.

    Nearly all multi-celled organisms die. There is no differential death.

    There is differential reproductive success which will, over generations, modify the genomes of populations.
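    As a concrete aside on “differential reproductive success will, over generations, modify the genomes of populations”: the standard population-genetics sketch of this is the one-locus selection recursion. This is my own illustrative Python of that textbook formula (haploid, infinite-population idealization), not anything from the comment itself.

    ```python
    def next_freq(p, s):
        """One generation of selection: an allele at frequency p with
        relative fitness 1 + s moves to p' = p(1 + s) / (1 + p*s)."""
        return p * (1 + s) / (1 + p * s)

    # A rare variant (1% frequency) with a 5% reproductive advantage.
    p = 0.01
    for _ in range(500):          # 500 generations of selection
        p = next_freq(p, 0.05)
    # p is now essentially 1: the variant has swept toward fixation,
    # without any individual necessarily "dying" -- only out-reproducing.
    ```

    The point of the sketch is just that a small, consistent difference in reproductive success compounds geometrically across generations, which is the sense in which selection modifies populations rather than individuals.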

  292. I agree on many of the things you say. What a change!

    I have no credentials in biology other than ten years on the internet debating evolution. I have very little ego wrapped up in ideas. I hate like fury being caught making a mistake, but I would rather learn something than win an argument while being wrong.

    But getting back to our differences, I appreciate your candor in saying you favor a scenario involving intervention. It’s certainly a logical stance, but I’ve said several times I don’t buy it.

    I don’t buy the argument that random variation is deficient in power. In fact I’m almost certain that non-random intervention would be a huge failure. Stochastic processes have huge advantages in complex systems. They allow the system to be less brittle, less apt to break under stress.

    There’s a huge cost in inefficiency in stochastic variation: lots of pain and death. But that is what we see if we look around.

    I’m not going to argue philosophy or respond to arguments based on philosophy or theology, but I will say just a bit about where I stand.

    I think stochastic processes are the heart and soul of what we experience as free will. I think all the really interesting process in the universe are stochastic and evolutionary.

    I think our own minds rely on stochastic processes and a kind of Darwinian selection. I think this is why we are complex and unpredictable. I also think the “darwinian” factor in our thinking is why we can be accountable for the consequences of our actions. Our thoughts and actions are shaped by consequences.

    Just as genomes are shaped by consequences.

    So while I am not theistic, I am not particularly hostile to theism. I just don’t think I am smart enough to understand the mind of God, and I find it unlikely that anyone will find the fingerprints or signature of God in natural phenomena. Without being a militant atheist, I think that from the human point of view, it’s “naturalism” all the way down.

  293. “Stochastic processes have huge advantages in complex systems. They allow the system to be less brittle, less apt to break under stress.”

    Not buying it.

  294. it doesn’t matter if the conscious intelligent agent is a human being, or an alien, or a god, or anything else, provided that it is a conscious intelligent being. That’s all that is necessary to implement design, and to produce designed things.

    I disagree. I think both molecular biology and ecosystems, taken together or separately, are complex in the mathematical sense of branching too quickly for any logical process to analyze.

    ID proponents are fond of using large numbers and probabilities to rule out randomness in producing structures. But I’ve never seen an ID proponent do the math to see what kind of mind it would take to anticipate the behavior of novel proteins in a molecular machine, much less the behavior of the machine in an ecosystem.

    Actually, I think they have, and most have concluded it would take an omniscient mind. Call it a quantum mind, capable of following all branches simultaneously. Call it a creator.

    So I think what ID proponents envision is a creator-mind capable of doing all the trial and error of evolution without involving time or effort.

    Could be, but humans seem to be stuck on one branch of the omniscient mind. To us it looks Darwinian.

  295. Not buying it.

    Quite frankly, I’ve never seen the concept applied to biology. I suggested it based on stuff I read some years ago about making tall buildings resistant to earthquakes.

    So I googled “stochastic stability.”

    http://www.cse.ucsb.edu/IGERT/.....bility.pdf

    Turns out there are a lot of people investigating this concept.

    http://www.google.com/search?h.....25%2C0%2C1

    Lots of references to molecular biology.

    The math is way over my head, but the basic concept is that introducing noise into a dynamic system can make it more stable.

    Since I don’t have to present this for peer review, I’ll fearlessly assert that life in general is more successful because of random noise in genomes.

    This really isn’t an argument. It’s just what I think.
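    For what it’s worth, there is a textbook instance behind the “noise can stabilize” idea: for geometric Brownian motion dx = a·x·dt + σ·x·dW, the solution decays to zero almost surely whenever σ²/2 > a, even though the noise-free system (σ = 0) grows exponentially. A minimal sketch, my own illustration using the exact closed-form solution x_T = exp((a − σ²/2)T + σW_T):

    ```python
    import math
    import random
    import statistics

    def gbm_endpoint(a, sigma, T, steps, rng):
        """Sample x_T for dx = a*x dt + sigma*x dW with x_0 = 1,
        using the exact solution x_T = exp((a - sigma^2/2)*T + sigma*W_T),
        where W_T is Brownian motion built from Gaussian increments."""
        w_T = sum(rng.gauss(0.0, math.sqrt(T / steps)) for _ in range(steps))
        return math.exp((a - 0.5 * sigma ** 2) * T + sigma * w_T)

    rng = random.Random(42)
    a, T = 1.0, 10.0

    # Without noise the system explodes: x_T = e^(a*T), roughly 22,000.
    deterministic = math.exp(a * T)

    # With sigma = 2 the Lyapunov exponent a - sigma^2/2 = -1 is negative,
    # so trajectories decay toward zero despite the unstable drift.
    noisy = [gbm_endpoint(a, 2.0, T, 1000, rng) for _ in range(20)]
    mean_log = statistics.mean(math.log(x) for x in noisy)   # expected near -10
    ```

    This doesn’t establish anything about genomes, but it does show the phrase “noise can make a dynamic system more stable” is mathematically respectable rather than hand-waving.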

  296. “The math is way over my head, but the basic concept is that introducing noise into a dynamic system can make it more stable.

    Since I don’t have to present this for peer review, I’ll fearlessly assert that life in general is more successful because of random noise in genomes.

    This really isn’t an argument. It’s just what I think.”

    Interesting, but I think, then, that this would be a clear case of design. Rather, wouldn’t it be the case that this noise would have to be implemented in a particular way?

  297. Petrushka [292]:

    There is differential reproductive success which will, over generations, modify the genomes of populations.

    What does differential reproductive success mean other than that some forms die off more than others? NS is simply differential death. Dawkins, in his The Blind Watchmaker, says that NS is basically the “grim reaper”.

    You’ve been trying to give the impression that the notions of front-loading and genetic entropy are opposed, and that this makes ID look silly as a proposition. Well, how silly a proposition is it to say, basically, that because form A dies more than form B, life will be produced? It is, at bottom, an illogical proposition to suppose that that which is the negation of what is, is somehow the cause of what is.

    But this is all caricature. That’s not the way to proceed, and it’s unbecoming. So do please refrain from it in the future.

    _______________________

    About the sea anemone, you asked: well, were the genes being used? Yes, they were. But the whole idea of “front-loading” is that genes are in place before they will be needed. Doesn’t it strike you as a little bit odd that the genes needed for vertebrate development are already found before they were ever needed? Doesn’t Darwinism presuppose that organisms develop greater complexity as they move along? And yet here the complexity that would be needed hundreds of millions of years later is already found at the very base of several lineages that won’t come into existence for eons hence.

    That said, ID doesn’t rise or fall with the notion of “front-loading”, but it is certainly consistent with it. In the papers announcing that these genes had been found, the results were “unexpected” (let’s read that as ‘something that Darwinism wouldn’t have predicted’). Here’s the paper: http://www.uibk.ac.at/cmbi/downloads/kusserow.pdf

    And here’s a quote: “Our result also points to an unexpected paradox of genome evolution: the gene diversity in the genomes of simple metazoans is much higher than previously predicted and some derived lineages (flies and nematodes) have an even lower diversity of gene family members. Thus there is no simple relationship between genetic and morphological complexity.”

    Part of the paper’s results show that in lineages that came into existence PRIOR to the mammalian lines, many of the Wnt gene families were lost, and yet they are found in the mammalian lines. Was this “genetic entropy”? Maybe so, likely not. Again, ID doesn’t rise and fall with the notion of genetic entropy.

    As to gigantism and extinction, this connection is known as Cope’s Rule. You can read about it here: http://en.wikipedia.org/wiki/Cope's_rule. And, BTW, you mention the hundreds of species that have gone extinct. Some, indeed, have; and yet we quite often get reports of species being spotted that were once considered extinct. I wouldn’t just accept it as fact when scientists say this or that species is extinct. Life is generally hardier than that.

    So, we’re back where we started: ID explains what we see; and what we see is immense complexity and self-replicating mechanistic wonders. There are simulations of what goes on in the cell that can be viewed and it is everything that I described above. So, now, if you want to “believe” that all of this came about by chance mechanisms, fine, but the better explanation is that it was designed.

  298. I forgot to make this point:

    In the quote from the paper, the authors write:

    Thus there is no simple relationship between genetic and morphological complexity.

    As I have been saying, population genetics is passé, only to be done for amusement.

  299. Petrushka:

    I respect your views, which this time you have expressed very clearly.

    Just take into account that stochastic processes can very well be part of a designed process. There is no contradiction in that.

  300. Petrushka:

    Could be, but humans seem to be stuck on one branch of the omniscient mind. To us it looks Darwinian.

    This is a very interesting view.

    In fact, this is what I think we have. Let’s for a moment just assume an Omniscient Creator, and let’s further assume that the Creator wishes to bring about, not a new species, but a new lineage—a phylum or class, let’s say—then chromosomal changes will appear, perhaps drastic ones. Further, changes to the form of the “egg” (which is like the ‘hardware’ of the system; so, with new software, we need new hardware) would very likely be made. Now, if we were ‘filming’ this process, and let’s say we film it using the fossil record, even if the fossil record were relatively complete, all we would have would be a ‘before’ and an ‘after’—like the diet ads! The Darwinist would say: random variation and selection; the IDist would say: designed change. So I don’t think there will ever be incontrovertible evidence for ‘creation’; yet, nonetheless, even with the evidence we now have, the better inference is design. Why? The utter complexity of the hardware and software, the fact that it can replicate, and the scale at which this all happens.

    This is where I think “belief” comes in. If someone is a committed materialist—knowingly or unknowingly—he/she would have a hard time accepting the design hypothesis. But logically, I think this is the only real alternative we have given “what we see”.

  301. This is where I think “belief” comes in. If someone is a committed materialist—knowingly or unknowingly—he/she would have a hard time accepting the design hypothesis.

    I don’t want to stir up trouble, but to me, ID looks like a tower of Babel, an attempt to see God.

    I don’t know what the TRUTH is, but my own belief system is most comfortable with the notion that from the human point of view, everything appears natural and will always appear natural, regardless of our state of knowledge. Somewhat in the way that all viewpoints in the universe appear to be the center of the universe.

    I think this state is a necessary entailment of free will. Regardless of how finely we divide matter, and regardless of how deep our understanding becomes, there will always be a stochastic element in causation. That is necessary to ensure that we are governed by consequences and not by antecedents.

    The ability to learn and be governed by consequences is my operational definition of free will, and evolution is an instance of a learning system.

    I differ entirely from people who think that species or kinds are embedded in creation. I think there is no limit to what can be, just as there is no limit to the number of words that can be made from an alphabet.

    Evolution is not just the leftovers from differential death. Evolution is a continuation of creation.

    No surprise, but I take a metaphorical view of Genesis. The alternatives for existence are continuous bliss, possible because everything is predictable and determined by antecedents, and continuous uncertainty with accompanying pain, made necessary because we are governed by consequences, and we must be forever learning. The Fall is a metaphor for the coming into being of an existence allowing free will.

    To be governed by consequences, it is necessary that at some level, antecedent causes be indeterminate.

  302. About the sea anemone, you asked: Well, were the genes being used? Yes, they were. But the whole idea of “front-loading” is that genes are in place before they will be needed. Doesn’t it strike you as a little bit odd that the genes needed for vertebrate development are already found before they were ever needed?

    I’m not a biologist, but there seems to be something wrong with the anemone’s genes being simultaneously useful and not needed.

    Again, my technical expertise is lacking, but I doubt if the anemone’s alleles appear unmodified in vertebrates.

    It has been part of my argument throughout my time at this forum that evolution since the Cambrian has been more a matter of tweaking genes rather than inventing them.

    And even the inventions appear to be modifications of existing sequences.

  303. Petrushka:

    Re: my own belief system is most comfortable with the notion that from the human point of view, everything appears natural and will always appear natural, regardless of our state of knowledge.

    Does that include the PC you typed your comment up on?

    That is, what do you mean by “natural”?

    For, if the PC has certain empirical characteristics that allow you to make a confident, empirically reliable judgement that it is an ART-ifact of a technology, then that set of principles and associated evidence form a body of things that can be scientifically studied.

    And, of course, once you see that, you have no good and consistent basis for then dismissing other similar signs of intelligent action, of art, of design. That the evidence pointing to design might be uncomfortable for a materialistic worldview is not a good enough reason to raise it as a scientific objection.

    On that, we can then look at the functionally specific complex organisation and associated information, much of it discrete state and algorithmic, that crops up in cell based life as a molecular network.

    So, while you may not see eye to eye with us, we hope you can see why we hold the view we do — and on an empirical basis, not an a priori one. (Remember, discovering that biological life on earth is credibly a technology does not by itself entail that the relevant designer is within or beyond the observed cosmos. In addition, we have very strong configuration space reasons for seeing that chance plus blind mechanical necessity are maximally unlikely to give rise to such complex, finely integrated and synchronised technologies.)

    GEM of TKI

  304.

    @gupuccio (#291)

    You wrote:

    But that does not mean that what we perceive through our consciousness (the outer world, other human beings, physical events, etc.) is not a fact too. Or that our legitimate inferences about the outer world are not useful and good, even if they are not “absolute” in a metaphysical way.

    gupuccio,

    This is what I mean by the implied theory of Solipsism.

    If one dares to actually take Solipsism seriously, one must concede that it predicts each and every phenomenon we observe as realists. This includes the ability to have conversations, receive enjoyment from eating ice cream, the desire to avoid pain, the ability to learn that protons and neutrons consist of quarks by building particle accelerators, etc. Solipsism does not claim these things cannot be studied or are not in some way useful. It just claims they are elaborate facets of the solipsist’s internal self. This is because Solipsism provides no other explanation. It’s this omission that forces us to reach this implied conclusion.

    So, again, if one dares to take Solipsism seriously, it becomes clear it’s a convoluted elaboration of realism and therefore a bad explanation.

    However, hardly anyone takes Solipsism seriously because, as you’ve mentioned in an earlier comment, virtually no one is actually a solipsist. And if you do not take Solipsism seriously, then you’re merely using it to attack realism, or it’s something one occasionally dusts off and trots out to illustrate a particular philosophical aspect of consciousness and the limits of human knowledge.

    In either case, despite being based on a valid limitation on what we can and cannot know, we can discard Solipsism as an actual working theory without merely appealing to intuition, usefulness, the desire to not be alone, etc.

    ID observes objective facts which are empirically connected, by analogy and quantitative analysis, to another empirical fact, which is intelligent design as we know it in humans, in ourselves.

    Please see above. When we take it seriously, Solipsism in no way suggests I cannot study how these object-like facets of myself behave and discover the uniform laws-of-physics-like facets of myself they follow using methodological observations. I really should expect all of these facets to behave in a specific way, which I could verify each and every time should I know how to do the necessary complex mathematics. In fact, according to the implied theory of solipsism, physicist-like facets of myself do it all the time. So, apparently, there is some part of myself that really can do the math. The problem is that Solipsism provides no explanation of how I can both know and not know simultaneously, which is precisely why I’m not actually a solipsist.

    But the fundamental concept is that we observe ourselves in the act of designing things, of producing CSI, and we know that our designed result is the product of specific conscious representations, including a purpose, a mental map of causal relations, our powers of deduction and inference, our will, and so on.

    While I hate to sound like a broken record, am I a realist because I think the mind is incapable of creating highly elaborate environments, interactions and intricate details? Of course not. Unless you’re one of the rare people who cannot remember having a dream, you know exactly what the mind is capable of first hand.

    I’ve had dreams where other people surprise me. I’ve had dreams where I appear to learn new things I did not know before. I’ve had dreams where I appear to write music I’ve never heard before, yet I’m not a musician. Clearly, if I take solipsism seriously, the solipsist can also make an argument via analogy from the observed ability of his mind, his power of deduction, etc.

    So, if my objection to solipsism isn’t due to a lack of capacity or ability of my mind, then what is it? It’s the specific details of the outcomes and behaviors that we observe, not that the mind is not capable of producing them.

    If everything you and I consider reality is actually internal facets of my mind, then why don’t things occasionally fall up instead of down? Why would object-like facets appear to follow laws-of-physics-like facets? Why, after suddenly hearing a bell ring out of nowhere, do I not occasionally find myself back in high school, not knowing what class I’m supposed to be in, etc.? In accounting for reality using one’s own mind, the solipsist negates realism’s explanation for the specific things we observe without providing one to replace it. As such, it’s a convoluted elaboration of realism.

    Hopefully the analogy to ID has become more clear.

    Yes, we are designers. But in proposing that what we observe is actually the result of intent, purpose, will and so on, you’re not just making an inference or attempting to solve isolated instances where complexity is supposedly beyond natural explanation. You’re actually presenting a vast implied theory about the specific entirety of biological complexity – past, present and future. In doing so, it’s the ‘specificity’ in CSI that is precisely what ID claims to shoulder, but actually does not. In merely positing some vague designer, it provides no reason for any specific rate, feature or structure over some other rate, feature or structure.

    In doing so, ID merely negates the current explanation without providing one of its own to replace it.

    If, as PaV suggests, there really is some dirty little secret in play here, it’s ID’s implied theory and the fact that it clearly has no intention of explaining it. That’s just what the designer happened to have chosen.

    Until ID provides a better explanation of why a designer just so happened to intentionally and willfully choose the specific rates, features and structures we observe, it will remain a convoluted elaboration of Darwinism.

    And, like solipsism, if you do not take ID seriously, then it’s merely a weapon used to attack darwinism or a “theory” specifically designed to support a particular philosophical / religious position, which isn’t science.

    Please note that I do not need to argue that neo-darwinism must be true. Nor am I making such a claim. Nor can I prove ID is false using empirical observations any more than I can prove an external reality exists outside of myself using empirical observations.

    Instead, I’m suggesting that, by the very way it is defined, the specific structure and formulation of biological ID represents a convoluted elaboration of some other theory. It’s a bad explanation.

    Until such time as ID decides to provide its own explanations to replace those it explains away from darwinism, we can discard it.

    However, given the presupposed designer that drives the entire ID movement, it’s unlikely such an explanation will be forthcoming. This is due to the way the presupposed designer is defined.

  305. Does that include the PC you typed your comment up on?

    Of course. It doesn’t violate any laws of physics.

    We can judge things to be made by humans because we have vast experience seeing things being made by humans, and a vast catalog of things known to be made by humans.

    Except at the very fringes of archeology, we do not judge objects to be human made using probability calculations. I’m not actually aware of any such calculations being used.

    We have no such experience with the origin or origins of life. We have never seen life being created, never seen a life creator in action. We have no knowledge of the powers and capabilities of such an agent, nor any knowledge of the methods used or the times and places when life creating events took place.

    None of these unknowns trouble us in the case of the computer.

  306. The problem of vast configuration space applies to potential designers just as much as it applies to chance and necessity.

    At every point in a molecular configuration there will be branching possibilities, and the possibility branches simply cannot be tracked by anything less than an omniscient intelligence.

    The designer must know things that are inaccessible to finite intelligence. The designer must be aware of all the emergent properties of increasingly complex assemblages of objects.

    We already know, from the difficulty of computing how proteins fold, that sequential computing cannot solve this kind of problem in the time available. When you multiply that problem by the number of potential gene sequences, you have a problem that is inaccessible to any kind of finite designer.

    So that pretty much narrows down the question of the designer’s capabilities. The designer must be omniscient.

    Unless, perhaps, the configuration space is richer than you imagine it to be, and functionality is a gradient.
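    The scale of the configuration space being argued over here can be made concrete. A back-of-envelope sketch in Python (the 20-letter amino-acid alphabet and 300-residue length are taken from the review under discussion; the 10^150 figure is the commonly cited bound on distinct physical events since the universe began):

```python
import math

# Sequence space for a 300-residue protein:
# 20 possible amino acids at each of 300 positions.
residues = 300
alphabet = 20

log10_space = residues * math.log10(alphabet)  # log10(20**300)
print(f"20^300 is about 10^{log10_space:.0f}")  # ~10^390

# Commonly cited upper bound on distinct physical events
# since the universe began: ~10^150.
log10_events = 150
print(log10_space > log10_events)  # True: the space dwarfs the resources
```

    Note the arithmetic says nothing about how steep or shallow the function gradients within that space are, which is the point at issue in this exchange.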

  307.

    Petrushka,

    None of these unknowns trouble us in the case of the computer.

    They trouble me, especially when I am asked to pay hundreds if not thousands of dollars for them. I’ve never seen anyone make a computer, and I take it on faith that all computers have been made by men. This is reasonable, but it is an inference nonetheless, just as the origin of life being the product of design is an inference. It is an inference the other way too (that life wasn’t designed). It just depends on what you consider reasonable. I do not consider happenstance material movements a satisfactory answer for what created the intricacies necessary for even the most basic of life. The most basic of life turns out to be much more complicated than the term “basic” would imply; life occurs in a nano-factory. My conclusion seems perfectly reasonable to me.

  308. Petrushka

    Pardon, but the careful calculation of your response above is revealing.

    We both know that the PC in front of you is chock full of features that are empirically recognisable and sharply distinguish it from items traceable to undirected chance and mechanical necessity.

    Also, you know full well that the PC reflects the “miracle” of intelligence; the source of its functionally specific, complex, purposeful organisation and the associated information. In short, you plainly know or should know that art is dramatically distinct from nature [i.e. we are not just locked up to the rhetorically loaded dichotomy natural vs supernatural], and that there are empirically reliable, routinely observed signs of art.

    So, when you write . . .

    We can judge things to be made by humans because we have vast experience seeing things being made by humans, and a vast catalog of things known to be made by humans . . .

    . . . we can easily enough clean it up a bit to show what is going on: “We can judge things to be made by humans [intelligent agents] because we have vast experience seeing things being made by humans [intelligent agents], and a vast catalog of things known to be made by humans [intelligent agents].”

    From that vast experience we have intuitively obvious and in principle quantifiable patterns that come down to functionally specific complex organisation, synchronisation and information. We routinely see that intelligence produces such, and we observe that undirected chance and mechanical necessity of nature do not.

    On good configuration space reasons we see why: functionally specific configs are vastly less thermodynamically probable than non-functional ones that are not so tightly specified. So, the statistical weight of the latter will overwhelm the former in spontaneous, undirected situations, with near certainty. In commonly met cases, where the information is over 1,000 bits, that is all but beyond the search capacity of the observed cosmos. (And yes, all of this is very closely related to the basic principles of statistical thermodynamics.)
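    For concreteness, the 1,000-bit figure can be checked in a couple of lines (simple arithmetic, nothing specific to biology):

```python
import math

# A 1,000-bit configuration has 2**1000 possible states.
bits = 1000
log10_states = bits * math.log10(2)
print(f"2^1000 is about 10^{log10_states:.0f}")  # ~10^301, versus ~10^150 events
```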

    So, we have empirically reliable signs of intelligence. With some pretty serious config space reasons to rely on them.

    When therefore we see cases where we do not directly observe the actual origin of the relevant systems, we have good reason to trust the signs, absent a priori worldview commitments that arbitrarily rule these signs and where they point out of court. (And BTW, not being there to directly observe did very little to prevent confident inference to estimated ages and scenarios of origins, on much weaker inductive grounds.)

    When in particular I see that the cell embeds discrete state code bearing systems, algorithms and tightly co-ordinated implementing machines, and the capacity to make a fresh copy of itself, I see excellent reason to infer that the cell is a technology, just not ours.

    Given that dolphins, apes, birds, beavers and even bees seem to show signs of intelligence (albeit at a more restricted level and not using symbolic language) I have no problem whatsoever in inferring from sign to signified art and artificer behind it. Much less than inferring from the physics of balls of hydrogen to stellar dynamics and timelines, and a lot less than inferring from various phenomena to proposed timelines for earth and life history. Statistical considerations on config spaces are a lot more direct and straightforward.

    So, the caginess of your response above is inadvertently deeply revealing.

    G’day

    GEM of TKI

  309. PS: Clive, I have had to roll my own on computers (at a programmed controller level) and so I have painfully vivid memories of what it takes to build and get such a complex entity to work. When I see someone staring a similar technology in the face and pronouncing that he is confident it all came about by blind chance and necessity acting on matter and energy, all I can do is shake my head at patent reductio ad absurdum. You can insist on it if you wish, but please don’t expect this old electronics hand with soldering iron scars (guess what happens when attention slips for a moment after hours and hours and you touch the wrong end?) to go there.

  310. Petrushka: Intelligence with insight cuts down astonishingly complex tasks to size. Did you ever try to figure out the config space of the alphanumerical characters in say War and Peace? (And besides, I expect to live to see the first self-replicating human tech 3-d machines. Yes, protein spaces are vast, but that is the precise point: somebody knew enough to use a very specific polymer family [almost all of which conform to a fairly simple chaining and side-branch structural formula] to create a world of life. Whether that somebody is within or beyond the cosmos, I do not know, but when I see the DNA-RNA-ribosome-enzyme machinery, I know what I am looking at. And, what it cannot reasonably be caused by.)

  311. I have to say I am troubled by your drawing inferences about what is knowable from what you know.

    I have built computers. I haven’t built the components, but I have made circuit boards from blanks and populated them with components.

    When I was a kid my older brother made working radios from copper wire wound around oatmeal boxes, and slivers of germanium.

    I know that electronic designs evolved bit by bit. Computers did not poof into existence from first principles.

    Many of the important features of current computers are the result of the marketplace rather than abstract principles of design. Many variations worked fine but didn’t sell.

    When you look at a collection of computers spanning the last 30 years, you see gradual and overlapping changes. The original PC expansion bus changed over time, and you can put computers in a hierarchy based on overlapping bus technology. Look in nearly any PC today and you will see at least two expansion card slots from different eras. Sometimes you will see CPU sockets capable of taking CPUs from several generations.

    You could place these machines in chronological sequence purely on the basis of overlapping technology, without knowing what the technology did.

    Evolution is a feature of design as well as a feature of biology. At least when design is implemented sequentially, and when it must build by slight modifications of what already exists.

    No finite designer can anticipate all the emergent properties of physical objects in complex configurations. All design embodies cut and try. All complex designed objects embody serendipity, properties that were not anticipated.

    I will grant that an omniscient being could see all the branches of possibility simultaneously, and reify some, but not others. Somewhat like sculpting.

    But the resulting network of realized possibilities, in our world, looks an awful lot like descent with modification.

  312. I know what I am looking at. And, what it cannot reasonably be caused by…

    I don’t think you do.

    Most importantly, I don’t think the configuration space has been explored to the extent that you can characterize the slope of all function gradients.

    This has been discussed here, and a very bright person named Axe has done some work on it, but I’d say his work is a bit like trying to illuminate a planet by lighting a candle from the distance of the moon.

    The fact is that ignorance about first life cuts both ways. It means that mainstream biologists should bite their tongues when tempted to pontificate about the history of first life, but it also means that ID proponents cannot calculate relevant probabilities.

    You say the probabilities against life forming spontaneously are astronomical.

    Really? Exactly what event are you referring to? Spell out the details of the transition that made life from dust. Were you there?

  313. Using human design as an analogy for the design of life cuts both ways.

    Certainly you can argue that a living thing is unlikely to arise spontaneously, but you can also argue convincingly that it wasn’t the work of anything having human limitations.

    The same big numbers that forbid spontaneous generation of a living cell also forbid its creation by a finite intelligence.

  314. VoM:

    Pardon a shortish remark.

    The fundamental issue on solipsism is that it is a philosophical point of view, and is to be evaluated using comparative difficulties relative to other worldviews. It is indeed empirically equivalent, but at the price of inferring a systematically misleading sense of the world in which we live.

    When we see that, we see that we are within our worldview rights to view it as inferior to those views that do not, absent compelling reason to accept it. Which is very much missing.

    By contrast, the inference to design is an empirical matter, that we routinely observe design in action, and its consequences and signs. So, we have excellent reason to distinguish and mark the signs of chance, mechanical necessity tracing to natural, lawlike regularities, and intelligence.

    When we do so, then it leaps out that cell based life has many features that reflect strong signs of being an artifact of an advanced information technology, one that uses molecular nanomachines.

    Absent a priori imposition of evolutionary materialism, that is the overwhelmingly superior explanation. (Indeed, after eight decades of active research, there is no credible materialistic account of OOL, for excellent reason. And we have a well known theory of the origin of information systems, one with massive empirical support.)

    So, it is not at all clear that the design inference on cell based life is a convoluted elaboration on an existing adequate explanation. Instead, it points to the explanation that would make sense of what is otherwise utterly resistant to explanation on the alternative: chance plus necessity.

    And that is not giving up; it is refusing to straitjacket science with materialistic blinkers.

    GEM of TKI

  315. Petrushka-

    “Using human design as an analogy for the design of life cuts both ways.

    Certainly you can argue that a living thing is unlikely to arise spontaneously, but you can also argue convincingly that it wasn’t the work of anything having human limitations.

    The same big numbers that forbid spontaneous generation of a living cell also forbid its creation by a finite intelligence.”

    First, re: “finite intelligence”. That’s an interesting point, which leads again to another (at least at this point) philosophical question regarding the nature of the intelligence required to produce (also capable of producing) the clear evidence of design seen in nature. You use the word “analogy” as a way, I think, to lessen the strength of the argument when in fact it is not an analogy but an observable fact. All design means, at its most basic level, is the fitting of parts to attain some goal or function.

  316. Petrushka:

    Can you show me that the genetic code is not a discrete state, string based code?

    Can you show me that protein assembly is not a step by step targeted process of finite duration? Can you tell me that the ribosome and support entities are not acting the part of implementing machines? [Onlookers kindly cf here, at 101 level, and top right this page for a video clip.]

    If you cannot acknowledge what is staring you in the face like that, then it looks to me that you have reduced yourself to absurdity.

    Sorry to have to be so direct.

    GEM of TKI

  317.

    Petrushka,

    Using human design as an analogy for the design of life cuts both ways.

    Certainly you can argue that a living thing is unlikely to arise spontaneously, but you can also argue convincingly that it wasn’t the work of anything having human limitations.

    The same big numbers that forbid spontaneous generation of a living cell also forbid its creation by a finite intelligence.

    But it is not a solution to do away with intelligence altogether because some lower intelligences aren’t up to the task. If lower intelligences aren’t up to the task, then no intelligence at all is even farther from it.

  318. Can you show me that the genetic code is not a discrete state, string based code?

    How many different words can be made using the English alphabet?

    Suppose we limit word length to a few billion characters.

    The question I raised is what order of intelligence is required to produce and test the properties (i.e., meaning in context) of every possible word.

    Consider the fact that some one character differences are synonyms, some have trivially different properties and some have extraordinary consequences.

    Consider that context changes. A word can mean different things if read as German as opposed to English.

    The designer of life must not only be able to anticipate all the emergent properties of sequences imposed by biophysics, but also the implications in changing environments and ecosystems.

    You think this could involve examining fewer cases than required for half a dozen mutations to accumulate serendipitously to produce a new structure?

    Bear in mind that a designer presumably produces an outcome specified in advance, whereas life only requires that changes are not fatal.

    Life does not need to produce flagella. Presumably a designer would set out with the goal of producing a flagellum.
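    The combinatorics behind that alphabet analogy can be sketched directly (a minimal Python illustration; the string lengths chosen here are arbitrary examples, not figures from the thread):

```python
# Number of distinct strings of length n over a 26-letter alphabet.
def strings_of_length(n: int, alphabet_size: int = 26) -> int:
    return alphabet_size ** n

for n in (1, 5, 10):
    print(n, strings_of_length(n))

# Even at length 10 there are over 10^14 candidate "words";
# at the few-billion-character lengths mentioned above,
# exhaustively testing every possibility is out of reach.
```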

  319. If lower intelligences aren’t up to the task, then certainly no intelligence is even farther from the task.

    But what is “the task”?

    If the task is to anticipate need and anticipate the emergent properties of complex biochemistry, then the problem of big numbers applies to the capabilities of the designer as well as to descent with modification.

    If you drop the need to anticipate, which all of mainstream biology did a hundred years ago, you are left with gradients.

    The question to be answered is not whether evolution can find targets specified in advance, but whether there are differences between any parent and child that cannot reasonably be traversed by the known mechanisms of mutation.

  320.

    Petrushka,

    The question to be answered is not whether evolution can find targets specified in advance, but whether there are differences between any parent and child that cannot reasonably be traversed by the known mechanisms of mutation.

    Is that what you think? That the whole question of ID relies on showing that there have to be differences between parent and child that are magical or not natural? But yes, the question is exactly whether evolution can or cannot find specified targets in advance. It cannot. And by “the task” I meant what is necessary for life; this should’ve been obvious from my comment.

  321. Petrushka

    You ducked the question: is or is not DNA a discrete state coded system, using G, C, A, T in two complementary, intertwined helices?

    Blatantly, yes.

    Then, is or is not the DNA-RNA-Ribosome-Enzyme system a discrete state step by step processor that uses sequences of actions to create new proteins by chaining AA’s?

    Again, yes.

    We have just identified a digital information system at the heart of the cell, and crucial to its work.

    Typical storage space: 100–500 k bases, and up to 3–4 billion. At 2 bits per base.
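    A quick back-of-envelope on those storage figures (plain arithmetic in Python; the byte conversion is just unit bookkeeping):

```python
# Convert a base count to bytes at 2 bits per base (4 symbols: G, C, A, T).
def genome_bytes(bases: int, bits_per_base: int = 2) -> float:
    return bases * bits_per_base / 8

print(genome_bytes(500_000))        # 125000.0 bytes for a 500 k-base genome
print(genome_bytes(4_000_000_000))  # 1000000000.0 bytes, i.e. roughly 1 GB
```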

    Then think about the role the system plays in metabolism and in reproduction. Then, factor in the requisites of a von Neumann self-replicator.

    Then, provide a plausible, empirically well supported model for the origin of life and of body plans.

    You will soon enough see why we identify functionally specific complex information, and point to its known source on inference to best, empirically anchored explanation.

    GEM of TKI

  322. PS: As an experienced designer, note that designers don’t scan all configs. We use knowledge, imagination, goals, models, heuristics etc. to get to what is close to working, then adjust in from there. (Do you seriously think that to design a controller I went through every component and config in a US$-millions electronics store, then picked what would work “best”? No, I drew up a general config, then specified the blocks and the synchronisation, all on paper. Then I used relevant info to design; then I built, tested and corrected, circuit by circuit. Similarly, no-one who has brains worth having will try to write a book by getting a million monkeys to bang away at keyboards at random.)

  323. I wonder if this is a compelling question. Why should anything have evolved past being one-celled organisms? What advantage does it confer purely on the level of survival and the passing on of genes? Wouldn’t anything beyond that be almost necessarily less fit until it reached an incredibly high level of complexity? This is from talkorigins on this subject, but it only talks about signal proteins that help organisms to “cooperate” in breaking down food.

    “Claim CB922:

    There are no two-celled life forms intermediate between unicellular and multicellular life, demonstrating that the intermediate stage is not viable.
    Source:

    Brown, Walt. 1995. In the Beginning: Compelling evidence for creation and the flood. Phoenix, AZ: Center for Scientific Creation, p. 9. http://www.creationscience.com/
    Response:

    The intermediate stage between one-celled and multicelled life need not have been two-celled. The first requirement is for signals between cells, which is necessary if cells are to cooperate in division of labor to break down a food source. Many bacteria utilize a variety of different signals. The evolution of a signal for cooperative swarming has been observed in one bacterium (Velicer and Yu 2003).

    The transition to multicellularity has been studied in experiments with Pseudomonas fluorescens, which showed that “transitions to higher orders of complexity are readily achievable” (Rainey and Rainey 2003, 72). Choanoflagellates, which are unicellular and colonial organisms related to multicelled animals, express several proteins similar to those used in cell interactions, showing that such proteins could arise in single-celled animals and be co-opted for multicellular development (King et al. 2003).

    Desmidoideae is a class of conjugating green algae, phylum Gamophyta. Most desmids form pairs of cells whose cytoplasms are joined at an isthmus (Margulis and Schwartz 1982, 100). The bacterium Neisseria also tends to form two-celled arrangements. As noted above, this may not be relevant to the evolution of multicellularity.”

  324. PS: As an experienced designer, note that designers don’t scan all configs. We use knowledge, imagination, goals, models heuristics etc to get to what is close to working, then adjust in from there.

    So after studying the problem of protein folding, what generalizations do you draw about what difference a small change to a gene sequence would make?

  325. Petrushka [302]:

    I don’t want to stir up trouble, but to me, ID looks like a tower of Babel, an attempt to see God.

    Despite what Darwinists say, ID is not creation science. I happen to be Catholic. Dave Scot, who moderated this blog for some time, was an atheist.

    If I want to “see God”, I just simply look at the face of 3-month old child.

    ID, personally, is a ‘scientific’ point of view; not a theistic one. I don’t need ID to support my faith. I just think, as I’ve stated before, it (makes more sense) “has greater explanatory power”.

    Evolution is not just the leftovers from differential death. Evolution is a continuation of creation.

    Well, I kind of see ‘evolution’ this way also. Now, let’s remember there is a distinction between evolution as a ‘fact’, which the fossil record affords us (not being a true creationist, this isn’t problematic for me), and the ‘process of evolution’. There are those who see themselves as believing in ‘theistic evolution’, which would be God moving things along via RM + NS. To me, science just doesn’t support this. THIS is what ID is all about—the scientific argument.

    There are also those, like Michael Denton, who see life as being so ‘optimized’ as to suggest that all of life, all that we see, is the direct consequences of the laws of nature present from the very beginning.

    To me, this isn’t too far away from Deism, and, so, I find this a problematic view of God as Creator.

    No surprise, but I take a metaphorical view of Genesis. The alternatives for existence are continuous bliss, possible because everything is predictable and determined by antecedents, and continuous uncertainty with accompanying pain, made necessary because we are governed by consequences, and we must be forever learning. The Fall is a metaphor for the coming into being of an existence allowing free will.

    The traditional view would be of God and man in perfect harmony, and of sin disrupting this harmony—through an abuse of free will.

    To the liberal mind, not really believing in God, or, if believing in God, not wanting to accept the reality of sin, personal sin, because of its ramifications (that is, Hell), the idea of Original Sin is rejected. Then ‘sin’ becomes a by-product of, a residue of, human interactions. And, ultimately, an ill use of Reason. (Reason becomes ‘deified’ as our Savior) The antidote is, as in Christianity, ‘repentance’, but a form of ‘repentance’ that means ‘learning how to do it right’. So ‘learning’ (reason) becomes the proper response to sin. Hence, ‘sensitivity training’ classes.

    As to all of this—theologically—I would say that only those people who want to go to Hell, go there. Why do they go there? Because they hate God, and don’t want to go to Heaven; so that only leaves Hell. I’m not saying don’t worry about sinning—because we shouldn’t want to offend God, just as we wouldn’t want to do something that would embarrass our parents or our family—but God is more powerful than our sinfulness. The horror of Jesus’ crucifixion is the culmination of all human hatred of God (and man!); but then there is the Resurrection. God has the final word in all of this.

    I think many who think of themselves as ‘atheists’ and ‘agnostics’, at bottom, simply fear God’s punishment. I wonder if your experience was anything like mine growing up: I was always afraid of what my Dad might do if I did something really wrong—the fear was there, but my Dad never hit me. And, when I did something that I thought he certainly would punish me for, he was the most gentle and understanding.

    Hope you catch the drift.

  326.

    @kairosfocus (#314)

    The fundamental issue on solipsism is that it is a philosophical point of view, and is to be evaluated using comparative difficulties relative to other worldviews.

    This in no way prevents solipsism from being a convoluted elaboration of realism. In fact, realism is the world view that solipsism tries to explain away. Again, the point I’m making here is there really is such a thing as a bad explanation and there are specific properties we can look for to identify them. This is independent of the subject matter, scope or scale.

    Furthermore, there are other theories which are also convoluted elaborations. For example, I’ve illustrated elsewhere how the Inquisition’s implied theory of planetary motion is a convoluted elaboration of heliocentrism.

    It is indeed empirically equivalent, but at the price of inferring a systematically misleading sense of the world in which we live.

    While our intuition is important, it does not scale well to the very large, the very small or the very complex. This is non-controversial, as we’ve historically seen this time and time again across multiple domains. Furthermore, we exhibit cognitive biases which tend to cloud our perception. For example, when a ruler is applied to reveal that an optical illusion presents a misleading feature or scale, we immediately revert to seeing the illusion again once the ruler is removed. If this bias remains even in situations where we have empirical knowledge that clearly tells us otherwise, what of situations where competing theories make equivalent empirical predictions?

    You can see an example in this TED talk video by Dan Ariely.

    And, as Petrushka noted earlier…

    Could be, but humans seem to be stuck on one branch of the omniscient mind. To us it looks Darwinian.

    Given these obvious problems, my point is that we need not merely appeal to intuition to reject solipsism. By analyzing the structure of a theory we really can determine that it is a bad explanation via its being a convoluted elaboration of some other theory. As such, we can discard it. This is in contrast to suggesting solipsism is somehow ‘unsatisfying’, which is vague in the sense that it’s unclear what or who is being satisfied.

    By contrast, the inference to design is an empirical matter, that we routinely observe design in action, and its consequences and signs. So, we have excellent reason to distinguish and mark the signs of chance, mechanical necessity tracing to natural, lawlike regularities, and intelligence.

    That we intelligently design things is an empirical matter. This we agree on. However, that any specific biological complexity we observe was actually the result of intelligent design is not.

    As I’ve illustrated in my previous comment, the solipsist and IDist can make an argument via analogy and inference to empirical observations.

    In the case of solipsism, I’m *not* suggesting that the mind is incapable of creating highly elaborate environments, interactions and intricate details. Again, unless you cannot remember dreams, you know first hand what the mind is capable of. What I object to is the lack of explanation provided by the solipsist as to why our internal selves would just so happen to create the specific environments, interactions and details we actually observe. Not only does solipsism invalidate the explanation provided by realism, but it fails to provide an explanation to take its place.

    In the case of biological ID, I’m *not* suggesting that an intelligent agent would be incapable of designing the biological complexity we observe. As a software engineer, I have first hand knowledge that intelligent agents can and do design things. What I am objecting to is that ID fails to explain why an intelligent agent would just so happen to design the specific biological complexity we actually observe. I’m objecting to how ID invalidates the explanation of Darwinism, yet fails to provide an explanation to take its place. Furthermore, it’s clear that ID has no interest whatsoever in providing such an explanation. There is no analog to abiogenesis in ID, for reasons that are obvious.

    When we do so, then it leaps out that cell based life has many features that reflect strong signs of being an artifact of an advanced information technology, one that uses molecular nanomachines.

    Again, I’m *not* suggesting an intelligent agent is incapable of designing molecular nano-machines, etc. Nor do I claim that current empirical observations show this is false. But I don’t have to because when we look at the implied theory ID presents, it becomes clear it’s a convoluted elaboration of Darwinism.

    It’s essentially the same as neo-Darwinism except that, at a point depending on which variant of ID you happen to support, a designer caused/selected a change rather than a natural process. The empirical observations are the same, except a mysterious designer causes all of the positive changes, or orchestrated a specific series of positive and negative changes that resulted in a specific desired outcome, etc. (I’m obviously simplifying here to keep things short)

    If biological complexity is actually caused by the intentional design process of an intelligent agent, rather than an incremental and undirected process, why should we observe genes changing at a rate that is even remotely close to what neo-Darwinism predicts, rather than, say, 10,000+ all at once? Why do all species share the exact same four DNA bases? Why did over 95% of all species that ever existed go extinct if a designer could have chosen 50%, 10% or 1% instead? You simply have no answer, other than “That’s what the designer happened to have chosen.”, which is a non-answer.

    This is analogous to the question of why object-like facets of myself would follow laws of physics-like facets of myself if they are internal. When you discard realism, there is no particular reason to expect this to occur.

  327. veilsofmaya:

    Yes, we are designers. But in proposing what we observe is actually the result of intent, purpose, will and so on, you’re not just making an inference or attempting to solve isolated instances where complexity is supposedly beyond natural explanation. You’re actually presenting a vast implied theory about the specific entirety of biological complexity – past, present and future. In doing so, it’s the ’specificity’ in CSI that is precisely what ID claims to shoulder, but actually does not. In merely positing some vague designer, it provides no reason for any specific rate, features or structures over some other rate, features or structure.

    That’s absolutely not my view of ID. I believe you are countering a position which is not ID in itself, but maybe some personal interpretation of ID.

    The designer is not vague. We can certainly know a lot about the designer, exactly as we know a lot about the designers of human artifacts, by a serious scientific approach, and by analysis of the design, of its functions, of the context where it is found, and so on.

    This idea that the designer of biological information should in some way be treated differently than any other designer is false. It derives from the false assumption that the designer must be God, that the designer must be omnipotent, that the designer must be omniscient, and that the purpose of ID is to support a religious view.

    All of that is false. The designer of ID must be treated by science exactly like any other designer. The only assumption of ID is that the designer (or designers) is a conscious intelligent being, IOW that he shares with us those abilities of conscious representation, intent, will, cognition, which make us designers.

    The analysis of the CSI we find in biological information can tell us a lot about the designer. Both the analysis of the specification and of the complexity. They are a precious source of information.

    I have argued many times here, probably also with you, that the evidence does point to a designer, but not necessarily to a designer who can do anything he likes, whenever he likes, wherever he likes. IOW, the designer appears to act in a very specific context, and to be limited by that context.

    That obviously does not exclude that ultimately the designer can be a god, but it does suggest that, even if that were the case, that god was acting in a context (maybe according to his own will), and that such a context determined specific constraints.

    That’s why the biological designer appears so similar to human designers in many respects (even if, I must admit, by far better than us :) ). That’s because we too have to operate under severe constraints.

    We do not say: well, I want a software which gives me the first 100 prime numbers: let it appear! That does not work. We have to conceive the software, represent it in our minds, make attempts at implementation, verify them, remove the bugs, and so on.

    The same is apparently true for the biological designer: that’s exactly what we apparently observe in natural history. And that’s what I believe has happened: a gradual, intelligent implementation of ideas and intents, through a patient process of design in absolute respect and intelligent exploitation of natural laws.

    That treasure of information is there, in the facts we can observe. We just have to interpret it, and we can only do that in a scenario of ID, not certainly in the wrong scenario of neo-darwinism.

    So, it’s absolutely not true that ID:

    “In merely positing some vague designer, provides no reason for any specific rate, features or structures over some other rate, features or structure”.

    It’s the opposite: by making the strong and specific inference of design, ID prompts us to ask ourselves: why do we observe specific rates, times, structures, figures, and not others? How can we explain that in terms of design, intent, resources, constraints? What does the observed reality tell us about all that?

    That is not vague. In no way can we explain away facts just saying that “the designer did that”. That’s not the spirit of ID, and never has been. That’s rather a purposeful deformation of ID by its opponents, what is usually called “a straw man argument”.

    ID is a scientific approach. All its tools are completely pragmatic. Any argument which uses philosophical or metaphysical issues is not an ID argument, be it from IDists or from their opponents. The only “supernatural” issues in ID are those about consciousness, intelligence and agency: if they are “supernatural”, then humans are “supernatural” too, and so are their designed creations.

  328. Petrushka:

    Evolution is a feature of design as well as a feature of biology. At least when design is implemented sequentially, and when it must build by slight modifications of what already exists.

    No finite designer can anticipate all the emergent properties of physical objects in complex configurations. All design embodies cut and try. All complex designed objects embody serendipity, properties that were not anticipated.

    I will grant that an omniscient being could see all the branches of possibility simultaneously, and reify some, but not others. Somewhat like sculpting.

    But the resulting network of realized possibilities, in our world, looks an awful lot like descent with modification.

    That’s correct. See my previous answer to veilsofmaya. We are not postulating an omniscient designer here. There is no need for that.

    The problem of vast configuration space applies to potential designers just as much as it applies to chance and necessity.

    At every point in a molecular configuration there will be branching possibilities, and the possibility branches simply cannot be tracked by anything less than an omniscient intelligence.

    The designer must know things that are inaccessible to finite intelligence. The designer must be aware of all the emergent properties of increasingly complex assemblages of objects.

    We already know, from the difficulty of computing how proteins fold, that sequential computing cannot solve this kind of problem in the time available. When you multiply that problem by the number of potential gene sequences, you have a problem that is inaccessible to any kind of finite designer.

    That is not true. Intelligent algorithms, based on some knowledge of the search space, can greatly optimize the search and make it approachable, as Dembski and Marks have so brilliantly shown. The vast configuration space is an unsolvable limit only for unguided search.

    And while it is true that it is very difficult to compute how proteins fold, that is not a problem which cannot be solved algorithmically in finite time: indeed, a lot of people are trying to solve it, and making progress. It just requires very big computational resources, but certainly not infinite resources.
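    The scale of the configuration space being traded back and forth here is easy to check; a minimal sketch using the figures cited in this thread (a 20-letter amino-acid alphabet, a 300-residue protein, and 10^150 as the bound on physical events):

```python
import math

AMINO_ACIDS = 20     # standard amino-acid alphabet
CHAIN_LENGTH = 300   # a moderately large protein, per the review under discussion
EVENT_BOUND = 150    # 10^150: the cited maximum number of physical events

# decimal exponent of the raw sequence space 20^300
space_exp = CHAIN_LENGTH * math.log10(AMINO_ACIDS)
print(f"20^{CHAIN_LENGTH} ≈ 10^{space_exp:.0f}")            # ≈ 10^390

# even exhausting the event bound leaves almost all of the space unvisited
print(f"unsampled factor ≈ 10^{space_exp - EVENT_BOUND:.0f}")  # ≈ 10^240
```

    Both sides of this exchange accept the arithmetic; the disagreement above is over whether a guided search escapes it.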

    And it’s not true that:

    “The designer must know things that are inaccessible to finite intelligence. The designer must be aware of all the emergent properties of increasingly complex assemblages of objects.”.

    Absolutely not. The designer, like all other designers, can certainly implement gradually, and through a very simple process of trial and error, intelligently guided, just as we do. IOW, the design need not be perfect, it need not be instant, it need not be definitive.

    Is it so difficult to understand that biological design can have (and indeed I believe does have) all the characteristics of human design?

  329. A note or two (or so):

    1] Petrushka:

    I don’t need to try to deduce heuristics on protein folding to see that the protein manufacturing process is just that, and that it embeds a functionally specific, complex, carefully organised digital, flexible program information system.

    The best explanation for such a system, especially in the context of metabolic action AND self-replication, is design. Beyond that, it becomes plain that the protein class of molecules was very carefully selected as a family of information-based polymers where the side chains and a balance of inter- and intra- molecular forces would produce a Swiss army knife range of useful functions.

    Whether such a technology is beyond the reach of finite, fallible intelligences like us, is irrelevant to the facts that ground an inference to design as best explanation of a feature of the natural world. (I don’t need to go there to have good reason to infer to design. And, I already have another class of design inference that points to an extra-cosmic, powerful designing intelligence: that on the fine-tuned cosmos.)

    2] PaV:

    Excellent!

    3] VOM:

    That solipsism may or may not be a convoluted elaboration on realism, leading to a preference for some variety of [chastened!] realism as one’s worldview choice on say Occam’s principle is precisely an illustration of the philosophical methodology of comparative difficulties in action.

    Thus, the matter is a meta question, one beyond science.

    Within science, we have already seen that there are many things that ground the principle that there are empirically reliable signs of intelligence. In particular, functionally specific, complex organisation and associated information is such a sign.

    Whenever we see the sign, on the principle of uniformitarianism and trust in well-tested (albeit inevitably provisional) inductive conclusions, we have every epistemic right to infer to the signified. In this context, of course it is logically possible that chance can throw up any contingency whatever, including the text of this post. But the config space implied is so large that we readily see that artificially constricting the explanation of contingent outcomes to chance, by suppressing the inference to art when it is inconvenient to an a priori commitment to evolutionary materialism, is arbitrary censorship. Thus the idea that imposing such censorship simplifies matters is absurd.

    Science in the end must be about the pursuit of the truth about our world based on empirical evidence and good reasoning. One of these patterns is inference to best warranted explanation; and, on reliable signs of intelligence, art is a better explanation of this post than chance. (Why is law not in the race: it is precisely not a source of contingency, but of natural regularities: dropped heavy objects reliably fall.)

    So, let us note Einstein’s stricture on Occam: everything should be as simple as possible, but not simpler than that.

    That is, there is a point where one becomes simplistic.

    And, when it comes to accounting for the relevant levels of complexity, at origin of life as we know it, we need to account for upwards of 100,000 base pairs worth of genetic information.

    For the cluster of dozens of body plans reflected in the Cambrian fossil life revolution the increment in DNA base pairs — notice how I am not talking of “genes” but of 4-state bases in a string data structure — is credibly tens to hundreds of millions to get from unicellular forms to the variety of complex body plans at Phylum and sub-phylum levels.

    All this in the context of functionally specific, complex digital information systems with associated functionally specific complex information well beyond the threshold of 1,000 bits that exhausts the search capacity of our observed cosmos.
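    The 1,000-bit threshold mentioned here can be put in the same decimal terms; a minimal sketch (10^150 is the event bound used throughout this thread):

```python
import math

BITS = 1000
# a 1,000-bit string has 2^1000 possible configurations
config_exp = BITS * math.log10(2)
print(f"2^{BITS} ≈ 10^{config_exp:.0f}")       # ≈ 10^301

# ratio of configurations to the cited 10^150 event bound
print(f"excess ≈ 10^{config_exp - 150:.0f}")   # ≈ 10^151
```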

    We know on strong induction that the best explanation for that is: design. At least, absent a priori censorship on evolutionary materialist presuppositions. Which subvert science from being a pursuit of the truth about our world on evidence, into a stalking horse for one of the most destructive worldviews ever. That is a perversion of science.

    GEM of TKI

  330. PaV (326),

    “I think many who think of themselves as ‘atheists’ and ‘agnostics’, at bottom, simply fear God’s punishment.”

    That makes no sense. If you fear something then you think the thing you fear is real, otherwise there is nothing to fear. If an atheist fears God’s punishment then that atheist believes there is a God (and one that punishes at that) and hence is not really an atheist. CS Lewis went through this kind of phase (not necessarily over God’s punishment though).

    Theists put out lots of reasons why people are atheists, and pretty much all of them are false. The other one that crops up, equally wrongly, is that atheists don’t believe in God because they want to behave badly. I am an atheist and I can tell you precisely why I became one (from being a Protestant, but in a mixed Catholic/Protestant area at the time) – I simply realised, gradually, that there is no objective evidence for a God. What also contributed was the sheer lack of rational answers to questions put to priests etc., other than to be told something was a “mystery” or given a vague response based on blind acceptance of either scripture or doctrine. At the end of the day, about 30 years ago, I simply became unconvinced there is a God. Nothing to do with fear of God’s punishment; nothing to do with wanting to have my end away with the neighbour’s wife, with no comeback; just a simple lack of objective evidence for it. And all of the atheists I know – without exception – either went through the same experience or weren’t from a faith background anyway. Not one of them fear the punishment of an entity that doesn’t exist.

    Some of them may well have had their end away with the neighbour’s wife, but then again some theist friends definitely have!

  331. Gaz you state there is no objective evidence for God and that is why you became an atheist:

    I strongly object to that assertion:

    Intelligent Design – The Anthropic Hypothesis
    http://lettherebelight-77.blog.....is_19.html

    In fact I find it impossible to rationally comprehend the universe or quantum mechanics without an “objective”, real and living God.

    But on a more personal note, let me and you focus on whether there is “objective” evidence for Gaz and bornagain77, us personally, to believe that we have souls that may live past the death of our physical bodies:

    Near Death Experiences – Scientific Evidence – Dr Jeffery Long – Melvin Morse M.D. – video
    http://www.metacafe.com/watch/4454627

    Miracle Of Mind-Brain Recovery Following Hemispherectomies – Dr. Ben Carson – video
    http://www.metacafe.com/watch/3994585/

    Removing Half of Brain Improves Young Epileptics’ Lives:
    Excerpt: “We are awed by the apparent retention of memory and by the retention of the child’s personality and sense of humor,” Dr. Eileen P. G. Vining; In further comment from the neuro-surgeons in the Johns Hopkins study: “Despite removal of one hemisphere, the intellect of all but one of the children seems either unchanged or improved. Intellect was only affected in the one child who had remained in a coma, vigil-like state, attributable to peri-operative complications.”

    Blind Woman Can See During Near Death Experience (NDE) – Pim Lommel – video
    http://www.metacafe.com/watch/3994599/

    Kenneth Ring and Sharon Cooper (1997) conducted a study of 31 blind people, many of whom reported vision during their Near Death Experiences (NDEs). 21 of these people had had an NDE while the remaining 10 had had an out-of-body experience (OBE), but no NDE. It was found that in the NDE sample, about half had been blind from birth. (of note: This “anomaly” is also found for deaf people who can hear sound during their Near Death Experiences (NDEs).)
    http://findarticles.com/p/arti....._65076875/

    Quantum Consciousness – Time Flies Backwards? – Stuart Hameroff MD
    Excerpt: Dean Radin and Dick Bierman have performed a number of experiments of emotional response in human subjects. The subjects view a computer screen on which appear (at randomly varying intervals) a series of images, some of which are emotionally neutral, and some of which are highly emotional (violent, sexual….). In Radin and Bierman’s early studies, skin conductance of a finger was used to measure physiological response They found that subjects responded strongly to emotional images compared to neutral images, and that the emotional response occurred between a fraction of a second to several seconds BEFORE the image appeared! Recently Professor Bierman (University of Amsterdam) repeated these experiments with subjects in an fMRI brain imager and found emotional responses in brain activity up to 4 seconds before the stimuli. Moreover he looked at raw data from other laboratories and found similar emotional responses before stimuli appeared.

    In The Wonder Of Being Human: Our Brain and Our Mind, Eccles and Robinson discussed the research of three groups of scientists (Robert Porter and Cobie Brinkman, Nils Lassen and Per Roland, and Hans Kornhuber and Luder Deeke), all of whom produced startling and undeniable evidence that a “mental intention” preceded an actual neuronal firing – thereby establishing that the mind is not the same thing as the brain, but is a separate entity altogether.

    “As I remarked earlier, this may present an “insuperable” difficulty for some scientists of materialist bent, but the fact remains, and is demonstrated by research, that non-material mind acts on material brain.” Eccles

    “Thought precedes action as lightning precedes thunder.”
    Heinrich Heine – in the year 1834

    In The Presence Of Almighty God – The NDE of Mickey Robinson – video
    http://www.metacafe.com/watch/4045544

    The Day I Died – Part 4 of 6 – The NDE of Pam Reynolds – video
    http://www.metacafe.com/watch/4045560

    Genesis 2:7
    And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul.

    notes and links on the “spiritual” aspect of man:
    http://docs.google.com/Doc?id=dc8z67wz_4d8hc876j

  332. bornagain77 (332),

    You illustrate why men of the cloth never managed to convince me there was objective evidence of God. Their response – as yours seems to be – is along the lines of “people have near death experiences, therefore there is a God”. That isn’t anywhere near being objective evidence. There is at least one, more likely several, steps missing – for example, assessing what the near death experiences actually are: are they real events or are they internal perceptions of an individual brain, such as distorted memories (like dreams) arising as neural pathways change and shut down?

    I have never been at all convinced about near death experiences as being anything related to God. An aunt of mine talked about an aunt of hers who spoke of “going to the lights” in the seconds before she died, and seeing lights seems to be a common experience near death. Yet a far more plausible explanation is that the senses and processing centres of the brain are beginning to shut down, or at least stop working properly, partly because of the reduction in the brain’s oxygen supply that happens as a body begins to die. It may even be possible that light photons are generated as chemical processes take place in a decomposing organ such as a brain.

    I’m happy to look further at your evidence – but not on a scattergun approach such as the one you’ve taken in your last post. If you want to take it further, can you pick your single best piece of evidence and say why it is objective evidence for God (i.e. not just say it IS objective evidence for God – say WHY it is objective evidence for God).

  333. Gaz:

    Were I in your shoes I would first do a self-critique on my criteria of objectivity and my concepts of evidence and “proof,” thence knowledge.

    GEM of TKI

  334. kairosfocus (334),

    You may wish to consider taking your own advice.

  335. Gaz:

    As the above link documents in a fair amount of detail, I have long since done so.

    It is therefore sad to see your resort to the rhetoric of turnabout, as that is precisely a classic symptom of the problem of selective hyperskepticism.

    GEM of TKI

  336. Gaz you state:

    “for example, assessing what the near death experiences actually are: are they real events or are they internal perceptions of an individual brain, such as distorted memories (like dreams) arising as neural pathways change and shut down?”

    But that point was addressed specifically here:

    Near Death Experiences – Scientific Evidence – Dr Jeffery Long – Melvin Morse M.D. – video
    http://www.metacafe.com/watch/4454627

    and here:

    The Day I Died – Part 4 of 6 – The NDE of Pam Reynolds – video
    http://www.metacafe.com/watch/4045560

    And to top all that off for the “physical” validity of NDE’s that I presented, which you did not bother to look at, I also showed you a study showing over 90% of blind people “seeing” in their NDE’s,,, That is pretty darn good objective evidence! Which is a lot more than I can say for any of the “objective” evidence that has ever been presented to me for neo-Darwinian evolution.

    You say you wanted objective evidence for God and I presented this,,,

    ID – The Anthropic Hypothesis
    http://lettherebelight-77.blog.....is_19.html

    but, again you can’t be bothered to read anything and say you want just one piece of evidence out of the overwhelming wealth of evidence that makes the case in the paper,,, So for just one piece of evidence from the paper I present:

    Explaining Information Transfer in Quantum Teleportation: Armond Duwell †‡ University of Pittsburgh
    Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) — Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport.
    http://www.cas.umt.edu/phil/fa.....lPSA2K.pdf

    The reason this evidence presents “objective” evidence for God is that information is shown to be transcendent of matter-energy and space-time in the teleportation experiment. Yet we know that matter-energy and space-time were brought into existence by a transcendent cause at the Big Bang. That it would be found to take infinite information to “mathematically define” the (photon) qubit is in fact exactly what we would expect to find if energy, which is the primary “material” component of this universe, were in fact created by the infinite, and perfect, transcendent mind of God, since only God has access to infinite information so as to implement it in such a precise way as to make a photon arise.

  337. Gaz as far as man never being able to create light from infinite information,,,

    Job 38:19-20
    “What is the way to the abode of light? And where does darkness reside? Can you take them to their places? Do you know the paths to their dwellings?

  338. bornagain77 (337),

    I don’t see how you consider this objective evidence of God. The paper you link to states:

    “Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a qubit, an infinite amount of information must be transferred to teleport. Thus, their explanatory project is to provide a mechanism whereby all the information required to specify a qubit is actually transferred. There seems to be no reason to make the inference that motivates this explanatory project. In fact, there is reason to doubt that it is a good one to make. If it were true that information actually had to be transferred, backwards teleportation should not be possible. Recall there seems to be no possible mechanism for information transfer in backwards teleportation. Backwards teleportation is possible, so it seems that the inference made by Bennett, et al. is not a good one to make.”

    By my reading, this suggests that the model whereby infinite information is required to specify a qubit is not actually a good model (and the author goes on to suggest Concept 3 is the better model). This paper actually appears to go AGAINST your argument. Where does this leave your objective evidence for God?

    Re: your 338: I’m afraid Job is not objective evidence for God either.

  339. Gaz:

    I have never considered belief in God as an issue really relevant here, but just because I like your posts, I will say a couple of words.

    First of all, I don’t believe that belief in God (or, if we want, belief that God does not exist) is ever the result of merely cognitive or rational arguments. The cognitive aspect is important, and it can and should guide the personal search of each of us, but there is no doubt that such a fundamental choice of worldview is also determined by other components (feeling, intuition, experiences, and whatever). And I think that it’s right that way.

    That said, I deeply respect those atheists who have reached sincerely and serenely their convictions, and who sincerely put them to the test not only of their reason, but also of their personal life. Which is exactly the same feeling I have for religious people who do the same.

    Fear of God’s punishment is, IMO, a very bad motive, either for believing in God or for not believing in Him. That does not mean that it is an uncommon motive. I would like to share your conviction that fear of punishment is not a strong motivation, both in atheists and in religious people, but I am not completely optimistic about that. But I am sure that you are really sincere when you say that it was never an issue for you, and I am happy of that.

    Love, love for truth, and a deep sincere need of the soul are certainly better motives for our worldviews, whatever they may be. I hope and believe that they are the foundation for yours.

    Finally, a couple of words about NDEs. Just some friendly advice: don’t underestimate the huge amount of information about that kind of phenomena. They are really interesting, and IMO often very convincing. I don’t think anybody should believe in God only because of that: first of all, they are at most evidence for personal survival, and not necessarily for the existence of God; and second, as I have said, nobody will ever believe in God only because there is evidence. If that were the case, I think everybody would spontaneously believe in God, because I really believe that the evidence is everywhere. So, NDEs are just one more example.

    But they are a very interesting example. And one rich in information. If you read the serious literature about them, maybe you will find that there is something worthwhile in it.

    One important mark of a good worldview, be it religious or atheistic, is remaining open to glimpses of truth, wherever we can find them. I sincerely wish you that your worldview may happily accompany you throughout your life.

  340. gpuccio (340),

    Thanks for the post and the sentiments in it. Incidentally I appreciate your posts too. Despite the fact we probably come to different conclusions I have to say I agree with all of your post here with the possible exception of the second to last paragraph about NDEs being convincing.

    The very best to you and yours, both for now and the future – whatever it holds and however far it may stretch!

    Gaz, despite the author’s objection, there is another line of evidence that overturns his doubts and concretely suggests that “specified” infinite information is what the photon qubit is actually made of:

    More supporting evidence for the transcendent nature of information, and how it interacts with energy, is found in these following studies:

    Single photons to soak up data:
    Excerpt: the orbital angular momentum of a photon can take on an infinite number of values. Since a photon can also exist in a superposition of these states, it could – in principle – be encoded with an infinite amount of information.
    http://physicsworld.com/cws/article/news/7201

    They provided proof of principle here:

    Ultra-Dense Optical Storage – on One Photon
    Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image’s worth of data into a photon, slow the image down for storage, and then retrieve the image intact.
    http://www.physorg.com/news88439430.html

    This following experiment clearly shows information is not an “emergent property” of any solid material basis as is dogmatically asserted by some materialists:

    Converting Quantum Bits: Physicists Transfer Information Between Matter and Light
    Excerpt: A team of physicists at the Georgia Institute of Technology has taken a significant step toward the development of quantum communications systems by successfully transferring quantum information from two different groups of atoms onto a single photon.
    http://gtresearchnews.gatech.e.....mtrans.htm

    It is also interesting to note that a Compact Disc crammed with information on it weighs exactly the same as a CD with no information on it whatsoever.

    Thus Gaz, here we are with three lines of evidence very strongly showing that infinite information is what a photon is actually made of, i.e. information is NOT an “emergent property” of any materialistic basis, as atheists dogmatically assert (at least if they are consistent)! We also have evidence in teleportation experiments, and even in the refutation of hidden variables, showing that information is completely transcendent of energy, as well as exercising direct dominion over energy. Thus Gaz, “implemented” transcendent information is the only known entity of physics that is able to explain the origination of energy in the Big Bang. You can quibble whether it is exactly “infinite” information per each photon qubit or not; I really don’t care, since the evidence is clear for it being transcendent information, in the first place, at the basis of the photon qubit.

  342. And while it is true that it is very difficult to compute how proteins fold, that is not a problem which cannot be solved algorithmically in finite time: indeed, a lot of people are trying to solve it, and making progress. It just requires very big computational resources, but certainly not infinite resources.

    I see there’s a new thread on this subject, so I’m going to watch it and other relevant threads. This one has become too long for me to load continuously.

    As for design being a tractable problem, I would agree that it is tractable to evolutionary algorithms. It will never, however, yield the kind of generalizations that typify most human engineering.

    The problem is complexity: the combination of sensitivity to initial conditions, and the sheer quantity of starting points and branches.

    Protein folding is just the beginning of the problem. You have to solve for all the emergent properties of combinations, followed by the problem of competition, ecology and environmental change.

  343. bornagain77,

    I fail to see how your 342 provides objective evidence for God. It isn’t mentioned anywhere within the articles you cite, which are articles involving quantum physics and the possible application thereof rather than God. It does not show that a “photon is made of infinite information”, it shows what was already known, i.e. that photons exist in a superposition of states which can have a range of values (theoretically infinite) until the wavefunction collapses when the photon is observed. Where God comes from in your analysis of these articles remains obscure.

    Gaz, as well, though not to completely undermine the author’s objections against Bennett, Bennett’s team happens to be the team who actually brought teleportation to reality:

    In 1993 an international group of six scientists, including IBM Fellow Charles H. Bennett, confirmed the intuitions of the majority of science fiction writers by showing that perfect teleportation is indeed possible in principle, but only if the original is destroyed.
    http://www.research.ibm.com/qu.....portation/

    Thus, me being a “proof is in the pudding” type guy: since the author (Armond Duwell) has in fact not physically teleported anything, but only objected from what can be termed a theoretical position of interpretation while working for a PhD in physics:

    http://www.uni-konstanz.de/ppm/People.htm

    Then I am much more apt to take the opinion of the man who brought teleportation to light.

    But in case you think Duwell’s approach to quantum mechanics is in any way friendly to materialism/atheism I suggest you read his profile closely:

    Armond Duwell: Armond Duwell graduated with a BS in physics from Georgia Tech in 1998 and recently received his Ph.D. in History and Philosophy of Science from the University of Pittsburgh. His dissertation was entitled: “How to teach an old dog new tricks: quantum information, quantum computation, and the philosophy of physics”. Armond’s current work focuses on quantum information theory. The physicist Chris Fuchs has made the radical claim that probabilities in quantum mechanics are actually best interpreted as degrees of belief. His claim is that somehow the formalism of quantum mechanics is as much about what we believe as subjects as it is about the objective physical world. If that is true, it is profound. Armond is currently interested in classifying which elements of quantum mechanics are subjective and which are objective, and providing and assessing arguments for various classification schemes,,,
    http://www.uni-konstanz.de/ppm/People.htm

    Well Gaz, if you can see no need for a transcendent cause to explain the origination of photons in the big bang, and you can’t see that information is shown to be exactly the entity that is sufficient to explain that transcendent cause, then there really is nothing I can do for you anymore.

  346. Gaz:

    “The heavens proclaim the glory of the Lord.”

    The world, that it exists, and its wonderful awesomeness, are “objective” evidence of God.

    Now, when an artist paints, he oftentimes leaves a signature mark on the canvas. But sometimes it’s hidden. And yet the experienced eye can spot the work. There isn’t some formation of clouds that is going to spell out: I’m God. I really exist.

    Now, what other kinds of evidence, objective evidence, exists for God? Miracles. To this day, in Lourdes, France, for example, there are documented cases of physical healing. Where I live, there was a case of a miraculous healing that led to the canonization of a 16th century priest about six years later. This all happened no more than 20 years ago. How do we know it was a miracle? Because he was in a hospital, and from the evening before to the morning after, he went from the point of death to sitting up in bed eating. None of the doctors could explain it.

    Then there is the Shroud of Turin. Scientists can’t explain its origins (and the carbon-14 testing done 30 years ago is now considered to be inaccurate because of known reweaving done to the shroud in the 15th century). Nor can science explain the image on Juan Diego’s tilma (a woolen body blanket)—which is now almost 400 years old (and formed in almost the very same fashion as that of the Shroud). There’s simply no known human means of forming these images. Science stands before them mute.

    But for me, personally, I figured out God existed when I was five. I was struck with the fact of my own existence. I understood that there was a time when I didn’t exist. And, basically, reasoned my way back to the First Cause. (with mom and dad’s help, of course) There I was, five years old, asking my parents how to pray.

    Then there are my own personal experiences of God found in prayer—some certainly exceptional.

    And, just to add to the grab-bag, there was an experience of “fore-seeing” what would happen minutes before it happened—and fore-seeing “live, and in color”, the real deal. This happened when I was twenty, while standing in a chemistry lab. Nothing ‘religious’ about it.
    But, how could the future be present before it is present, unless that future is present within some living object; and we call that living object, God.

    You want to “know” God like you “know” the computer screen you’re now looking at—objectively (you make an “object” of it; you treat it like an “object”). But God is only known through faith. To “know” God like your computer screen would be to enable you to manipulate God just like you can manipulate your computer screen. And, thus, God would no longer be God. We approach God through the “darkness” of faith. This is an open invitation always made to us by God.

    I once was in England, at Windsor Castle, looking through the sarcophagi in St. George’s Chapel. There was one with the name of Bonaparte. (It apparently was some cousin of Napoleon’s) I thought to myself, “how did this end up here?” On the side was an inscription that read something like this: “Lord, give me faith; for without faith, I cannot pray; and without prayer; I cannot come to know You; and without knowing You, I cannot be saved.” (From memory, so if someone can correct this, that would be great.)

    If you would have asked me, I would have said that a Bonaparte didn’t have a prayer of ever ending up within Windsor Castle, inside St. George’s Chapel.

  347. Gaz:

    Two more things:

    Pope John XXIII’s body lies in a glass sarcophagus in St. Peter’s in Rome. It’s incorrupt. It’s been 47 years since his death.

    St. Pio (Padre Pio) lies incorrupt in San Giovanni Rotondo, near Foggia, in southern Italy. He died in 1968. For 50 years he had the stigmata, the five wounds of Christ. He was put into the hospital so that doctors could determine if the stigmata was the result of hysteria. It wasn’t. He consumed only the sacred host and wine celebrated at Mass for the 30-40 days he was there. When they weighed him on the way out, they found that he weighed a pound more. The saint quipped, “I guess if I want to lose weight, I’ll have to start eating.”

    To deny God’s existence, you MUST explain these instances using only material forces. Good luck!

  348. bornagain77 (346),

    “Well Gaz, If you can see no need for a transcendent cause to explain the origination of photons in the big bang, and you can’t see that information is shown to be exactly the entity that is sufficient to explain that transcendent cause, there really is nothing I can do for you anymore.”

    Indeed. I think the origin of energy in the Big Bang is better explained by a quantum mechanical event, but of course I could be wrong. You’ve certainly given me no better explanation. And whilst I appreciate your earnest attempts, I don’t think you’ll do it by giving me evidence in a paper by Duwell and then backing away from Bennett when I point out Duwell doesn’t support you! Thanks anyway though, it’s been a blast as usual.

  349. PaV,

    Thanks for your replies, but as an example I’ll let the reported words of the Vatican speak for me about John XXIII:

    http://www.lewrockwell.com/orig/vennari2.html

    The other events you mention are also unremarkable. As for Lourdes miracles: people who are ill – even very seriously ill – do, sometimes, make spontaneous recoveries that doctors can’t explain. That happens even to atheists. Now, if amputees grew new limbs at Lourdes, then I’d look again at the evidence, but until then….

    On seeing the future: I had it too, after a two-month spell of depression when a loved relative died, 20 years ago, but I attributed it (correctly, according to a psychologist friend of mine) as a short circuit of my short term memory into long term whilst the mind was disturbed. It happened regularly, then it disappeared a while later.

    Sorry – still no objective evidence for God….

  350. Gaz-

    “That happens even to atheists.”

    I find it funny when people say this, as though it shouldn’t happen to atheists or something.

    Anyways, I was watching a thing on 20/20 or Dateline about some cancer recoveries that were being reported as miracles. Michael Shermer was on, and he said that if the chances of this happening were 1 in a million, that might seem a lot, but in a country of, say, 300 million it could easily happen 300 times. The question is, however, what are the actual chances. I don’t think Shermer would know the answer to that.
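    Shermer’s reported back-of-the-envelope reasoning is a simple expected-value calculation, which can be sketched as follows (the 1-in-a-million probability and 300-million population are the illustrative figures from the comment, not measured recovery rates):

    ```python
    # Expected number of rare events across a large population:
    # with per-person probability p and population size N, the expectation is p * N.
    p = 1 / 1_000_000    # illustrative "one in a million" chance of a spontaneous recovery
    N = 300_000_000      # rough U.S. population figure used in the comment

    expected_cases = p * N
    print(expected_cases)  # 300.0
    ```

    As the commenter notes, the calculation is only as good as the input probability, which is the unknown quantity here.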

  351. “On seeing the future: I had it too, after a two-month spell of depression when a loved relative died, 20 years ago, but I attributed it (correctly, according to a psychologist friend of mine) as a short circuit of my short term memory into long term whilst the mind was disturbed. It happened regularly, then it disappeared a while later.”

    This is a load of pseudo-science B.S. and you know it.

    Gaz, I believe I have found the fault in Duwell’s reasoning, and thus in your reasoning as well.

    he states:

    “If it were true that information actually had to be transferred, backwards teleportation should not be possible. Recall there seems to be no possible mechanism for information transfer in backwards teleportation. Backwards teleportation is possible, so it seems that the inference made by Bennett, et al. is not a good one to make.”

    Now, first and foremost, Duwell is NOT contesting the fact that a photon qubit is mathematically defined as infinite information, which is in fact my main point of interest to be made to you about a photon; he is just contesting that infinite information is actually teleported, because he believes backwards teleportation would be impossible if infinite information were actually teleported instantaneously. This is because he has no known “mechanism” by which he can explain such a time-defying process.

    Yet the fatal flaw in his reasoning for trying to rule out Bennett’s inference of instantaneous information transfer is found at the bottom of page 10 or 11, where he states:

    “On the assumption that information can’t be transferred back in time, this eliminates any possible channel for information transfer from the(sic) Alice’s photon to Bob’s.

    Excuse me Gaz, but information is shown to be completely transcendent of any material basis or space-time considerations by the refutation of the hidden-variable arguments, as well as by the work done in quantum erasure experiments, where “backwards in time” events are normal. Thus his starting presupposition that it would violate some “backwards in time” constraint is in fact flawed in its basic premise, i.e. he does not realize he is actually dealing with an entity that is completely transcendent of space-time and matter-energy in the first place, i.e. he cannot refute something that is not bound by space and time with a principle derived from the constraints of space and time!

    There is another flaw in his thinking, but what is pointed out is enough for now.

    i.e. You can go with the summation of Duwell’s college paper if you want, but ALL my money is riding on Bennett’s interpretation of his own work, since it is in fact the horse that brung us to teleportation in the first place.

  353. Gaz as well I couldn’t help noticing this statement of yours:

    “Indeed. I think the origin of energy in the Big Bang is better explained by a quantum mechanical event,”

    Well thanks for clearing that up LOL!!!

    Do you mind elaborating just a little on this “quantum mechanical event” that produced this most highly ordered occurrence of 1 in 10^10^123 for the initial entropy of the universe? In fact it is such precise order that it gives all indications, to most reasonable men, of being purposely intended by Almighty God; thus I am truly interested as to why you (on the off chance you are reasonable) would even entertain the doubt that it was unintended.

  354. Gaz:

    Don’t believe everything you read on the internet. Some things need looking up. Here’s a better account:

    http://www.messengersaintantho.....125IDRX=42

    As it turns out, earlier this year I was personally involved in the exhumation of a body. It was of a woman who had been dead and buried for no more than 5 years. California law requires that embalming or refrigeration take place. I am almost 100% certain that she was embalmed. When the fragile casket was opened, it revealed a once beautiful woman who had now lost more than half of her face—“Thou art dust, and unto dust you shall return.”

    Here’s the actual statement by the Vatican official Fr. Ciro Benedettini:

    “The body is well preserved which needs no comment or hypotheses concerning supernatural causes.”

    IOW, we all should be able to figure it out ourselves. Right, Gaz?

    BTW, are you aware of the case of St. Sharbel, whose dead and buried body was producing fluid? Is that sort of like having a missing arm grow back?

    I once saw the hand of a saint martyred in England in the early 1600′s. It was kept in a glass repository. Did they have embalming back in the 1600′s? I can go on and on like this, Gaz.

    OTOH, I can deny that I exist, if I want. But the objective fact is that I do exist. Gaz, you can deny anything you want. But is that really sensible?

  355. –Gaz to BornAgain77: “Indeed. I think the origin of energy in the Big Bang is better explained by a quantum mechanical event, but of course I could be wrong. You’ve certainly given me no better explanation.”

    Trying to argue on behalf of a first cause with someone who thinks that something can come from nothing is like trying to solve a crime with someone who thinks that a murder can occur without a murderer. Such a person is impervious to reason and cannot be persuaded by any argument.

    StephenB, the title of this new book by William Lane Craig reminded me of your “to the heart of the matter” debating style:

    “On Guard” by William Lane Craig – video interview
    http://www.youtube.com/watch?v=hcsT4ZAGnuk

    http://www.bgassociates.com/im.....TOUCHE.jpg

    The book is available here:

    On Guard: Defending Your Faith with Reason and Precision
    by William Lane Craig, Lee Strobel
    http://www.amazon.com/Guard-De.....34764885/2

  357.

    @kairosfocus and gpuccio

    The point I’m making here is that, when you take solipsism seriously, it implies the same empirical predictions as realism. If we were to apply only Occam’s razor, solipsism would be favored, as it wouldn’t need to posit the existence of an external reality. Again, the reason why I’m not a solipsist is that solipsism does not explain why the observations just happen to look like realism while, in actuality, being internal.

    As a critical rationalist, I understand how the problem of induction and the incompleteness of empiricism have changed the way we solve problems. That is, any theory can make predictions which can be “supported” by observations.

    For example, I could posit a theory which claims we can know we exist in a virtual reality simulation because the color of the sky is blue. Clearly, applying the scientific method would result in empirical observations that “support” this theory. However, I haven’t presented a good explanation as to why the sky being blue is a good indicator that we exist in a virtual reality simulation. As such, it’s a bad explanation.

    Furthermore, I could add a laundry list of other empirical predictions, such as the existence of leaves that are green and tree bark that is brown, that the earth’s surface consists of 30% land and 70% water, etc. In fact, I could suggest that everything we observe is what one would predict if we existed in a virtual reality simulation. As such, I’d have a massive amount of empirical observations that “supports” my theory. But, yet again, we can discard this theory because it provides no underlying premise which explains why the specific things we observe indicates we exist in a virtual reality simulation. Furthermore, my theory becomes a convoluted elaboration of reality because it negates the explanation of realism, despite providing no explanation to replace it.

    This is despite the fact that an inference to human design suggests some vague intelligent agent could design a virtual reality simulator and choose to create an elaborate environment with interactions and incredible detail, etc. Clearly this is a logical possibility, but does this particular theory I’ve just presented above justify reaching this conclusion? No, it doesn’t.

    It may be that we really do exist in a virtual reality simulation designed by an intelligent agent. The ultimate conclusion the theory reached may in fact be true. However, this particular theory I’ve just presented can be discarded because the specific means by which it reached this conclusion is via a bad explanation. Note that I do not have to claim to know we do not exist in a virtual reality simulation to say we are justified in discarding this particular theory.

    The key point here is that a theory may be wrong for all the right reasons or right for all the wrong reasons. We should not support a particular theory just because our intuition tells us the conclusion it reaches is true since there really is such a thing as a bad explanation and there are clear ways to identify them. Only if a theory with the right reasons comes along, should we accept it.

    However, it’s very likely you think ID has reached the correct conclusion for reasons you think are “right” but are intentionally absent from the theory ID officially presents. This is because ID wants to be accepted as science and the very definition of the designer you presuppose prevents his inclusion in the theory.

    Claiming science is biased for rejecting your intentionally incomplete theory is disingenuous.

  358.

    @kairosfocus and gpuccio

    Next, you claim that empirical observations support a theory that a complex combination of chance, natural laws, environment, geographical features, etc. cannot explain the biological complexity we observe, and therefore that an intelligent agent is the best explanation. However, the problem here is threefold.

    First, it implies that darwinism claims to completely understand exactly how the complex combination of chance, natural laws, environment, geographical features, etc. translates into exactly what we observe, so that you can clearly rule out natural processes as a cause. However, no one in biology actually makes this claim. Nor is this controversial.

    For example, do you deny that what we observe could be interpreted as the result of an undirected cause so complex we cannot fully understand it? Note: that you personally do not make such an interpretation does not mean others could not reasonably reach such an interpretation.

    Second, it makes the assumption that what we specifically observe is what this process was attempting to achieve, and calculates the “odds” as such. This might work for a cause that exhibits intent, but is inappropriate for an undirected process.

    Third, AFAIK most variants of ID do not claim that the entirety of the biological complexity we observe was formed all at once, or arrived temporally via slight variations of existing forms that materialized out of thin air. Instead it suggests that the process is virtually identical to neo-darwinism, with the exception of an intelligent designer specifying the exact time, order, locations and degree of change in the genetic structures of parent organisms, which eventually produced the specific biological complexity we observe.

    Even if the claim of materialization out of thin air is made, the resulting empirical observations would be virtually the same. While we have yet to discover exactly how this occurs, both Darwinism and ID agree that DNA, RNA and the entirety of genetic materials in an organism are the blueprint that actually defines the complexity and features we observe. We know this because each of us starts out as a single cell and matures into a complex organism, rather than appearing out of thin air.

    This third point is precisely what I mean by ID being a convoluted elaboration of Darwinism.

    Specifically, darwinism provides a cause for biological complexity that is known to exist and is known to strongly influence the genetic structure of living organisms: the complex combination of chance, natural laws, environment, geographical features, etc. Furthermore, it presents this same cause as the underlying principle which explains the specific biological complexity we observe. That is, the specific rates, features and structure of the features we observe are a direct result of this particular cause.

    On the other hand, ID provides an abstract possible cause for biological complexity: an inference from design by human beings. However, it’s unclear that one or more designers with the necessary means and opportunity to design what we observe existed 3.5+ billion years ago. Nor is it clear that designers would have continued to exist until at least 3-6 million years ago to cause the final step that resulted in the complexity of life we observe today. Furthermore, no underlying principle is provided to explain the specific biological complexity we observe. In other words, that’s just what the designer must have wanted, must have been capable of, etc. If the designer could anticipate the 10^300 possible folding geometries necessary to create one functional protein, causing the retina of an eye to face one direction rather than another would appear to be child’s play. Therefore, the human eye has a backward retina because that’s what the designer just so happened to have wanted.

    But why would any designer make such a choice? Again, no underlying principle that explains this is provided by ID.

    Note that, in the case of ID, the designer’s abilities and capabilities are inferred from the specific biological complexity we observe. This is the opposite of Darwinism, in which the specific observed rates, features and structures are explained by the underlying principle of chance, natural laws, environment, geographical features, etc. This is similar to the contrast between realism and solipsism.

    Finally, in replacing a concrete cause with an abstract cause, ID not only negates the underlying premise Darwinism uses to explain the specific biological complexity we observe, but fails to present its own underlying principle to replace it.

    In other words, what the theory of ID attempts to present is an “explanation” of why the biological complexity we observe is caused by a designer, rather than by Darwinism. But it never actually gets around to explaining the specific biological complexity we observe. As such, it merely explains away Darwinism. This is for reasons that are transparent and obvious.

    It’s possible that all the biological complexity we observe really could have been caused by an intelligent designer, and the conclusion reached by ID may in fact be true. However, we can discard the particular theory currently presented by ID proponents because it reaches this conclusion via a bad explanation. Note that I do not have to claim to know ID is false to make this claim.

  359. veilsofmaya:

    First of all, thanks for taking the time again to clarify your thoughts.

    I could go on commenting on many points, but perhaps you will agree that this thread is now too big and old. And probably I have already clarified my position in previous posts.

    So, I suggest that for the moment we stop here. I am sure we can deepen some parts of this interesting discussion in some new thread. I do believe that you are a good interlocutor, so I will be happy to take again some discussion with you as soon as possible (but not again on solipsism, I hope :) ).

    Best.

  360. TRRRUMPET!

  361. Phaedros (351),

    “Michael Shermer was on and he said that if the chances of this happening were 1 in a million that might seem a lot but in a country of say 300 million it could easily happen 300 times. The question is, however, what are the actual chances. I don’t think Shermer would know the answer to that.”

    Nor, I suspect, do those who proclaim them “miracles”. That’s the problem: they should not be considered “miracles” until such analyses are done.

  362. Phaedros (352),

    “This is a load of pseudo-science B.S. and you know it.”

    Charming turn of phrase. So – it’s “pseudo-science B.S.” to come up with an explanation based on the known workings of the human brain, but not pseudo-science B.S. if it has a supernatural explanation (i.e. God helped me see the future)? Do you want to think about that one again?

  363. PaV (355),

    “Gaz:

    Don’t believe everything you read on the internet. Some things need looking up. Here’s a better account:

    http://www.messengersaintantho…..125IDRX=42″

    I don’t see how it helps your argument to say I shouldn’t believe everything I read on the internet, and for you to then give me another page off the internet! Anyway, reports of miracles ought to be treated with the utmost scepticism, as natural explanations are commonplace. As far as John XXIII goes, I’ll take the Reuters report as being more credible than your website any day.

    “As it turns out, earlier this year I was personally involved in the exhumation of a body. It was of a woman who had been dead and buried for no more than 5 years. California law requires that embalming take place, or refrigeration. I am almost 100% certain that she was embalmed. When the fragile casket was opened, it revealed a once beautiful woman who now had lost more than half of her face—”Thou art dust, and unto dust you shall return.””

    So you’re not 100% sure. And even if she was embalmed, just how well was it done?

    “Here’s the actual statement by the Vatican official Fr. Ciro Benedettini:

    “The body is well preserved which needs no comment or hypotheses concerning supernatural causes.””

    And I’ll take the Reuters report of medical embalming (and also the mention on wiki of a triple-sealed casket).

    “IOW, we all should be able to figure it out ourselves. Right, Gaz?”

    That’s what I did. Not sure what YOU did.

    “BTW, are you aware of the case of St. Sharbel, whose body—dead and buried body—was producing fluid. Is that sort of like having a missing arm grow back?”

    Er, no it isn’t. One thing to expel fluid, another thing to grow a new limb. But if expelling fluid is so important to you, look at the well-documented miracle of Ganesh the Hindu elephant-god. Will you now convert to Hinduism?

    “I once saw the hand of saint martyred in England in the early 1600’s. It was kept in a glass repository. Did they have embalming back in the 1600’s?”

    Yes, they have had it since at least ancient Egyptian times – i.e. pre-Christianity. Perhaps we should worship Isis and Osiris?

    “I can go on and on like this Gaz.”

    I know you can. But it isn’t to any great effect.

    “OTOH, I can deny that I exist, if I want. But the objective fact is that I do exist. Gaz, you can deny anything you want. But is that really sensible?”

    Not really, although it didn’t stop Descartes. It’s not a matter of denying anything at all, it’s a matter of reasoned scepticism and looking at the evidence. On any objective basis, the miracles you’ve mentioned – and for that matter every other one I’ve come across – have a more rational explanation than “God did it”.

  364. StephenB (356),

    “Trying to argue on behalf of a first cause with someone who thinks that something can come from nothing is like trying to solve a crime with someone who thinks that a murder can occur without a murderer.”

    Nonsense. Murders are events in the classical world. Events in the quantum world require different scientific explanations, as has been made clear to you countless times. The trouble with you is that you think you can formulate all-encompassing laws from classical-world events and use them in the quantum and cosmological realms. You can’t.

    “Such a person is impervious to reason and cannot be persuaded by any argument.”

    No, it’s the people who hold on to faith blindly that are impervious to reason and unpersuadable. Bizarrely, it’s considered a virtue in some faiths.