Home » 'Junk DNA', Intelligent Design » Jonathan Wells on Darwinism, Science, and Junk DNA

Jonathan Wells on Darwinism, Science, and Junk DNA

Jonathan Wells

On November 5, I posted a response to people who falsely claim that I set out to oppose Darwinism on orders from Reverend Sun Myung Moon. Since then, many comments have been posted—some of them critical of my book, The Myth of Junk DNA. Unfortunately, other commitments prevent me from responding to every detail (so many critics, so little time!). So I have selected some representative comments posted by two people using the pseudonyms “Gregory” and “paulmc.”

First, “Gregory” asked how many biologists I think are “Darwinists.” In my original post, I wrote:

By “Darwinism,” I mean the claim that all living things are descended from one or a few common ancestors, modified solely by unguided natural processes such as variation and selection. For the sake of brevity, I use the term here also to include Neo-Darwinism, which attributes new variations to genetic mutations.

By “Darwinists,” then, I mean people who subscribe to that view. Having worked in close proximity with biologists for over two decades, I can confidently say that most of them—at least in the U.S.—are Darwinists in this sense.

“Gregory” also wrote that “without ‘doing science,’ Jonathan Wells personally concluded ‘evident design’ in ‘the mountains of Mendocino county.’ Thus, the argument that ‘intelligent design is a purely scientific pursuit’ is obviously untrue.” I’m not sure what “Gregory” means here by a “purely scientific pursuit.” Intelligent design (ID) holds that we can infer from evidence in nature that some features of the world, including some features of living things, are better explained by an intelligent cause than by unguided natural processes such as mutation and selection. Unlike creationism, ID does not start with the Bible or religious doctrines.

So if “science” means making inferences from evidence in nature—as opposed to inventing naturalistic explanations for everything we see (as materialistic philosophy would have us do)—then ID is science.

Second, “paulmc” wrote that “there are a number of strong lines of evidence that suggest junk DNA comprises a majority of the human genome.” The lines of evidence cited by “paulmc” included (1) mutational (genetic) load, (2) lack of sequence conservation, and (3) a report that “putative junk” has been removed from mice “with no observable effects.” In addition, (4) “paulmc” wrote that “there is an active other side to the debate” about pervasive transcription. I’ll address these four points in order.

Before I start, however, I’d like to say that I’m not particularly interested in debates over what percentage of our genome is currently known to be functional. Whatever the current percentage might be, it is increasing every week as new discoveries are reported—and such discoveries will probably continue into the indefinite future. So people who claim that most of our DNA is junk, and that this is evidence for unguided evolution and evidence against ID, are making a “Darwin of the gaps” argument that faces the inevitable prospect of having to retreat in the face of new discoveries.

Now, to the points raised by “paulmc”:

(1) Mutational Load. In 1972, biologist Susumu Ohno (one of the first to use the term “junk DNA”) estimated that humans and mice have a 1 in 100,000 chance per generation of suffering a harmful mutation. Biologists had already discovered that only about 2% of our DNA codes for proteins; Ohno suggested that if the percentage were any higher we would accumulate an “unbearably heavy genetic load” from harmful mutations in our protein-coding DNA. His reasoning provided a theoretical justification for the claim that the vast majority of our genome is functionless junk—what Ohno called “the remains of nature’s experiments which failed”—and that this junk bears most of our mutational load.
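Ohno’s load argument is, at bottom, simple arithmetic: multiply the per-base mutation rate by the genome size and by the fraction of the genome that is functional. Here is a rough back-of-envelope sketch of that reasoning; the rate and genome-size figures below are modern approximations supplied for illustration, not Ohno’s own numbers.

```python
# Illustrative sketch of Ohno-style mutational-load arithmetic.
# Both parameter values are rough modern estimates, used only
# to show the shape of the argument.

GENOME_SIZE = 3.2e9      # base pairs in the human genome (approx.)
MUTATION_RATE = 1.2e-8   # new mutations per bp per generation (approx.)

def new_mutations_in_functional_dna(functional_fraction):
    """Expected new mutations per offspring that land in functional DNA."""
    total_new = GENOME_SIZE * MUTATION_RATE  # roughly 38 new mutations per genome
    return total_new * functional_fraction

# If only ~2% of the genome is functional, few new mutations hit it;
# if most of it is functional, dozens do every generation.
print(new_mutations_in_functional_dna(0.02))  # ~0.8
print(new_mutations_in_functional_dna(0.80))  # ~31
```

The argument's force depends entirely on the assumed parameter values, which is exactly the point at issue in the quoted 2010 papers.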

According to “paulmc”, this is the first of “a number of strong lines of evidence that suggest junk DNA comprises a majority of the human genome.” But Ohno’s claim was a theoretical one, based on various assumptions about how often spontaneous mutations occur and how they affect the genome.

As of last year, however, the accurate determination of mutation rates was still controversial. According to a 2010 paper:

The rate of spontaneous mutation in natural populations is a fundamental parameter for many evolutionary phenomena. Because the rate of mutation is generally low, most of what is currently known about mutation has been obtained through indirect, complex and imprecise methodological approaches.

Furthermore, genomes are more complex and integrated than Ohno realized, so the effects of mutations are not as straightforward as he thought. As another 2010 paper put it,

Recent studies in D. melanogaster have revealed unexpectedly complex genetic architectures of many quantitative traits, with large numbers of pleiotropic genes and alleles with sex-, environment- and genetic background-specific effects.

In other words, the first line of evidence cited by “paulmc” is not evidence at all, but a 40-year-old theoretical prediction based on questionable assumptions. The proper way to reason scientifically is not “Ohno predicted theoretically that the vast majority of our DNA is junk, therefore it is,” but “If much of our non-protein-coding DNA turns out to be functional, then Ohno’s theoretical prediction was wrong.”

(2) Sequence Conservation. According to evolutionary theory, if two lineages diverge from a common ancestor that possesses regions of non-protein-coding DNA, and those regions are non-functional, then they will accumulate random mutations that are not weeded out by natural selection. Many generations later, the corresponding non-protein coding regions in the two descendant lineages will be very different. On the other hand, if the original non-protein-coding DNA was functional, then natural selection will tend to weed out mutations affecting that function. Evolution of the functional regions will be “constrained,” and many generations later the sequences in the two descendant lineages will still be similar, or “conserved.”

As “paulmc” pointed out, however, many regions of non-protein-coding DNA appear to “evolve without evidence of this constraint”; their sequences are not conserved. According to “paulmc,” this “implies that changes to these sequences do not affect fitness… we expect that for them to be functional they need some degree of evolutionary constraint,” and the absence of such constraint points to their “being putatively junk.”

Not so. Although sequence conservation in divergent organisms suggests function, the absence of sequence conservation does not indicate lack of function. Indeed, according to modern Darwinian theory, species diverge because of mutational changes in their functional DNA. Obviously, if such DNA were constrained, then evolution could not occur.

In 2006 and 2007, two teams of scientists found that certain non-protein-coding regions that are highly conserved in vertebrates (suggesting function) are dramatically unconserved between humans and chimps (suggesting… rapid evolution!). More specifically, one of the teams showed that one unconserved region contains an RNA-coding segment involved in human brain development.

Furthermore, the analysis by “paulmc” assumes that the only thing that matters in non-protein-coding DNA is its nucleotide sequence. This assumption is unwarranted. As I pointed out in Chapter Seven of my book, non-protein-coding DNA can function in ways that are largely independent of its precise nucleotide sequence. So absence of sequence conservation does not constitute evidence against functionality.

(3) Mice without “junk” DNA. In 2004, Edward Rubin and a team of scientists at Lawrence Berkeley Laboratory in California reported that they had engineered mice missing over a million base pairs of non-protein-coding (“junk”) DNA—about 1% of the mouse genome—and that they could “see no effect in them.”

But molecular biologist Barbara Knowles (who reported the same month that other regions of non-protein-coding mouse DNA were functional) cautioned that the Lawrence Berkeley study didn’t prove that non-protein-coding DNA has no function. “Those mice were alive, that’s what we know about them,” she said. “We don’t know if they have abnormalities that we don’t test for.” And University of California biomolecular engineer David Haussler said that the deleted non-protein-coding DNA could have effects that the study missed. “Survival in the laboratory for a generation or two is not the same as successful competition in the wild for millions of years,” he argued.

In 2010, Rubin was part of another team of scientists that engineered mice missing a 58,000-base stretch of so-called “junk” DNA. The team found that the DNA-deficient mice appeared normal until they (along with a control group of normal mice) were fed a high-fat, high-cholesterol diet for 20 weeks. By the end of the study, a substantially higher proportion of the DNA-deficient mice had died from heart disease. Clearly, removing so-called “junk” DNA can have effects that appear only later or under other circumstances.

(4) Pervasive transcription. After 2000, the results of genome-sequencing projects suggested that much of the mammalian genome—including much of the 98% that does not code for proteins—is transcribed into RNA. Scientists working on one project reported in 2007 that preliminary data provided “convincing evidence that the genome is pervasively transcribed, such that the majority of its bases can be found in primary transcripts, including non-protein-coding transcripts.”

Since an organism struggling to survive would presumably not waste its resources producing large amounts of useless RNA, this widespread transcription suggested to many biologists that much non-protein-coding DNA is probably functional. In 2010, four University of Toronto researchers published an article concluding that “the genome is not as pervasively transcribed as previously reported.” Yet the Toronto researchers had biased their sample by eliminating repetitive sequences with a software program called RepeatMasker, the official description of which states: “On average, almost 50% of a human genomic DNA sequence currently will be masked by the program.” In the fraction that remained, the Toronto researchers based their results “primarily on analysis of PolyA+ enriched RNA”—sequences that have a long tail containing many adenines. Yet molecular biologists had already reported in 2005 that RNA transcripts lacking the long tail are twice as abundant in humans as PolyA+ transcripts.

In other words, the Toronto researchers not only excluded half of the human genome with RepeatMasker, but they also ignored two thirds of the RNA in the remaining half. It is no wonder that they found fewer transcripts than had been found by the hundreds of other scientists studying the human genome. The Toronto group’s results were disputed in 2010 by an international team of eleven scientists, and the group’s flawed methodology was sharply criticized in 2011 by another international team of seventeen scientists.
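The arithmetic behind the “half the genome, two thirds of the RNA” claim is straightforward. A quick sketch using the figures quoted above (both are approximations taken from the sources cited in the text):

```python
# Back-of-envelope check of the fractions cited in the text.

masked_by_repeatmasker = 0.50  # fraction of genome RepeatMasker excludes (approx.)
polya_minus_ratio = 2.0        # PolyA- transcripts reported ~2x as abundant as PolyA+

genome_examined = 1.0 - masked_by_repeatmasker  # fraction of genome surveyed
rna_examined = 1.0 / (1.0 + polya_minus_ratio)  # PolyA+ share of all transcripts

print(genome_examined)  # 0.5 -> half the genome
print(rna_examined)     # ~0.333 -> one third of the RNA in that half
```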

So “paulmc” was technically but trivially correct in writing that there are two sides to the debate over pervasive transcription. There are also at least two sides to the larger debate over the functionality of non-protein-coding DNA. But I leave it to open-minded readers of The Myth of Junk DNA to decide whether “paulmc” was correct in claiming that “the science at the moment really does fall on one side of this: large amounts of putative junk exist in the human genome.”

Oh, one last thing: “paulmc” referred to an online review of my book by University of Toronto professor Larry Moran—a review that “paulmc” called both extensive and thorough. Well, saturation bombing is extensive and thorough, too. Although “paulmc” admitted to not having read more than the Preface to The Myth of Junk DNA, I have read Mr. Moran’s review, which is so driven by confused thinking and malicious misrepresentations of my work—not to mention personal insults—that addressing it would be like trying to reason with a lynch mob.



63 Responses to Jonathan Wells on Darwinism, Science, and Junk DNA

  1. Thanks Dr. Wells for clearing up ‘the smoke and mirrors’.

    notes:

    Francis Collins, Darwin of the Gaps, and the Fallacy Of Junk DNA – video
    http://www.evolutionnews.org/2.....40361.html

    further question: How does neo-Darwinism explain chemically impossible things in DNA?

    Does DNA Have Telepathic Properties?-A Galaxy Insight
    Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to. Explanation: None, at least not yet.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible.
    http://www.dailygalaxy.com/my_.....ave-t.html

    Quantum Dots Spotlight DNA-Repair Proteins in Motion – March 2010
    Excerpt: “How this system works is an important unanswered question in this field,” he said. “It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It’s akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour.” Dr. Bennett Van Houten – of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time.,, These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
    http://www.sciencedaily.com/re.....123522.htm

    Quantum Action confirmed in DNA by direct empirical research;

    DNA Can Discern Between Two Quantum States, Research Shows – June 2011
    Excerpt: — DNA — can discern between quantum states known as spin. – The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team’s results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property.
    http://www.sciencedaily.com/re.....104014.htm

    verse and music:

    Colossians 1:17
    He is before all things, and in him all things hold together.

    ROYAL TAILOR – HOLD ME TOGETHER – music video
    http://www.youtube.com/watch?v=vbpJ2FeeJgw

  2. I guess it would be helpful to point out just how ‘spooky’, to use Einstein’s infamous word, it is to find quantum entanglement/information pervasively within DNA;

    Light and Quantum Entanglement Reflect Some Characteristics Of God – video
    http://www.metacafe.com/watch/4102182/

  3. I have read Mr. Moran’s review, which is so driven by confused thinking and malicious misrepresentations of my work—not to mention personal insults—that addressing it would be like trying to reason with a lynch mob.

    Moran is particularly vicious concerning challenges presented by ID proponents. He’s a disturbed individual. (That’s not a personal attack, just an empirical observation, and I hope he gets over it somehow.)

    Another factor in biology that should be considered is redundancy and backup systems, which are standard fare in human engineering. Redundant/backup systems ensure survival if one or more of the primary systems is disabled or compromised. In aviation, fly-by-wire systems (in which the pilot does not directly influence the aircraft’s control surfaces, but provides input to computer systems that execute the pilot’s commands) provide three or more redundant computers that process the pilot’s commands and vote about the outcome. If one computer disagrees, the majority wins.
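The voting arrangement described above can be sketched in a few lines. This is a minimal illustration of the majority-vote idea only, not actual avionics code:

```python
from collections import Counter

def vote(outputs):
    """Majority vote among redundant channel outputs, as in a triplex
    fly-by-wire arrangement: the value agreed on by a majority of the
    computers wins, so a single faulty channel is simply outvoted."""
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(outputs) // 2:
        return value
    raise RuntimeError("no majority -- command cannot be resolved")

print(vote([12.5, 12.5, 12.5]))  # 12.5: all channels agree
print(vote([12.5, 12.5, 99.0]))  # 12.5: the dissenting channel is outvoted
```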

    In the final analysis three things are clear:

    1) Biological systems are engineered. At every turn they provide more evidence of this because of their functionality and fault tolerance.
    2) Darwinists are chasing a rainbow in their attempt to explain away engineering with a process (randomness) that is the antithesis of what is required to produce the result in question.
    3) Darwinists have a philosophical precommitment that will not allow them to follow the evidence where it leads.

  4. Well, I put this in another thread that has gone a bit stale, so am repeating part of my comment here, with apologies.

    ———-

    For some reason the Dawkins and the Millers of the world have shown a great deal of interest in the idea of junk DNA supporting evolution, and just as importantly for their worldviews, contradicting the concept of intelligent design. Junk DNA is, in essence, just another example of the invalid “bad design” line of arguments. The reason I say their feet need to be held to the fire is because they have clearly gone on record saying that a large amount of non-functioning DNA is evidence for evolution and against design. Now that the evidence is starting to lean the other direction I am not at all interested in letting them off the hook easily with any backpedaling (although, astoundingly, some of the folks I cited haven’t even started backpedaling, as they seem to be oblivious about recent research). It is absolutely reasonable to point out that their expectation, arising from their viewpoint in favor of evolution and against design, is starting to be falsified and to insist that they admit as much.

    The bottom line on so-called “junk DNA” is this.

1. There is very little logical reason to assume that there is a lot of junk DNA. A possible exception is some excision experiments, but we have to be extremely careful not to misinterpret those results. Specifically, we cannot say that a particular part of DNA has no function based on an excision experiment. At most, we can say that, given the short amount of time the observation was conducted, it appears that removal of a particular part of DNA did not have an adverse impact on the organism in its current state. The reason we have to limit this inference is that we know (i) some processes occur only at specific stages of an organism’s life, and (ii) there is some redundancy built in. Think of it this way: If I am launching a Saturn V rocket to put an Apollo spacecraft in orbit and one of the five engines cuts out shortly after launch, the fact that they were still able to attain orbit does *not* mean that the cut engine did not have function, and I cannot validly claim that it didn’t have an important function just because that function wasn’t realized.

    Looking at DNA and proclaiming that we don’t know what most of it does, and therefore, it probably doesn’t have function, is intellectually equivalent to someone opening up a supercomputer and proclaiming that most of the parts must not have function because they don’t know what the parts do. It just doesn’t follow, and there is no reason to take that position.

    2. In contrast, there are very good reasons to think that the great majority, though perhaps not all, of junk DNA in fact has function. I will give just three off the top of my head.

A. First of all, the non-functioning DNA argument (just like vestigial organs) has an abysmal track record, so we should be very cautious about jumping on that bandwagon. Introns are now known to be critical to alternative splicing. Pseudogenes help regulate DNA. LINEs and SINEs have known functions, as do certain transposons. And on and on. Adopting non-function as an initial assumption puts us on the wrong side of history.

    B. More importantly, there are numerous known biological functions, the programming for which has not yet been discovered or fully elucidated. Just looking at any basic biology text and drilling down to the details reveals numerous functions that must be in place for almost any meaningful biological activity. If we spent a day, we could come up with a list of thousands of known functions, every single one of which must, by definition, have some kind of instantiation in DNA or elsewhere in the cell. As a result, we know, by definition, that many thousands of functional programming elements remain to be discovered; indeed vastly more functional elements remain to be discovered than the number that have been discovered to date.

C. From a basic engineering standpoint the idea of large amounts of useless DNA is problematic. Regardless of whether one thinks our current DNA came about by design or by random chemical processes, everyone understands that DNA, and the related cellular mechanisms, carry out an absolutely stunning symphony of carefully-orchestrated processes. It is also known that most DNA, including so-called “junk DNA,” is transcribed at one level or another. So here is what the junk DNA proponents would have us believe: a sophisticated storage, retrieval, transcription, translation, error-checking, shepherding, building, functioning orchestra of thousands of concurrent processes just happens to take place with incredible fidelity from hour-to-hour, day-to-day, indeed generation-to-generation, while being almost completely overwhelmed in terms of volume by processes of storage, retrieval, transcription and translation of complete nonsense, which, at the very least uses precious resources, and at worst, has the ability to completely muck up the chemical orchestra and bring the whole system to a grinding halt.

    Think of it this way: Although our modern computer operating systems can occasionally handle a minor error here or there without crashing, in most cases a significant error will bring the system to its knees and require a reboot, despite huge efforts expended to get the operating system right in the first place and to prevent viruses and the like. Yet we are expected to believe that the human operating system, which is vastly more complicated and which operates much more robustly than any computer operating system, can swim in a literal sea of nonsense and garbage. It would be like having a computer filled to the brim with viruses, malware, nonsense strings and the like that are regularly being accessed and run, without being any worse off for it, indeed, without any noticeable or identifiable delays or hiccups. Such an idea is simply absurd, and might be best described as the “stupid man’s view of engineering.” I doubt you hold to this view, but it certainly applies to Dawkins and similar folks who would have us believe that the “good” DNA just by happenstance works fine, thank you very much, while swimming in a sea of “junk.”

    ——-

    There may be some DNA that has no function. Certainly even the most carefully designed machine can degrade or break down over time. But outside of materialistic philosophy there is no reason to think that much or most DNA is junk, while there is every reason to think that much or most DNA is functional. I don’t know whether we can put a precise number on it, but I would venture that more than 90% of DNA will eventually be shown to have function.

  5. I’m currently writing a software utility for others to use at work, and trapping and handling errors is a real challenge. One tries to think of everything that might go wrong internally, as well as everything a user might do to muck things up, but the list is almost endless, as combinatorics produces an exponential explosion of the number of possible pathways through a program as its complexity increases.

    One tests, finds bugs (often fatal ones), and fixes the bugs until a program seems to be reasonably robust. The notion that random errors filtered by natural selection can mimic this process seems absurd on its face.

  6. @Gil,

If you are a software developer (I am a software developer), then you will have no trouble dispelling your incredulity on this, just by familiarizing yourself with evolutionary algorithms. I no longer work with EA professionally, but did for many years, primarily in large-scale network traffic and intrusion-detection applications, as well as some financial pattern analysis; EA can be profitably deployed in such areas to explore design spaces that human engineering can’t or won’t reach.

One of the basic functions of EA is deliberately introducing “random errors” in the generation of new cohorts, and seeing how they do. This is usually implemented just with calls (in C++) to srand(now) and rand(). I point that out just because it demonstrates (to a developer) just how “error-driven” the process is. We take code and values and just randomly mutate them to see what happens. Over and over and over.

    With a filter (a fitness function), the vast majority of ‘errors’ do not help, but because of that filter, they are also not preserved. The rare few errors that do lead to improvements are preserved, and accumulate.

You are thinking about software in a context where failures by the millions and millions are not available or allowed. But in computing contexts where they are allowed (and this is a step, isomorphically, toward the dynamics of biology in the physical world), generating millions of steps backward to sift through in finding the odd step forward, which gets preserved and which accumulates upon the other steps forward, does produce interesting, useful, tangible, and sometimes highly profitable results.

    The way you are thinking about software, and writing programs in the traditional way, is a BARRIER to understanding biology and evolution, not an aid. We put in try/catch blocks and fastidiously avoid random or unplanned state changes in our finite automata precisely because we CANNOT operate like biology does, and because we have what biology does not — a governing design agent that works to optimize goals with minimal resources and cycles.

    On an evolutionary model, both in the computing/EA sense and in the biological sense, there is no “pre-testing”. The tests occur in the environment, and the process traverses (with stochastic inputs and fitness filters that preserve and accumulate forward steps) a search landscape. Nature also tests, and also fixes bugs, and repeats the process, driving the program (organism) to increasing robustness.

It’s only absurd if you insist on an anthropomorphic paradigm for the process. As soon as you let go of the need for a human (or an intelligence) to be driving all parts of the system, and let chance + law + time + resources work to harness the creative capabilities these factors can exhibit when working in concert, your sense of absurdity will be revealed as misplaced, unwarranted.
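The mutate-and-filter loop eigenstate describes can be seen in miniature in a toy example. This is entirely an illustration of my own, using a trivial one-dimensional fitness landscape (a single peak at x = 42) rather than any real EA application:

```python
import random

random.seed(0)

def fitness(x):
    """Toy fitness landscape: higher is better, with a single peak at x = 42."""
    return -abs(x - 42.0)

def evolve(generations=500, pop_size=20, mut_sigma=1.0):
    """Mutate-and-filter loop: random variation generates mostly-worse
    mutants; truncation selection (the fitness filter) discards them
    and preserves the rare improvements, which then accumulate."""
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Random variation: add Gaussian noise to every individual.
        mutants = [x + random.gauss(0, mut_sigma) for x in population]
        # Selection: keep only the fitter half of parents plus mutants.
        pool = sorted(population + mutants, key=fitness, reverse=True)
        population = pool[:pop_size]
    return max(population, key=fitness)

best = evolve()
print(round(best, 1))  # converges near 42.0
```

Whether such a filtered search models biology, or instead smuggles in design through its fitness function, is of course exactly the dispute in the replies that follow.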

  7. eigenstate:

    LIFE’S CONSERVATION LAW – William Dembski – Robert Marks – Pg. 13
    Excerpt: Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.
    http://evoinfo.org/publication.....ation-law/

    Roberts Marks on Avida and ev – video – 6:00 minute mark
    http://www.youtube.com/watch?v=Uc6Ktq0SEBo

    Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism – Dembski – Marks – Dec. 2009
    Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it.
    http://evoinfo.org/publication.....gic-avida/

    New paper using the Avida “evolution” software shows it doesn’t evolve. – May 2011
    http://www.uncommondescent.com.....are-shows/

    The effects of low-impact mutations in digital organisms (Testing Avida using realistic biological parameters) – Chase W Nelson and John C Sanford
    http://www.tbiomed.com/content/8/1/9

    The Problem of Information for the Theory of Evolution – debunking Schneider’s ev computer simulation
    Excerpt: In several papers genetic binding sites were analyzed using a Shannon information theory approach. It was recently claimed that these regulatory sequences could increase information content through evolutionary processes starting from a random DNA sequence, for which a computer simulation was offered as evidence. However, incorporating neglected cellular realities and using biologically realistic parameter values invalidate this claim. The net effect over time of random mutations spread throughout genomes is an increase in randomness per gene and decreased functional optimality.
    http://www.trueorigin.org/schneider.asp

    The Capabilities of Chaos and Complexity – David L. Abel
    Excerpt: “To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory.”
    http://www.mdpi.com/1422-0067/10/1/247/pdf

    Constraints vs. Controls – Abel – 2010
    Excerpt: Classic examples of the above confusion are found in the faulty-inference conclusions drawn from many so-called “directed evolution,” “evolutionary algorithm,” and computer-programmed “computational evolutionary” experimentation. All of this research is a form of artificial selection, not natural selection. Choice for potential function at decision nodes, prior to the realization of that function, is always artificial, never natural.
    http://www.bentham.org/open/to.....4TOCSJ.pdf

  8. further note:

    The Genius Behind the Ingenious – Evolutionary Computing
    Excerpt: The field dedicated to this undertaking is known as evolutionary computing, and the results are not altogether encouraging for evolutionary biology.
    http://biologicinstitute.org/2.....ingenious/

    Signature In The Cell – Review
    Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed to do that. What would be surprising is if the program didn’t converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer’s skill-the information input. There are no mysterious outputs.
    Software Engineer – quoted to Stephen Meyer
    http://www.scribd.com/full/293.....18zn6dtju0

    Here is a brutally honest admission that neo-Darwinism has no mathematical foundation from a job description from Oxford university, seeking a mathematician to ‘fix’ the ‘mathematical problems’ of neo-Darwinism:

    Oxford University Seeks Mathemagician — May 5th, 2011 by Douglas Axe
    Excerpt: Grand theories in physics are usually expressed in mathematics. Newton’s mechanics and Einstein’s theory of special relativity are essentially equations. Words are needed only to interpret the terms. Darwin’s theory of evolution by natural selection has obstinately remained in words since 1859. …
    http://biologicinstitute.org/2.....emagician/

    More notes:

    In computer science we recognize the algorithmic principle described by Darwin – the linear accumulation of small changes through random variation – as hill climbing, more specifically random mutation hill climbing. However, we also recognize that hill climbing is the simplest possible form of optimization and is known to work well only on a limited class of problems.
    Watson R.A. – 2006 – Compositional Evolution – MIT Press – Pg. 272
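Watson’s point can be made concrete. Below is a minimal sketch of random mutation hill climbing on a toy bit-string landscape (the “onemax” problem, my own choice of illustration: fitness is simply the number of 1-bits, a single smooth hill, which is exactly the limited class of problem where hill climbing works well):

```python
import random

def hill_climb(fitness, genome_len=32, steps=2000, seed=0):
    """Random mutation hill climbing: flip one random bit per step and
    keep the change only if fitness does not decrease."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(steps):
        i = rng.randrange(genome_len)
        candidate = genome[:]
        candidate[i] ^= 1  # point mutation
        if fitness(candidate) >= fitness(genome):
            genome = candidate  # accumulate the small change
    return genome

# "Onemax": fitness is just the number of 1-bits.
onemax = sum
best = hill_climb(onemax, genome_len=32, steps=2000)
print(onemax(best))  # climbs to or near the optimum of 32
```

On a rugged or deceptive landscape the same loop stalls on local optima, which is the restriction Watson is pointing at.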

    At last, a Darwinist mathematician tells the truth about evolution – November 2011
    Excerpt: 7. Chaitin looks at three kinds of evolution in his toy model: exhaustive search (which stupidly performs a search of all possibilities in its search for a mutation that would make the organism fitter, without even looking at what the organism has already accomplished), Darwinian evolution (which is random but also cumulative, building on what has been accomplished to date) and Intelligent Design (where an Intelligent Being selects the best possible mutation at each step in the evolution of life). All of these – even exhaustive search – require a Turing oracle for them to work – in other words, outside direction by an Intelligent Being. In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.”
    8. Of the three kinds of evolution examined by Turing (Chaitin), Intelligent Design is the only one guaranteed to get the job done on time. Darwinian evolution is much better than performing an exhaustive search of all possibilities, but it still seems to take too long to come up with an improved mutation. http://www.uncommondescent.com.....evolution/

    Oracle must possess infinite information for ‘unlimited evolution’
    http://www.uncommondescent.com.....ent-408176

    THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010
    Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
    http://www.faqs.org/periodical.....27241.html

  9. corrected link:

    THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010
    http://www.firstthings.com/art.....ematicians

  10. eigenstate:

    Interesting comments. If I am understanding your description properly, you have a process that introduces particular classes of changes to a pre-existing program and then applies a filter to sift through the changes. This may look, superficially, like some kind of Darwinian process, but it really is not. We should not let the term “evolutionary algorithm” confuse us into thinking that the program actually demonstrates naturalistic evolution as it would need to occur in biology if the naturalistic story is true.

    You state: “We put in try/catch blocks and fastidiously avoid random or unplanned state changes in our finite automata precisely because we CANNOT operate like biology does, and because we have what biology does not — a governing design agent that works to optimize goals with minimal resources and cycles.”

    Therefore, I take it the process would not work well over a practical period of time with random or unplanned state changes or without some optimization of goals. These would be precisely the kinds of things that cannot be controlled in a naturalistic evolutionary scenario. Further, it is certainly fair to say that the amount, complexity and depth of programming in, say a human, dwarfs anything we are currently doing with our machines.

    You seem to be suggesting that we should “let go of the need for a human (or an intelligence) to be driving all parts of the system” and just rely on the vague idea of stuff-happens-plus-lots-of-time, when your own work in fact demonstrates the need for that intelligent intervention. Further, as I understand it, you are not suggesting that the evolutionary algorithm wrote the top-down architecture in the first place. Thus, your algorithm isn’t really a major creative force in any sense of the word. It may, within the carefully planned parameters you have outlined above, be a useful diagnostic or auditing tool to assess the pre-existing program. But even if we concede a small number of potential positive changes flowing from this audit (I don’t think your description matches what would occur in biology, but I’m willing to concede the point for a moment), the evolutionary algorithm is not the primary creative force by any stretch of the imagination.

    Chance + time (+law, of course, but that always exists and goes without saying) can produce some things. It is wonderful at breaking things, which is a large part of the whole auditing process. Very occasionally something useful might be discovered, *when an algorithm is run in a very narrow space and with carefully-crafted parameters.* But evolutionary algorithms have never been shown to produce large amounts of creative content on their own. The amount of time and resources needed to accomplish even piddling biological tasks vastly dwarfs the available resources of the known universe.

    I understood Gil’s comment to refer to the idea of a naturalistic process being responsible for the incredible information content and creative genius we see around us. I didn’t think he was quibbling with the idea that in narrow applications a trial-and-error process might be a useful tool if carefully tailored by the very intelligence the materialist would seek to eliminate from the process.

  11. @Eric Anderson

    Interesting comments. If I am understanding your description properly, you have a process that introduces particular classes of changes to a pre-existing program and then applies a filter to sift through the changes. This may look, superficially, like some kind of Darwinian process, but it really is not. We should not let the term “evolutionary algorithm” confuse us into thinking that the program actually demonstrates naturalistic evolution as it would need to occur in biology if the naturalistic story is true.

    EA doesn’t pretend to be an analogy. Rather, it harnesses what we’ve learned in biology and puts it to use in computing. Impersonal nature (or what in computing we’d call “brute force processing”) is a powerful designer; it’s just very expensive (by our human standards) in terms of the cycles and resources needed for the exploration of the search landscape to accumulate valuable adaptations and structures. Nature doesn’t have anything to offer in terms of brains (so far as we can tell), but what she lacks in brains, she makes up for in a wealth of deep time and resources.

    Evolutionary algorithms don’t and can’t simulate all the attendant biology, but they do harness the basic engineering principle that nature has worked out: persistent, scaled landscape searches with a cumulative adaptation filter can produce a wealth of novel and sophisticated structures, structures we may find extraordinarily valuable. In network traffic analysis, EA provides a method to “let the search go”, and out of the vast number of “duds”, effective and ingenious structures for identifying target patterns and detecting anomalies arise. So EA just “virtualizes” some of the core design principles we’ve learned from the (putatively) impersonal design products of nature.

    Therefore, I take it the process would not work well over a practical period of time with random or unplanned state changes or without some optimization of goals. These would be precisely the kinds of things that cannot be controlled in a naturalistic evolutionary scenario. Further, it is certainly fair to say that the amount, complexity and depth of programming in, say a human, dwarfs anything we are currently doing with our machines.

    I think that’s fair to say. It’s certainly true that we’ve not run enough generations, across all the EA programs man has ever written and run, to reach reasonable odds of producing something on the order of the “goo-to-the-zoo-to-you” developmental pathway of humans. But even so, the objection would have to be “I believe in inches, but I can’t see them adding up to miles”. Even fairly modest EA implementations demonstrate stochastic creativity toward novel structures. They are predictably and routinely produced; it’s just massive iterations over a (pseudo)random mutation algorithm. The key lever is the fitness function, the filter. In the programs I’ve worked with, the fitness function has nothing to do with biology: we’re looking for complex trigger patterns that can detect network intrusion and other traffic anomalies that humans (as intelligent designers) are really bad at detecting.

    The whole of the EA computing community, though, has not produced even a small fraction of the equivalent “cycles” of physics and biology over the history of our universe. Not even close. Deep time is… really mind-bendingly deep.
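The mechanism eigenstate describes (random variation plus a fitness filter that only scores candidates after the fact) can be sketched minimally. This is a generic toy with an invented fitness criterion of my own (bit alternation), not his network-analysis code; note that nothing in the code spells out a winning genome, only the scoring rule:

```python
import random

def mutate(rng, genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [b ^ (rng.random() < rate) for b in genome]

def evolve(fitness, genome_len=24, pop_size=40, generations=120, seed=1):
    """Minimal evolutionary algorithm: random variation, then a fitness
    filter that "sweeps up behind" via truncation selection. Survivors
    are kept unchanged, so the best score never decreases."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng, g) for g in survivors]
    return max(pop, key=fitness)

# End criterion: reward alternation between neighbouring bits -- a
# structural property stated only as a score, never as a design.
def alternation(genome):
    return sum(a != b for a, b in zip(genome, genome[1:]))

best = evolve(alternation)
```

The division of labour matches the comment: the fitness function defines what counts as success, while the variation-plus-filter loop supplies the candidate designs.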

    You seem to be suggesting that we should “let go of the need for a human (or an intelligence) to be driving all parts of the system” and just rely on the vague idea of stuff-happens-plus-lots-of-time, when your own work in fact demonstrates the need for that intelligent intervention.

    Not saying that. It’s not vague, or the least bit mysterious. It’s just computing automata. It’s boring as hell, but what’s happening there is perfectly specific and well understood. When you work with genetic algorithms in an applied way (when you are really trying to get something done, professionally), both the power and speed of “intelligent design” by humans are underscored, AND the power and slowness of mindless, undirected processes at coming up with solutions and structures that humans fail to develop are proven out as well.

    These are two alternative heuristics for design. “Human design” is highly efficient compared to “impersonal design” with respect to conserving time and resources. Human design produces some types of solutions that impersonal design never, or at least fantastically infrequently, produces. But impersonal design produces solutions humans are too stupid (or rather, just too impatient) to explore. And while it’s lavish in the amount of resources and time it demands, it is positively genius, in a humbling way, in some of the solutions it produces. It’s just as unlikely that man would come up with some of the “impersonal design” designs.

    It’s a weird, even spooky sensation, when you find some creative results that work coming from brute force, impersonal design process of EA. I can totally understand the human superstitious reflex — wow, this is like miraculous, or something. But it’s just mundane computing machinery.

    Further, as I understand it, you are not suggesting that the evolutionary algorithm wrote the top-down architecture in the first place.

    No, the operating environment, like nature and natural law in biology, is an enclosing context.

    Thus, your algorithm isn’t really a major creative force in any sense of the word.

    It is. If you crawled through every line of code, you would not be able to find any code that indicates or steers the program toward any particular solution, or pathway. It uses the (pseudo-)random functions of the OS libraries to drive changes to the parameters, and the executable code itself. For any of the very cool and elegant solutions we came up with for detecting network traffic anomalies, all of the design for that was synthesized from mass iterations over a search landscape, with a fitness filter sweeping up behind to accumulate the positive adaptations.

    When you look at how this dumb, impersonal bit of code and computing machinery produced the solution, it does seem somewhat magical. But humans have a hair-trigger magic reflex, and this situation is really enlightening because we wrote it all ourselves and know exactly how much design we baked in for those solutions — none. We can specify the end criteria, but the criteria are not the design. Many ways to skin a cat, etc.

    It may, within the carefully planned parameters you have outlined above, be a useful diagnostic or auditing tool to assess the pre-existing program. But even if we concede a small number of potential positive changes flowing from this audit (I don’t think your description matches what would occur in biology, but I’m willing to concede the point for a moment), the evolutionary algorithm is not the primary creative force by any stretch of the imagination.

    I think that’s where you are mistaken. The operating environment is key to defining the solution criteria — what succeeds and propagates, and what does not — but the searching of the landscape, random variation with accumulation of successful adaptation is the creative source for the designs of the successful solutions (and the designs of the unsuccessful non-solutions, of course). In terms of the designs themselves — not the solution criteria, and these are different elements — that little bugger of a program IS the creative force. We just send it on its way searching the landscape, and sit back after (seemingly interminable) hours and days and weeks, and watch it create.

    A key limitation right now, and the reason EA is not more pervasive in commercial computing than it is, is the difficulty of assessing fitness in real-time, or in any way that incorporates massive numbers of iterations and some substantial depth in gauging how good the candidate solution really is. That is not, however, a problem for the physical environment — it resolves everything, everywhere, in real time. That’s just physics. It will be a long time before we have even a modest portion of the kind of computing power nature brings to bear in assessing fitness, moment by moment.

  12. eigenstate,

    Three comments:

    1) I’m thoroughly familiar with evolutionary algorithms.
    2) Your challenge is empirically unsupported speculation concerning the applicability of computational evolutionary algorithms to the real world of biology.
    3) The probabilistic resources are not available in the real world to make your stuff work at even the cellular level, much less at the macro level.

    You might enjoy clicking on my name and downloading my brain child, WCC. It’s free. It uses all the most sophisticated search algorithms gleaned from the chess world, plus a dozen more of my own invention.

    People like you fascinate me. You’ve attacked my credibility, character, motivation, integrity, and even my memory about my own past.

    This is evidence of some kind of pathological obsession on your part.

    You are a religious fanatic, and I’ve attacked the creation story of your religion with the very science you thought supported it.

    This explains your hostility and antipathy.

  13. @Gil,

    I haven’t attacked your memory. I think you truly believe you were a “Dawkins-style atheist”. You just don’t know any better; that claim doesn’t even begin to pass the smell test, given what you’ve written over a long period of time. It’s not a big deal; it’s just something that sticks out conspicuously when you get your “breastplate of righteousness” on and get going.

    I’m just not interested in checkers; I’m a go player, and have done some work on chunking algorithms and pattern recognition that I hoped might serve a group effort to build a computerized go-playing program (there’s a challenge you could really brag about!). My colleague often reminded the folks who joined the project that “Go is to chess what chess is to checkers” in terms of complexity and depth. Checkers was the example we used of simple brute force search: you can run enough plies quickly enough on modern hardware to power a world champion (vs. humans, anyway) checkers program with brute force search alone.
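The “brute force search” mentioned here is ordinary fixed-depth game-tree search. A minimal negamax sketch over a toy take-away game (the game and interface names are my own, chosen for illustration; a real checkers engine adds move generation, alpha-beta pruning, and a leaf evaluator):

```python
def negamax(state, depth, game):
    """Fixed-depth brute-force search: examine every line of play to
    `depth` plies and back up the best score for the side to move.
    No heuristics; strengthen it simply by adding plies."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # score from the side to move
    best = float("-inf")
    for move in game.legal_moves(state):
        # The opponent's best score, negated, is our score for this move.
        score = -negamax(game.play(state, move), depth - 1, game)
        best = max(best, score)
    return best

class TakeAway:
    """Toy game: remove 1 or 2 from a pile; taking the last object wins."""
    def is_terminal(self, pile):
        return pile == 0
    def evaluate(self, pile):
        # Empty pile: the previous player took the last object, so the
        # side to move has lost. Non-terminal leaves score 0 (unknown).
        return -1 if pile == 0 else 0
    def legal_moves(self, pile):
        return [1, 2] if pile >= 2 else [1]
    def play(self, pile, move):
        return pile - move

print(negamax(3, depth=10, game=TakeAway()))  # -1: a pile of 3 is a loss
```

The point about plies is visible here: the work grows exponentially with depth, which is why simple games like checkers yield to raw search long before chess or go do.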

    We had a guy participate who was affiliated with a program named “Chinook”, which I think was competitive back in the day. Maybe you are familiar with it.

    In any case, your claim to thorough familiarity with evolutionary algorithms is not grounded in your work on “WCC” as I read it. If you think that background, the search computation for a checkers game, qualifies, then I suggest we’re not talking about the same subject.

    As for the rest… hand waving. If you want to show me your calculation of the probability figures for the resources required to make a cell run, I’d like to see that. I’m regularly challenged with the Big Number Tiny Probability Game by creationists, and it’s always interesting (and here, especially so, coming from one thoroughly familiar with EA and computing mechanics, etc.) to see the math.

    But I won’t hold my breath. You’ve got the fulfillment of Lewis driving your math now, and beyond that, the spirit of God. So it hardly seems fruitful for either of us, in light of that.

  14. Firstly, Jonathan, thanks for responding. A few asides to begin:

    You seem perhaps a little derisive that my UD handle doesn’t contain my full name. So, for whatever it’s worth, my name is Paul McBride. I am a PhD candidate in New Zealand, studying molecular ecology. I have research interests that intersect with the junk DNA debate and the broader selectionist–neutralist debate in molecular evolution. I’m intrigued by the interactions between ecology and the patterns and rates of sequence evolution (predominantly in vertebrates, as far as my own research goes).

    I use the term ‘putative junk’ because it reflects the potential for functional components to be scattered at low densities through the genome, while primarily reflecting the balance of molecular and population-genetic evidence from the last 40 years that large sections are likely functionless or close to it.

    I’m not particularly interested in debates over what percentage of our genome is currently known to be functional.

    You have previously stated that most of the genome is functional (in Myth: “the idea that most of our DNA is junk became the dominant view among biologists. That view has turned out to be spectacularly wrong”), and in doing so have made strong – and in my view, inaccurate – claims about the presence or otherwise of junk. So, you do seem broadly interested in proportions of genomic junk; I take it you only mean you don’t care about exact percentages. Fine, but am I wrong to infer you have made the argument for >50% functional DNA?

    Whatever the current percentage might be, it is increasing every week as new discoveries are reported—and such discoveries will probably continue into the indefinite future.

    This is weak for the reason I have given a dozen times already: it is decidedly qualitative. The real question must be ‘increasing by how much every week?’ I gave the analogy in the previous thread that the discovery of function is rather like the setting of world records in the 100m sprint: just because the world record is occasionally broken, we wouldn’t conclude that the record is eventually approaching zero; rather, it is decreasing to a limit. This can only change if/when new functional classes of genomic elements are found.

    Leaving the world of analogy, discoveries of micro RNAs, which are regularly brought up in discussions of newly discovered genomic function, are a case in point of incremental increases. Imagine you find a new one every week and never slow down. In 20,000 years you’ll have explained about 1% of the human genome, still leaving roughly 90% unexplained. Yet people still hold miRNA discoveries up as evidence that any talk of junk is premature! Such claims rely on the discussion remaining qualitative, rather than addressing the question of “how much?”
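The arithmetic behind this point is easy to check, assuming the standard round figures of roughly 22 nucleotides per mature miRNA and about 3.2 billion base pairs in the haploid human genome:

```python
# Standard round figures, not precise values.
GENOME_BP = 3.2e9   # haploid human genome size, base pairs
MIRNA_NT = 22       # typical mature miRNA length, nucleotides
PER_YEAR = 52       # one new miRNA discovered every week

years = 20_000
explained_bp = years * PER_YEAR * MIRNA_NT   # 22.88 million bp
fraction = explained_bp / GENOME_BP
print(f"{fraction:.2%}")  # roughly 0.7% of the genome after 20,000 years
```

So even a millennium-scale run of weekly discoveries accounts for under one percent of the genome, which is the quantitative point the analogy was making.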

    Sure, new functions are being discovered in the human genome. Sure, the genome is complex, with many difficult-to-predict interactions. Nothing about either of these observations should lead us to conclude that there is little genomic junk. When we consider the other lines of evidence here (e.g. mutational load) in the context of a) the mutational origins of these sequences and b) the population-genetic fate of these mutations, the balance of evidence falls squarely on one side of this debate. To caricature my position as “Darwin of the gaps” demonstrates a grave misunderstanding of my reasoning, ignoring the positive evidence for the existence of genomic junk. Incidentally, it is also inaccurate given the decidedly non-selectionist origins of the arguments.

    Onto mutational load (e.g. Ohno’s work):

    The proper way to reason scientifically is not “Ohno predicted theoretically that the vast majority of our DNA is junk, therefore it is,” but “If much of our non-protein-coding DNA turns out to be functional, then Ohno’s theoretical prediction was wrong.”

    You’ve failed to address the argument I have made. Yes, there is current debate about the precise mutation rate – but despite your quote from Kondrashov & Kondrashov, there are emerging direct estimates of the human germline mutation rate. Let’s also note that predictions of the mutation rate come from several, independent lines of inquiry and converge on similar numbers.

    A more precise human mutation rate estimate will emerge over the next couple years. This will help to determine an upper limit for the number of nucleotides that can be under purifying selection. Critically, though, not knowing the exact number has little bearing on the mutational load argument as a case for the existence of junk – it only bears on the question of how much.

    Indeed, to dismiss the mutational load argument as purely theoretical would suggest there is nothing to support it. This is far from true. Time dependence in rates of molecular evolution supports the idea of a deleterious mutational input that is removed by selection over successive generations. The time dependence is this: between recently diverged species, the spectrum of differences contains an elevated proportion of non-neutral changes (i.e. an elevated dN:dS ratio). This ratio declines as the distance between pairs increases, as those differences proportionately change from mutations/low-frequency polymorphisms to fixed substitutions. Read Ho’s seminal paper or his review. This is well supported by another recent paper from a separate research group, which shows time dependency for non-synonymous but not synonymous change, as predicted from the mutational load argument. Do you have an alternative explanation for these phenomena?
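For readers unfamiliar with the notation: dN:dS compares nonsynonymous changes per nonsynonymous site with synonymous changes per synonymous site; values near 1 suggest neutrality, and purifying selection pushes the ratio below 1. A toy calculation with invented counts (not Ho’s data) illustrates the time dependence described above:

```python
def dn_ds(nonsyn_diffs, nonsyn_sites, syn_diffs, syn_sites):
    """Crude per-site ratio. Real estimators (e.g. Nei-Gojobori) also
    correct for multiple hits, which this sketch ignores."""
    dn = nonsyn_diffs / nonsyn_sites
    ds = syn_diffs / syn_sites
    return dn / ds

# Recently diverged pair: segregating deleterious variants inflate dN.
recent = dn_ds(30, 700, 20, 300)
# Anciently diverged pair: selection has purged them, so dN:dS falls.
ancient = dn_ds(60, 700, 120, 300)
print(round(recent, 2), round(ancient, 2))  # prints 0.64 0.21
```

The declining ratio with divergence time is the signature the comment attributes to deleterious input being filtered out by selection.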

    In short, you cannot condemn the mutational load argument as simply being theoretical. Not only is it the best current explanation for observations such as time dependence in molecular rates, but dismissing it would at least require an explanation for how populations otherwise avoid the accumulation of deleterious alleles.

    On sequence conservation:

    Although sequence conservation in divergent organisms suggests function, the absence of sequence conservation does not indicate lack of function. Indeed, according to modern Darwinian theory, species diverge because of mutational changes in their functional DNA. Obviously, if such DNA were constrained, then evolution could not occur.

    I am surprised by this line of argument. We cannot examine these data in depth and arrive at this conclusion. Putative bursts of selection occur with distinct molecular signatures, whereas general, unconstrained evolution occurs at approximately the neutral divergence rate (i.e. the mutation rate), without marked, lineage-specific differences.

    In any case, lineage-specific increases should not be interpreted as accelerated positive selection without evidence. Relaxation of purifying selection will produce the same pattern. Only a hardline selectionist would argue otherwise. If we were to accept that every acceleration was evidence of positive selection, we would be forced to believe that synonymous sites are under strong positive selection, and that pseudogenes experience accelerated positive selection relative to their functional homologues.

    On removing putative junk:

    In 2010, Rubin was part of another team of scientists that engineered mice missing a 58,000-base stretch of so-called “junk” DNA. The team found that the DNA-deficient mice appeared normal until they (along with a control group of normal mice) were fed a high-fat, high-cholesterol diet for 20 weeks.

    The deletion was made at a locus previously linked to heart disease, so it was already known to have phenotypic effects. Undoubtedly, there will be plenty of such findings in the future. But will they explain the majority of the genome, or little fragments of it?

    On pervasive transcription, I think this is a much more open question than areas such as mutational load. However, if you wish to discuss methodological problems, you should at least, for balance, describe the issues with tiling microarrays and the likelihood of low-level transcription being noise. Transcription does not equal function. Transcriptional noise/junk is expected from RNA polymerase binding.

    I would like to say much more but this is already a lengthy response for a comments thread where it will get lost amongst the rest.

  15. eigenstate,

    You want math? Well let’s see your position’s math pertaining to the FEASIBILITY that blind, undirected processes can produce a living organism from non-living matter.

    Your position is regularly challenged and it never responds.

    Why is that?

  16. Paul,

    The only way to make your case is by removing the alleged junk and having the organism develop and live without any issues.

    Absent that, all you have is arm/hand waving.

  17. Eigenstate,

    “It will be a long time before we have even a modest portion of the kind of computing power nature brings to bear in assessing fitness, moment by moment.”

    But why is it doing it? Just because? All this chirality, fractality, feasibility, probability stuff is just words. Why is it doing it?

  18. Dear Professor Wells,

    You once wrote an essay about how Darwin was wrong about anything. I’ve tried to find it in the archives but I can’t. Can you send me a link? Thanks.

    Noam

  19. paulmc:

    Interesting comments, and hopefully Jonathan will have a chance to respond.

    Question: Do you agree that there are thousands (probably millions, but I’ll stick with thousands for now) of known biological functions, the programming for which has not yet been discovered? Where does that programming reside?

  20. No, I don’t think I would quite agree with that. Programming implies far more genetic determinism than there is evidence for. This seems to touch on that old and poor metaphor that the genome is a ‘blueprint’ for the organism. Even the hardline genetic determinists of old, e.g. 1980s Richard Dawkins and sociobiologists of his ilk, never took genetic determinism that far.

    There are certainly many phenotypes that have unexplained or only partially explained genetic components. Many are quantitative traits – the result of multiple genes interacting. While some phenotypes will undoubtedly have links to putative junk, others will likely resolve as the results of interactions between current genes and the environment. I expect they will be derived with less determinism than what, at least for me, the word ‘programming’ implies.

    Also, even if you are correct to use the word programming, I will still come back to the same point I made earlier: most of the genome does not have the degree of sequence conservation required for the specified biological functions of the type I think you are referring to. If those sequences freely accumulate change, how is it that they can perform a function that is as specified as a conserved gene’s function? Also, under a programming analogy such lack of specificity makes no sense, as even small changes to the program will have an effect on how it runs. Even the lines of a computer program that aren’t run by the computer need specificity to be useful (e.g. comments).

  21. Eric Anderson,

    Sorry I didn’t respond in the other thread; there had been no reply when last I looked.

    I simply think you are being premature, and perhaps swept away by Wells’s and your own wishful thinking. It is not denied that function is being found in areas previously thought ‘junk’. But it is nowhere near enough to light the bonfire under your evolutionist-roasting program, even allowing for extrapolation. I am prepared to be wrong (contrary to the tiresome, and hopelessly wide-of-the-mark, refrain that ‘evolutionists’ have some kind of philosophical attachment to junk). But that awaits a fivefold increase in the current functional percentage just to pass the 50% mark.

    Junk DNA is, in essence, just another example of the invalid “bad design” line of arguments. The reason I say their feet need to be held to the fire is because they have clearly gone on record saying that a large amount of non-functioning DNA is evidence for evolution and against design.

    There remains a substantial amount of apparently non-functioning DNA – I’m not a huge fan of the “what-the-designer-would-have-done” argument, but those authors who point it out have not been given any reason to retract. We know of about 1-2% more functional DNA than we had a few years ago. That simply reduces the junk pile and increases the functional pile, trivially. We remain at over 90% apparently non-functional. No-one is going to don sackcloth and ashes to appease you, and certainly not with those figures. But the argument is more about whether the figure is likely to change substantially. For that to happen, we don’t need cheeseparing, we need a whole different type of functional DNA.

    Now that the evidence is starting to lean the other direction I am not at all interested in letting them off the hook easily with any backpedaling (although, astoundingly, some of the folks I cited haven’t even started backpedaling, as they seem to be oblivious about recent research).

    It is not astounding at all. They have absolutely no reason to start backpedalling. Different issue, but equivalent reasoning: if we found that 85% of the population was gay, when we had thought it was 90%, would that mean anyone proposing substantial levels of heterosexuality needed a good roasting?

    The bottom line on so-called “junk DNA” is this.

    1. There is very little logical reason to assume that there is a lot of junk DNA.

    No-one assumed there was junk. Indeed it, like ‘selfish DNA’ (which contributes a large fraction of junk), was subject to resistance until more evidence began to be gathered. Ohno: “It seems as though ‘junk DNA’ has become a legitimate jargon in a glossary of molecular biology. Considering the violent reactions this phrase provoked when it was first proposed in 1972, the aura of legitimacy it now enjoys is amusing, indeed.” Ohno suggested that there were too many base pairs to have them all functional, given defensible mutation rate and population size assumptions. He also felt that the gene-duplication model of novel function would create more duds (pseudogenes) than successes. These arguments date from the days before genome sequencing. Now we have genomes, we can characterise the various classes, and it’s not looking good, contrary to the Creationist/ID spin.
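Ohno’s load reasoning is simple arithmetic. Here is a sketch with illustrative round figures (the mutation rate, genome size, and deleterious fraction below are standard ballpark numbers of my choosing, not values from the thread); the exponential form is the classic Haldane–Muller load result, under which mean population fitness falls like exp(-U) for U new deleterious mutations per genome per generation:

```python
import math

MU = 1e-8       # mutations per site per generation (round figure)
GENOME = 3.2e9  # haploid genome size, base pairs
FRAC_DELETERIOUS = 0.3  # illustrative share of mutations in functional
                        # sequence that are harmful

def deleterious_per_generation(functional_fraction):
    """Expected new deleterious mutations per genome per generation (U)."""
    return MU * GENOME * functional_fraction * FRAC_DELETERIOUS

for f in (0.1, 0.5, 1.0):
    U = deleterious_per_generation(f)
    # Classic load model: mean fitness relative to mutation-free ~ exp(-U).
    print(f"functional={f:.0%}  U={U:.1f}  relative fitness ~ {math.exp(-U):.4f}")
```

The qualitative conclusion is insensitive to the exact inputs: if most of the genome were functional, the deleterious input per generation would impose a load that defensible population sizes could not purge, which is the argument Ohno was making before genome sequencing existed.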

    2. In contrast, there are very good reasons to think that the great majority, though perhaps not all, of junk DNA in fact has function. I will give just three off the top of my head.

    A. First of all, the non-functioning DNA argument (just like vestigial organs) has an abysmal track record, so we should be very cautious about jumping on that bandwagon. Introns are now known to be critical to alternative splicing.

    The main issue is with the length of intronic sequence. All you need to alternatively splice is an exon boundary – a splice site. There is no obvious reason why exon boundaries as represented by introns need to be 10 times longer than the exons they bound. As with transposons, the suspicion is ‘selfish DNA’ accumulating in the gaps.

    Pseudogenes help regulate DNA.

    Some pseudogenes!

    LINEs and SINEs have known functions, as do certain transposons.

    Some transposons (which class includes LINE and SINE) are in coding/regulatory positions. This cannot explain why there are several million of these in other positions. These are decaying at a rate consistent with lack of function. A particular arbitrary sequence, migrated to an exon, can donate up to 6 different peptide sequences (3 frame shifts, sense and antisense). Given the lack of conservation (because they are junk!) the number of possible random sequences goes up still further. But it is a stretch to suggest that >2 million transposons (whose transpositional ability is frequently broken) are there in order to donate such adventitious mutational sequence. We go to great “hi-fidelity” lengths to avoid point mutation, and then cast random peptide sequences about like so much confetti? Nah.
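    The claim of 6 possible readings (three frame shifts on each of the sense and antisense strands) can be illustrated with a short sketch; the 12-base fragment below is made up purely for the example:

```python
# Six reading frames of a DNA sequence: three frame shifts on the sense
# strand plus three on the antisense (reverse complement) strand.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Return the antisense strand, read 5' to 3'."""
    return seq.translate(COMPLEMENT)[::-1]

def six_frames(seq):
    """Return the six possible codon readings of a DNA sequence."""
    frames = []
    for strand in (seq, reverse_complement(seq)):
        for shift in range(3):
            codons = [strand[i:i + 3] for i in range(shift, len(strand) - 2, 3)]
            frames.append(codons)
    return frames

# A hypothetical 12-base fragment, chosen only for illustration.
for frame in six_frames("ATGGCCATTGTA"):
    print(frame)
```

    Each frame would translate to a different peptide, which is the sense in which one arbitrary sequence can donate up to six.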

    C. From a basic engineering standpoint the idea of large amounts of useless DNA is problematic.

    DNA is not engineering! No engineering project had to worry about DNA. Only organisms do. And if it proves to be no particular disadvantage, there is no pressure favouring its loss. Everyone imagines it to be costly; no-one demonstrates that (to a well-fed multicellular eukaryote) it actually is.

    I don’t know whether we can put a precise number on it, but I would venture that more than 90% of DNA will eventually be shown to have function.

    That is very unlikely – due to intergenic transposons and virus fragments alone (50%). If a ‘functional’ part of a computer program were also repeated (but not executed) in 50% of that same program, we would not be correct in our belief that the discovered function gave us a clue as to the unexplained repeats. There is an explanation for these repeats, but function is not it.

  22. @Eugene S

    Why is it doing it?

    Don’t know, don’t see a way I or we can know. I’m not even convinced “Why?” is a coherent question or carries any semantic cargo in this context.

  23. Why is it doing it?

    Don’t know, don’t see a way I or we can know. I’m not even convinced “Why?” is a coherent question or carries any semantic cargo in this context.

    Why does hot air rise? Why does an apple fall? Why do ripples on the surface change the reflection on the rocks below? All of these were legitimate questions, each ending in legitimate knowledge.

    If asking why life is “doing it” needs an extra dash of semantic cargo to be coherent, then I say give it all it needs.

    But I think this particular objection only arises when probing questions are asked of someone who doesn’t want to, er, ask them with you. So you get the observed pat on the head, along with the ridiculous notion that asking ‘why’ needs some semantic cargo in order to rescue itself from being incoherent.

    (btw, the context was in the question, and you properly understood it – as evidenced by the first sentence in your response)

  24. @Upright Biped,

    I think you’ve gotten confused about what the “it” is there, and what the “doing it” is. Eugene S was replying to my comments about nature — physics — resolving everything in realtime, constantly, over and over. This is the massive computing power that powers realtime fitness testing, but the “it” had nothing to do with life. The “it” was “nature”, and “doing it” was “computing everything in real time”.

    This makes Eugene’s question intractable, the answer unknowable. It’s questioning the outermost context in our universe. There is no enclosing context we are aware of or can speak about to provide any answer, or the basis for knowledge.

    Sorry that got you confused towards a different question (why “life is ‘doing it’”). That is a tractable question, in principle, but it wasn’t the question in view when you responded. Why does nature as quantum computer compute as it does? That’s what was in view, and what I responded to.

  25. eigenstate,

    Is English not your native language?

    it’s just something that stick out conspicuously
    I’m just no interested in checkers
    In any case, your claim to thoroughly familiarity
    but if you are thinking that’s your backgrounding, the search computation for a checkers game
    If you want to tell me your calculation as to the probability figures for the resources required to make a cell run
    But I won’t hold your breath. You’ve got the fulfillment of Lewis now driving your math, and forget that, the spirit of God. So it hardly seems fruitful for either of us, in light of that.

    Lewis has nothing to do with “driving my math.”

    We had a guy participate who was affiliated with a program named “Chinook”

    You might be interested to know that I met Jonathan Schaeffer, the primary author of Chinook, at the first computer olympiad in London. His program placed first and mine placed second. We had an ongoing friendship for many years, and I attended an AI conference at his university in Alberta. As it turned out, I recomputed his eight-piece endgame database and found errors, which he corrected based on my results. I then went on to compute perfect-play databases which have not been duplicated. You can read all about this at my website.

    You are right about the game of Go, which involves a more difficult factorial problem, which is not amenable to a traditional tree-search/evaluation-function solution. What this elucidates is the incredibly sophisticated pattern-recognition capabilities of the human mind, which further undermines Darwinian fantasies about the creative powers of random errors filtered by natural selection.

  26. eigenstate,

    You may think that Eugene was wondering why inanimate matter was just “doing it” … er, computing itself.

    But I don’t.

    And given that your conversation was based on algorithms modeled on biological evolution, it’s a fair bet to make.

    If I am wrong, I am happy to retract.

  27. @Upright Biped,

    Here is what Eugene S quoted from ME in his reply to me, in which he asked “why is it doing it?”:

    “It will be a long time before we have even a modest portion of the kind of computing power nature brings to bear in assessing fitness, moment by moment.”

    (my emphasis)

    The “it” is nature, and it is computing itself, and this is the resource for testing biological fitness — physics. We were talking about evolutionary algorithms, which are indeed informed by the dynamics of biology, but my point was about the infrastructure problem, the shortage of computing power needed to ENABLE quality fitness testing. That’s a physics problem. We cannot marshal anything like the computing power nature brings to bear on reality, the Big Quantum Computer that is constantly resolving that reality.

    I can understand the confusion, but rereading that, the only “it” I can identify there is nature-as-quantum-computer. Why it computes as it computes I understand to be an intractable question.

  28. 3.1.1.2.5

    “‘Why’ is an intractable question.”

    Excellent. Thanks. I can even see grounds for doubting if there is any semantic cargo in this question. This is as much as materialistic thought can get. Fair enough. Anyhow, we need an oracle to learn why. Who might that oracle be? I think it is the One who designed this Big Quantum Computer of nature.

  29. eigenstate:

    “I can understand the confusion, but rereading that, the only “it” I can identify there is nature-as-quantum-computer. Why it computes as it computes I understand to be an intractable question.”

    ***

    Eugene_S:

    “‘Why’ is an intractable question.”

    “Excellent. Thanks. I can even see grounds for doubting if there is any semantic cargo in this question. This is as much as materialistic thought can get. Fair enough.”
    ====

    It does seem to be on equal footing with Shapiro and Yockey, who when pressed on the OOL question simply reply with “It’s unknowable” or “It’s unsolvable”.

    It’s sort of like asking a member of Christendom to explain the “Trinity”. The answer usually is – “It’s a Mystery”.

    Either way, the question askers of both ideologies are still left in the dark with no educational thirst-satisfying answer to be obtained. *smile*

  30. UprightBiped,

    Maybe it is off topic, but here goes.

    Yes, I was wondering why life was doing it in the first place. But as far as I understand Eigenstate, “it” meaning nature presents an even broader context, which I cannot dispute. However, what I feel must be disputed is the point of view which in principle does not distinguish life from inanimate matter. I believe that life is also an intractable problem. It must be so: it is an extra “layer” of something science cannot define over physics and chemistry. Anyway, I am happy with Eigenstate’s purely materialistic response because I think that ultimately there is nothing that can force us to believe in God apart from our self. If there can be any act on our part in which we can fully exercise our free will, what is it if not the decision to believe?

    So the only problem I have with ID at the moment is at the heart of this. Can there be in principle anything that pushes us via scientific means (observe-measure-predict) towards religious belief? My concern is therefore more religious/philosophical than scientific. I have no scruples as a scientist as regards ID: at the end of the day, why can’t there be a means to infer design?

  31. 3.1.1.2.9

    Eocene,

    Either way, the question askers of both ideologies are still left in the dark with no educational thirst-satisfying answer to be obtained.

    It is already a huge achievement to get materialist scientists to realise that reality is bigger than our understanding of it can ever get. The concept of the unknowable, which believers have been familiar with since day 1, is now recognised by science, which is nice.

  32. Eugene_S:

    “The concept of the unknowable, which believers have been familiar with since day 1, is now recognised by science, which is nice.”
    ====

    Sad thing is when making those claims of ‘unknowable’ or ‘unsolvable’, at least in their own minds it allows them an excuse to not deal with the question and move on to terms like ‘directed’ or ‘guided’, which they have no right to and which from the beginning they used to poke fun at. The problem for them, though they’ll never admit this, is that the data points towards direction, purpose and intent. The need then is to hijack the words/terms as their own and actually lie about what they think those processes do when it comes to unobserved MACRO. When pinned to the carpet on their continued Faith-Based statement making, they proceed to religiously and dogmatically defend their newfound position, which they insist they always believed in anyway.

    *sigh*

  33. Hello Eugene,

    Yes, I was wondering why life was doing it in the first place. But as far as I understand Eigenstate, “it” meaning nature presents an even broader context, which I cannot dispute. However, what I feel must be disputed is the point of view which in principle does not distinguish life from inanimate matter.

    As I said, I am happy to retract.

    My real issue with Eigenstate is the anthropocentric delusion that matter “computes” itself, and the bastardization of information by suggesting that everything “contains it”.

    These are very alluring and pervasive visions, ones which lead to great utility, but they are false and those who hold them should have the discipline to not conflate them with reality.

  34. UB,

    Many Thanks for this clarification. I agree. Yet again I found proof for myself that there is no such thing as pure science per se. It always requires some sort of philosophical wrapper. What I don’t like about materialistic world views is their unwillingness to recognise this. IMHO, theistic thought is much more transparent (and intellectually honest).

  35. Eocene wrote:

    It’s sort of like asking a member of Christendom to explain the “Trinity”. The answer usually is – “It’s a Mystery”.

    The concept of The Trinity comes naturally from a plain reading of scripture. It’s not like Jesus is Michael the Archangel or something.

    If you insist on taking classless, snarky pot shots at Christian theology, I have no lack of Watchtower ammo to return fire with. You should decide now if that’s what you want going on on this board — you mocking Catholicism, and me mocking the Watchtower.

    Your call.

  36. Thanks, paulmc. My question was perhaps a bit vague, so let me clarify. By “programming” I am referring to the algorithms necessary to build and maintain the organism. In a very broad sense (applying it to us), what makes us human, as opposed to a blob of homogeneous cells?

    There are many thousands (likely millions) of functional instructions that must be executed to create and maintain a human. We know this information must exist, and that it must exist in DNA (with perhaps some existing in the mother’s egg), because that is all we start with. What causes the egg to begin to divide? What causes the cells to differentiate in the right location at the right time? And on and on. Just looking in the mirror we can see myriad pieces of informational instructions that have been carried out at some point in development. Certain cells must differentiate to form cartilage, and then they must be organized to continue in this direction until point x, then over to point y, etc., until the cartilage for a nose is formed in the right place and in the right shape. Same for the ears. Cells have to be programmed to become the various parts of teeth, at exactly the right place and time, and then as the child grows, to be replaced by a new set of larger teeth at just the right location. And on and on.

    There are some biological processes that take place based on the basic chemical and physical interactions between different proteins. However, many ongoing biological processes, and virtually all of the construction processes, do not occur by simple virtue of chemical and physical interactions. Rather, they must be controlled to produce a certain result. That means a plan, initiation signals, instructions, feedback, control loops — programming.

    So far, we have identified some code that codes for specific proteins. We have also identified some DNA sequences that aid in protein expression (introns, LINEs and the like). We haven’t even scratched the surface in understanding the rest of DNA or understanding the algorithms involved in going from a single fertilized egg to an adult human. We know these processes take place. We know they require specified information, signaling, feedback and control algorithms. And we know that much, or most, of it must be contained somewhere in DNA.

  37. m.i.,

    The concept of The Trinity comes naturally from a plain reading of scripture. It’s not like Jesus is Michael the Archangel or something.

    While I couldn’t disagree more, this is why my better judgment tells me to stay away from scriptural debates on the internet. I haven’t always listened to it. I will now.

    I’m here for the science. If someone else makes a theological statement I disagree with, I really need to bite my tongue. (It would be nice if some would be a little more considerate and refer to their specific religion rather than make blanket statements like ‘Christians believe this…’ But more likely I’ll just have to ignore it. Oh well.)

    In an internet forum every house is a glass house. I’ve chucked my share of stones. I don’t retract but I repent.

  38. material.infantacy

    Hi Scott, indeed we will not agree on the Trinity, and I certainly don’t expect that the debate will be resolved here. I don’t necessarily mind doctrinal disagreements and discussions on occasion, and they’re certain to come up on UD, as those of many faiths and denominations find something here to agree on that doesn’t otherwise compromise strongly held beliefs.

    As I said before, I have often found your comments entertaining and insightful. And our brief debate on the Christian Darwinism thread about the Endor passage was at least amiable.

    I understand that when folks here refer to Christians they generally are referencing Catholics and Protestants. It’s not possible to be inclusive of all groups who self-label as Christian. However I would like you to note Eocene’s use of the term “Christendom” as pejorative of Catholics and Protestants. I’m well aware of the WBTS’ use of that term to denote all the fallen outside of your church, so I think it’s fair to presume that we disagree on what it means to be Christian. So IMO we shouldn’t dissemble all-inclusiveness of the term. Neither one of us believes the other fits the definition of Christian, unless I’m mistaken about your beliefs.

    Things often appear just as they are, and it appears that I have little patience for Eocene’s rudeness toward non-WS Christianity. I can’t promise to bite my own tongue if he continues to ridicule it. I have plenty of fire in my belly for what I regard as doctrinal perversion; but I think we have already agreed, at least between the two of us, that this isn’t an appropriate place for such.

    All that said, I too appreciate the general perspective of UD as focused on both science and evolution, and I find that in this environment I am a student, taking advantage of those better spoken and educated. I much prefer that to incessant theological disputes, as there is no lack of internet destinations suitable for those who cannot resist.

    I will make no solemn vow regarding my own behavior, and few apologies. I make no claims to any righteousness that’s not imputed to me externally.

    Your level-headed and pragmatic approach to this is appreciated, and stands in contrast to the bigotry displayed by Eocene — however it appears that both he and I would have plenty to say if the gloves were taken off. In the interest of deemphasizing our differences for the sake of others on this board, any perceived past offenses are, for my part, forgiven.

    I hope going forward that self-control can be exercised regarding unresolvable doctrinal disputes; otherwise I foresee a lot of back and forth about things which will likely not interest many others, and will ultimately detract from an otherwise singular environment here at UD.

    Best,
    m.i.

  39. m.i.

    I understand that when folks here refer to Christians they generally are referencing Catholics and Protestants. It’s not possible to be inclusive of all groups who self-label as Christian.

    You’re right. It’s not practical or realistic to suggest that they do otherwise.

    I understand why someone shoots off. I’ve done it too. I’m as guilty as anybody. It’s the same problem that affects the way drivers treat each other in traffic – sometimes we forget those are real people out there. It’s easy to get on a soapbox, and it’s all fun until someone says “Ouch!” and then it’s embarrassing.

    I’ve addressed subjects in this forum in a manner that I would never, ever, do face to face. I don’t mind a little more bluntness when it comes to the science aspect. It’s the whole point, and it’s less personal.

    When it comes to religion I need to hold myself to a much higher standard. There’s no such thing as “winning” such a debate. It’s never happened in my whole life. The best I can hope for is to drag my most cherished beliefs through the mud. So whatever I’ve said on more than one occasion, I apologize for undue tactlessness and pomposity.

  40. material.infantacy

    Scott, agreed, especially regarding our tendency to dehumanize others behind the text — it’s not the same to look into someone’s face and say out loud the things we type here. I’m guilty of this as well. Thanks. mi.

  41. Hi Eric. I think I did understand your line of thinking there. To claim this:

    However, many ongoing biological processes, and virtually all of the construction processes, do not occur by simple virtue of chemical and physical interactions. Rather, they must be controlled to produce a certain result. That means a plan, initiation signals, instructions, feedback, control loops — programming.

    is to assume that everything about an organism is highly deterministic. That is far from evident.

    A lot of ‘input’ comes from the environment. I am not a developmental biologist and cannot do the topic justice, but one of the main criticisms of the neoDarwinian framework came from developmental biologists who argued that genetics did little to answer these types of questions that you are asking – that in fact organisms are much more integrated systems. While the past couple decades certainly haven’t borne out everything that these developmental biologists claimed, most would remain sceptical of excessively deterministic genetics. What you are proposing is an extremely high level of genetic determinism.

    Just as an example of what I mean, Brian Goodwin’s work modelling spontaneous organisation in Acetabularia is a caution against always assuming such determinism. There’s an extract from a chapter of Goodwin’s work here. Again, I am not a developmental biologist; someone else could do a much better job of outlining this. Also, Goodwin was far from right about everything, and I am not holding him up as the gold standard for developmental biology.

    So to be clear, don’t get me wrong – overall, I agree there will be much more to learn about the genome, and much of what we learn will probably be surprising, weird, counterintuitive. But I don’t think we’ll find millions of deterministic, but currently unknown functional instructions.

    As far as ID goes, I find this line of reasoning a little unusual. While I know it is meant to go unsaid here, in practice I’m sure we can agree that at least many of you ID guys personally believe that the designer was likely the Christian God, even if this is not a tenet of ID. So, why should God work in just the same way that we do? A minimal ‘program’ that works in concert with the environment has a sort of beautiful, simplistic elegance that I wouldn’t write off, were I religious. Certainly not for the sake of a computer programming analogy.

  42. @Eugene S

    Excellent. Thanks. I can even see grounds for doubting if there is any semantic cargo in this question. This is as much as materialistic thought can get. Fair enough. Anyhow, we need an oracle to learn why. Who might that oracle be? I think it is the One who designed this Big Quantum Computer of nature.

    Why would you think that? It’s transcendentally self-defeating, that stance. Some putative oracle says “I am god, and I created you and your entire universe”. How would one test that? Why would one suppose that supposed oracle was an actual oracle in the first place? You can’t get outside our system to test that claim. At all.

    Ever.

    So you are in the same epistemic position as I am — non-knowledge, and no path to knowledge. You can say “I think it’s this”, but there’s nothing but desire for such a belief to recommend it. I certainly understand such desires, but desire doesn’t help you epistemically. My reference to an oracle was a winking one; that doesn’t help, and is part of the problems we have to deal with in reasoning about such questions.

    Anyway, this is just a nuts-and-bolts example of materialist atheism demonstrating where it declines the temptation to be religious, to concoct stories where its knowledge ends. We are not physically (!) able to jump outside our physical system, so we cannot get ANY feedback on the question of the provenance of the system IN THAT SYSTEM’S REFERENCE FRAME. So, aware of that hard limitation through reasoning about it, I acknowledge it. It’s not a religiously satisfying answer. It doesn’t answer the big, outer metaphysical question. It can’t, so it does not.

    That’s a feature, not a bug, as they say in the software biz.

  43. @Upright Biped

    My real issue with Eigenstate is the anthropocentric delusion that matter “computes” itself, and the bastardization of information by suggesting that everything “contains it”.

    Oh, I’ve no problem with pointing out the limits of the analogy — and it is an analogy. The universe is not a computer. That analogy breaks down at some point, as all analogies do. It is useful conceptually, but usually I am on the other side, reminding others of the limits of the analogy, as many are wont to take it where the mapping breaks down — “Hah! Gotcha! If it’s a computer then who programmed the software then? Hmmmm???”

    Information is analogous, too. There’s pedagogical value there, but hey, this blog is a shrine to the dangers of analogical thinking about information (including, notably, you specifically, Upright Biped, if I have my names right; you were the one who recently posted a longish reply to the (now departed) Dr. Liddle on information-related matters).

    The isomorphism with computing concepts, where it applies, is sound (decoherence, for example, mapping to the process of recomputation/re-rendering of a model which is updating in real time in a computer). But the map is not the territory.

    These are very alluring and pervasive visions, ones which lead to great utility, but they are false and those who hold them should have the discipline to not conflate them with reality.

    All over that. Back to the original idea that sparked this sub-thread though, it’s difficult to locate a better set of semantic grips on the concept of “computing fitness” in nature than to invoke computing terminology. It’s law-based (algorithmic), iterative (cycle-based), and resource intensive, with all of that having to run as a smooth process (computational). In evolutionary algorithms with actual computers, it’s exceedingly difficult to get sufficient computing resources deployed to achieve anything more than superficial fitness tests done at scale without having to wait six months for any results.
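    A toy evolutionary algorithm makes the point about fitness evaluation concrete. The target string, population size, and mutation rate below are arbitrary choices for illustration; the fitness function is where a realistic version would spend nearly all of its computing time:

```python
import random

TARGET = "FITNESS"  # hypothetical goal string, purely illustrative
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Count matching positions against the target. In serious evolutionary
    algorithms, this evaluation step dominates the computational cost."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Copy a candidate, randomising each position with some probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

random.seed(0)  # deterministic run, for reproducibility
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
initial_best = max(fitness(c) for c in population)

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:25]  # elitist selection: the fitter half survives
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

print("best after", generation + 1, "generations:",
      max(population, key=fitness))
```

    Even in this trivial case, each generation requires a full pass of fitness evaluations; scale the evaluation up to anything physically realistic and the resource problem becomes obvious.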

    This drives an appreciation for the lavish “computing resources” (analogically speaking) of nature, which not only support real time fitness resolution across all biological entities, but also the resolution of every other interaction in parallel with that. If you have ever done any goofing off with 3D rendering programs, any attempt to “truly model and render” even a small, simple scene from real life prompts a similar kind of appreciation: the “polygon count” and “texture shaders” of reality are stupendously large and complex.

    I’m curious, though. In my experience, the ditch that people fall into with computing analogies for nature is a creationist ditch: that such analogies somehow establish some kind of support for an “Intelligent Developer” or something. I don’t think that’s your beef with the analogy. What do you see as the ditch on the other side of the nature-as-computer analogy?

    If you have ever done any goofing off with 3D rendering programs, any attempt to “truly model and render” even a small, simple scene from real life prompts a similar kind of appreciation: the “polygon count” and “texture shaders” of reality are stupendously large and complex.

    Doesn’t compare with the problem of protein folding, which chemistry can do in a thousandth of a second.

    One of many reasons why the computing metaphor cannot be used to calculate the difficulty or improbability of phenomena in biochemistry.

  45. eigenstate, if you are able to justify your conclusions without going “outside our physical system” then why, without equivocation, do you suggest that for others to justify their conclusions they would need to go “outside our physical system”?

    It appears that you have used whatever you’ve seen as useful within the system to justify your position, so why would others not be afforded that same utility? Are you operating under the assumption that there is nothing within the system that is of any use on the matter? If that is so, then what did you use?

  46. You’ll have to keep in mind that my argument has nothing to do with a computer analogy. In any case, the question you ask of me is one I have already answered. It is posted in the same thread you referenced regarding Dr Liddle. You simply have failed to respond to it.

  47. The simple fact is that I only have time to look at this site once in a while, and the new thread structure and the quantity of new threads means that old threads disappear very quickly.

    I have no idea what the old question was. I am responding to something said on this thread posted just above my response.

    Due to the unexpected formatting — a post consisting entirely of nested quotes — I don’t even know who made the statement, nor do I care who made it. I am just responding to the implication that computer resources are a relevant metaphor.

  48. @Upright Biped,

    eigenstate, if you are able to justify your conclusions without going “outside our physical system” then why, without equivocation, do you suggest that for others to justify their conclusions they would need to go “outside our physical system”?

    The conclusion I have is what I began with — “I don’t know”. I don’t have to be outside the universe to conclude that. It’s the epistemic starting point for knowledge. If another says “I don’t know”, I don’t need any justification for that position, any more than my position needs it. If someone else says “God did it”, that is to claim some knowledge. So I wonder how one would “jump outside of the system” that we are enclosed in to establish that. Or alternatively, if one accepts a claim from some other source inside the system that they have (or it has) knowledge grounded in outside-the-system experience, I wonder how they can establish the bona fides of such a frame-jumping claim.

    It appears that you have used whatever you’ve seen as useful within the system to justify your position, so why would others not be afforded that same utility? Are you operating under assumption that there are nothing within the system that is of any use on the matter? If that is so, the what did you use?

    There may well be a lot of resources inside the system that COULD BE of use in building knowledge about facts from outside the system. But without being able to transcend the system to investigate or test it, we can’t make use of it. It may be here, but we can’t use it to discern, as we are “stuck in the system”. String theory, which I was just discussing with you (? or was it PaV?), is an example which may have such in-system resources that could point to “out-of-system” explanations – perhaps there was a “String Bing” that predated the “Big Bang”, as one physicist conjectured recently – but it can’t rise to anything more than that: conjecture.

    We can’t test it, because we are physically contained by, and constrained within the system. So perhaps some string theory notion has implications that would be knowledge if we could test it regarding “out-of-system” facts. But it can’t be anymore than a “perhaps”, given our limitations.

    If you think dangling conjecture is “of any use”, then I guess on your terms, there ARE items of use. But as knowledge, no, we have much fodder for conjecture, some more scientific in its origins, some less, but the key steps in knowledge building — testing, falsification, integration of dispositive evidence, are not available to us, even in principle, so far as we can tell.

  49. “I don’t know”

    So to put your position into play without contradiction, “dangling conjecture” that there is ‘something more than matter’ holds the same usefulness in obtaining knowledge as “dangling conjecture” that ‘matter is all there is’.

    Likewise, in terms of existence, someone saying that “God did it” has equal material footing as anyone saying anything else. This includes, of course, every convinced materialist alive.

    I think this is an interesting admission on your part, at least to the extent that no one is fooled by it. In any case, you’ll be happy to know that the argument I make provides no claims about what may or may not exist outside our system. In fact, it is based entirely upon what is inside our system, particularly material objects and their observable dynamics.

  50. It’s okay, petrushka; eigenstate accidentally posted his comment in the wrong place. I was responding to him.

  51. @Upright Biped,

    So to put your position into play without contradiction, “dangling conjecture” that there is ‘something more than matter’ holds the same usefulness in obtaining knowledge as “dangling conjecture” that ‘matter is all there is’.

    We don’t know that “matter is all there is”, and can’t know that. But we can say “all we know is matter”, which isn’t the precise way I’d phrase that on its own, but works for the purposes of turning “matter is all there is” around.

    For example, in math, the Goldbach Conjecture remains an open question (equivalent to being open as to whether “matter is all there is”), but even so, vast sets of empirical tests have checked the conjecture for every even n < 1×10^18 (huge values of n), and all of them support it. This matches the turnaround: “all we know is matter”. All the evidence we have, and there is a LOT of it, conforms to Goldbach’s hunch.

    Contrast “matter is all there is” with “all we know is matter”: the latter is the position that rests on a positive epistemology, with no frame jumping or anything like that needed.
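    The kind of empirical checking described above is easy to illustrate. Below is a toy sketch, in Python, of verifying Goldbach’s conjecture over a small range; the function names are my own, and the record-setting verifications cited in the comment used far more sophisticated sieving than this brute-force approach.

```python
# Toy check of Goldbach's conjecture: every even n > 2 is the sum of two primes.
# A sketch only -- the large-scale verifications mentioned above used far more
# efficient sieving methods.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number in a small range has a decomposition: consistent with,
# though of course not a proof of, the conjecture.
assert all(goldbach_pair(n) is not None for n in range(4, 2000, 2))
```

    Finding no counterexample in any tested range supports the conjecture without proving it, which is exactly the epistemic point being made: evidence can conform to a claim without settling it.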

    Likewise, in terms of existence, someone saying that “God did it” has equal material footing as anyone saying anything else. This includes, of course, every convinced materialist alive.

    Yes. And would that everyone could have the clarity to acknowledge that. As a materialist, it’s an epistemic frustration, a question I’d very much like answered, but can’t have answered. But I’ve got no eternal salvation riding on any answer that gets diminished by admitting I don’t know. I don’t have any “oracles” from “outside the system” that I have to re-evaluate.

    So we’re all on equal footing, but the ramifications for me aren’t even minor in terms of my belief in contrast to a theist who “knows God created the universe”.

    I think this is an interesting admission on your part, at least to the extent that no one is fooled by it. In any case, you’ll be happy to know that the argument I make provides no claims about what may or may not exist outside our system. In fact, it is based entirely upon what is inside our system, particularly material objects and their observable dynamics.

    Fooled by what? I have no problem with the epistemic parity everyone shares on this.

    On your arguments that are entirely system-internal, and predicated on material objects and dynamics, that’s great. I salute that.

  52. “We don’t know that “matter is all there is”, and can’t know that.”

    Despite your attempt to couch the issue, I once again applaud you for the tacit admission that the virtual whole of biological and cosmological science is operating under a false assumption which it cannot support.

    And trying to rephrase it to say that “matter is all we know” simply changes one assumption for another, and therefore does nothing but highlight the irrational (non-scientific) conduct among scientists who make these claims in the name of science.

    “So we’re all on equal footing, but the ramifications for me aren’t even minor in terms of my belief in contrast to a theist who “knows God created the universe”

    Of course this only holds true if you are able to think your conclusions into being, otherwise, you live and die under the same realities as everyone else. In any case, I am happy to remind you that ID doesn’t trade in questions of “eternal salvation”.

    “On your arguments that are entirely system-internal, and predicated on material objects and dynamics, that’s great. I salute that.”

    Then you have, at the very least, negated your argument that one must be outside our system in order to “make use” of evidence.

  53. Thanks, paulmc.

    I appreciate your thoughts on “determinism” and I don’t mean to suggest that there is no environmental influence. However, I find the “spontaneous organization” ideas somewhat lacking, because they ultimately boil down to just a fancy label for a process that is governed by chance, necessity, or some combination of the two. Environmental factors may have some influence at the periphery, but they are clearly not the primary drivers of organismal development or structure.

    I’m more interested in the engineering aspect. What is required to build, for example, an eye, or teeth or ears, or nose? We know that this construction is contingent and is carefully controlled. For example, there is nothing about cartilage itself that causes it to self-assemble into an ear as opposed to a nose. And certainly nothing that would cause it to self-assemble in the precise location and shape. (Indeed, when the instruction set gets messed up, we occasionally see failure to develop properly.)

    Because these things are contingent, they cannot — by definition — be caused by some kind of physical or chemical law. And because they occur in a regular, controlled and organized way, they cannot be the result of pure chance.

    Thus far, when we have looked into the cell, we have found machines and programming and instructions, and signals, and feedbacks, and switches, and on and on. Things don’t “just happen” in the cell based on vague notions. Take the bacterial flagellum. Scientists have discovered the genes that code for the various proteins. But having the various proteins in the cell does not mean that we have a flagellum. Indeed, the proteins must be organized in a highly specific sequence and location to properly produce a flagellum. This doesn’t happen by simple chemical and physical attractions (although those attractions are certainly utilized in the construction and ongoing function). This doesn’t happen by chance. Rather, there is a carefully orchestrated — programmed — process that takes place to produce the flagellum.

    Similarly, there are thousands upon thousands of other molecular machines, and many thousands of construction characteristics that don’t happen simply by necessity or chance. We know these machines exist; we know the construction takes place; we know there must be information stored, accessed and utilized to build these machines and structures. That information doesn’t come from some vague environmental influence or the laws of chemistry and physics. Rather, that information must reside somewhere in the cell. We have only scratched the surface at uncovering that information, and it would be a great marriage of ignorance and hubris to think that the information doesn’t exist, based solely on the fact that we haven’t found it yet.

    ———-

    On your last point, I’m not focused on the idea that the designer(s) of life is the Christian God. Even so, to respond to your thought, I do agree that the organism has to live within an environment and must be able, within degrees, to respond to that environment. But I’m not too impressed with vague, unspecified notions about minimalist programs and self-organization. Everything we have discovered thus far in cellular processes shows an incredible degree of carefully orchestrated and controlled processes. There is every reason, from a physical and engineering standpoint, to believe that we will find the same as we continue to elucidate other machines and processes within life.

  54. Hi Eric, I appreciate your response and I will address it in more detail, but in the meantime, have a look at this recent TED talk on the boundaries between life and non-life and the curious properties that emerge from very simple things.

    I’m not too impressed with vague, unspecified notions about minimalist programs and self-organization.

    I would urge you to read a bit of Goodwin’s work on self-organisation, as I linked to before. It is surprising and unintuitive what, for example, a simple calcium gradient does in terms of generating the form of Acetabularia. I don’t find the concept as vague or unspecified as you seem to – there are examples of how this works (also, to a certain extent, in Martin Hanczyc’s talk linked to above).

    Everything we have discovered thus far in cellular processes shows an incredible degree of carefully orchestrated and controlled processes.

    Gradients of hox gene expression, in turn regulating the cascading expression of other genes during ontogeny, do not smack of engineering in the sense of strict control occurring at every step of the process. It is instead a rather minimal setting-up of conditions from which everything else unfolds.

  55. Information is analogous, too. There’s pedagogical value there, but hey, this blog is a shrine to the dangers of analogical thinking about information (including, notably, you specifically, Upright Biped).

    By the way eigenstate, information is an observable reality with physical entailments. If you’d like each of us to lay out our definitions, and see which can be substantiated from a physical perspective, I am more than happy to accept that challenge.

  56. Let me suggest that you at least make a small effort to learn about origin of life research, AKA “abiogenesis.” A good place to start would be a “Short Outline of the Origin of Life.”
    http://stonesnbones.blogspot.c.....tline.html

    We do know that both peptides and catalytic short RNAs formed spontaneously, that these bind to various minerals which stabilized them, and that they were encapsulated by equally naturally forming phospholipid vesicles. We follow two lines of inquiry: geochemical studies of sedimentary rock of about the same age as the origin of life, and experimental studies of the fundamental chemistry of life as we find it today.

  57. paulmc, I read the chapter of Goodwin’s work you linked to. Thanks for the link. (Haven’t had time to listen to the TED talk, but perhaps after the holidays.)

    First of all, it looks like Goodwin was doing some interesting science and I hope he is still at it or that someone has picked up the baton, because there are certainly worthwhile things to research in this area.

    As it relates to my point, however, I would say Goodwin’s research is the exception that proves the rule. Just a couple things that jumped out at me from reading about his research:

    - The takeaway headline from Goodwin’s research seems to be, as expressed in the summary title “there is more to evolutionary biology than genes.” Indeed. That is part of my point. We have discovered that a portion of DNA consists of genes that code for proteins. Based on that, many staunch materialists have proclaimed that the rest is junk. Little by little, other functions are being discovered, yet the junk DNA myth persists. I agree, there is definitely more to biology than genes.

    - Goodwin’s research relates to a single-celled organism that has a repeating-patterned, circular shape. That is precisely the kind of shape that natural chemical affinities are good at making. That a particular single-celled organism would make use of natural chemical/physical affinities between molecules in making its shape is interesting, but is hardly relevant to the vast majority of other organisms that do not have such simple, repeating shapes.

    - Even with the simple shape that Acetabularia acetabulum enjoys, it is not clear from the article that this is really an example of “self-organization”. Specifically, the organism still has a suite of proteins that are necessary for construction and, apparently, the organism exercises some kind of regulation over the amount and speed of construction, as well as its ultimate size (why does it stop growing when it does, for example?).

    - Even if it were the case (which is not at all clear from the article) that A. acetabulum’s macrostructure is 100% determined by the laws of physics and chemistry, the organism’s structure does not simply arise on its own, but is made possible by numerous functional steps (each of which is governed by its own process), including “the fusion of two tiny motile, flagellated cells”, which then “secrete a sticky substance and attach to rocks” and then begin, based assuredly on some internal signal reception, the process of changing the cell wall and producing the chemical constituents that permit the formation of the “little tip that grows up toward the light”. There is no evidence that these processes just happen by dint of “self-organization.”

    - Finally, whatever may be the case with the shape of the cell wall of a single-celled organism, we need to recognize that it apparently does not apply to many, or perhaps most, of the macro structures in ourselves (example below).

    As a general comment, we need to be careful to distinguish between necessary and sufficient conditions. Certainly the laws of chemistry and physics influence biology, because biology is built upon and under the influence of those laws. Thus, it is always correct to observe that chemistry and physics have an influence on biological development and structure. However, that does not mean that biology (in any particular case) is simply a natural outgrowth of natural laws.

    One way to think about whether something is “caused” by natural law, as opposed to being simply “influenced by” or “consistent with” is to ask whether the thing in question must necessarily occur. Simple example: is there something about cartilage cells that cause them to physically arrange in the form of a nose? The answer is clearly no, because we know of other arrangements that exist. Is there anything about chemistry and physics that causes your front tooth to take the shape that it does? No. Because we can look and see that other shapes are possible. This does not mean that chemistry and physics aren’t working. They are; but they didn’t cause the particular structure to come about. Chance didn’t either, because we see a regular, coordinated outcome across millions of examples. Thus, there must be, by necessity, some functional specified information that harnesses the laws of chemistry and physics to make this particular outcome come to pass in physical reality. There are thousands upon thousands of such machines and systems with similar information requirements. That information must reside somewhere.

  58. Okay well. Looks like we won’t be discussing any evidence today. I suppose asking someone to substantiate their remarks is off the table. I guess I should join the deniers and talk about socio-politics and religious angst instead.

    ;)

  59. I am beginning to note a certain pattern regarding the semiotic argument.

  60. The takeaway headline from Goodwin’s research seems to be, as expressed in the summary title “there is more to evolutionary biology than genes.” Indeed. That is part of my point.

    That’s quite a stretch. Yours and Goodwin’s points are almost opposites: you have claimed several times that there is a much greater degree of genetic determinism than is currently known to exist (indeed, the types of genetic controls you expect lie undiscovered in the genome would be described as a gene class were they to exist); Goodwin, on the other hand, de-emphasises the role of genetics in favour of a structuralist approach, as he believes the genome has a far less prominent role in the generation of form than neoDarwinists do. See his book chapter discussion with Dawkins – web text version here.

    We have discovered that a portion of DNA consists of genes that code for proteins. Based on that, many staunch materialists have proclaimed that the rest is junk. Little by little, other functions are being discovered, yet the junk DNA myth persists.

    Are we back to this again? Junk DNA is a scientific inference, not some empty or ignorant assertion. If you want to claim the death of junk DNA from newly discovered classes of RNA genes, new miRNAs, or other functions, then you need to provide the numbers for your case (an argument I’ve made on this blog and elsewhere in response to Wells). While there has been lots of talk disparaging the concept of junk DNA here, no one has yet outlined a legitimate basis for referring to junk DNA as a ‘myth’. Nor does the inference of junk DNA have any logical connection with a materialist viewpoint. I have outlined parts of the case for junk DNA so many times above, and in the previous thread that spawned this one, that it beggars belief that the same ‘argument’ is still getting recycled when those points have still not been answered.

    Goodwin’s research relates to a single-celled organism that has a repeating-patterned, circular shape.

    Goodwin says this: “In Acetabularia the spherical zygote has to break out of its simplicity into ordered complexity of form. The technical term to describe the transition from a state of higher symmetry (lower complexity) to one of lower symmetry (higher complexity) is bifurcation.”

    So it is not a matter of it being a simple, circular shape. The simple interactions between calcium concentration and the cytoskeleton result in complex breaking of symmetries – the whorls and end cap.

    Simple example: is there something about cartilage cells that cause them to physically arrange in the form of a nose? The answer is clearly no, because we know of other arrangements that exist. Is there anything about chemistry and physics that causes your front tooth to take the shape that it does? No. Because we can look and see that other shapes are possible.

    I completely agree that there are genetic components that influence cartilage and bone morphology, tooth morphology, and all manner of other phenotypes. These genetic components contribute to the differentiation between and within species in phenotypes. However, there is a giant gulf between that and assuming such differentiation is the result of “an incredible degree of carefully orchestrated and controlled processes” – with such tight and specified control that you seem to believe they make junk DNA a myth.

  61. In case any of you guys are interested, I am blogging fuller discussions about junk DNA elsewhere.

  62. Me too:

    Yeah, yeah, enough with “junk DNA” already. Relax, this is a different angle.

    In Genetic/ Evolutionary Algorithms and My Front-Loaded Evolution, I said:

    So with my idea of front-loaded evolution we would have the initial conditions, the required resources, the specified result (ie what you are trying to accomplish) and then the algorithms to make it all happen. (bold added)

    Just as Richard Dawkins’ “weasel” program took scrambled letters and, with the whole alphabet as its resource, was able to create a pre-specified target sentence, so front-loaded evolution would be able to take truly non-functioning DNA sequences and splice them together to meet some pre-specified function.
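    For readers unfamiliar with it, the “weasel” demo referenced above can be sketched in a few lines of Python: mutate copies of a string and keep the copy closest to a fixed target. This is a common reconstruction, not Dawkins’ original code, and the parameters (population size, mutation rate, generation cap) are illustrative guesses.

```python
# A minimal sketch of Dawkins' "weasel" cumulative-selection demo.
# Not Dawkins' original program; parameters are illustrative only.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Copy the parent, replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(seed=0):
    """Run cumulative selection; return the generation that hits the target."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(1, 10_001):
        offspring = [mutate(parent) for _ in range(100)]
        parent = max(offspring, key=score)  # keep the best copy each round
        if parent == TARGET:
            return generation
    return None  # did not converge within the generation cap
```

    With these settings the target is typically reached in a few hundred generations at most, whereas assembling the whole 28-character sentence in a single random draw has odds of about 1 in 27^28. That contrast, cumulative selection from stock material versus single-step chance, is the point of the analogy to “junk” as a resource pool.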

    That is why front-loaded evolution does NOT need to have all the alleles present as evotards so wrongly claim. All front-loaded evolution requires is that the future design be obtainable through the present design.

    In this case the alleged “junk” is just stock to select from, ie “the required resources”.

  63. Bornagain,

    Could you point me to Jonathan Wells’ YouTube video where he shows that once a cell membrane is punctured, the cell dies regardless of all the other conditions favourable to life (basically a counter-abiogenesis demonstration)? This was called something like “10 minutes proof of ID”, if I remember rightly.

    Thanks.
