
The Altenberg Sixteen

HT to Larry Moran’s Sandwalk for the link to this fascinating long piece by journalist Suzan Mazur about an upcoming (July 2008) evolution meeting at the Konrad Lorenz Institute in Altenberg, Austria.

“The Altenberg 16” is Mazur’s playful term for the sixteen biologists and theoreticians invited by organizer Massimo Pigliucci. Most are on record as being, to greater or lesser degrees, dissatisfied with the current textbook theory of evolution. Surveying the group, I note that I’ve interacted with several of the people over the years, as have other ID theorists and assorted Bad Guys. This should be an exciting meeting, with the papers to be published in 2009 by MIT Press.

Mazur’s article is worth your attention. Evolutionary theory is in — and has been, for a long time — a period of great upheaval. Much of this upheaval is masked by the noise and smoke of the ID debate, and by the steady public rhetoric of major science organizations, concerned to tamp down ID-connected dissent. You know the lines: “Darwinian evolutionary theory is the foundation of biology,” et cetera.

But the upheaval is there, and increasing in amplitude and frequency.

[Note to Kevin Padian: journalists don't like it when you do this to them. Mazur writes:

Curiously, when I called Kevin Padian, president of NCSE's board of directors and a witness at the 2005 Kitzmiller v. Dover trial on Intelligent Design, to ask him about the evolution debate among scientists — he said, "On some things there is not a debate." He then hung up.

That hanging-up part...not so wise. If you're going to say there's no debate, explain why.]


212 Responses to The Altenberg Sixteen

  1. I love the irony of the comment made by Mazur that she met Jerry Fodor on Darwin’s birthday at Lincoln Center. She completely ignores that Darwin and Lincoln were born on the same day 199 years ago. Yet she is in a citadel named after Lincoln and ignores it or doesn’t even realize it.

  2. Mazur:

    …through the years most biologists outside of evolutionary biology have mistakenly believed that evolution is natural selection.

    A wave of scientists now questions natural selection’s relevance, though few will publicly admit it. And with such a fundamental struggle underway, the hurling of slurs such as “looney Marxist hangover”, “philosopher” (a scientist who can’t get grants anymore), “crackpot”, is hardly surprising.

    I think I need to make an appointment with an optometrist or ophthalmologist.

  3. No one from Cornell was invited. Allen MacNeill and Will Provine are not included nor interviewed by Mazur.

    It is hard to separate out all the various ideas in this article. But, as with MacNeill’s 47 sources of variation, it might be worthwhile to look at each to see what they really mean.

    I am getting the impression that natural selection is still a very viable construct but that it isn’t everything. This is exactly what ID is saying. There doesn’t seem to be any discernment over the separation of variation and genetics and how the current theory would be modified.

    Behe made this distinction and has pointed to the variation side of Darwinian ideas as the real Achilles heel of the modern synthesis. The people in this article would never admit an intelligent input, but I am not getting any feeling as to what they actually believe. None of the 16 were interviewed except Pigliucci; most of the interviews were with people not invited. The papers have already been written, so I wonder if any of their ideas will be available before next year.

  4. Mazur:

    Pigliucci cites epigenetic inheritance as one of the mechanisms that Darwin knew nothing about….these kinds of phenomena are part of what’s loosely being called self-organization, in short a spontaneous organization of systems.

    Um, I thought this was touched upon years before by none other than the much maligned Dean Kenyon in his Biochemical Predestination monograph. I wonder if his name or his work will ever come up in these upcoming discussions.

    Snowflakes, a drop of water, a hurricane are all such spontaneously organized examples.

    I hope these scientists are kidding when nonchalantly throwing about such superfluous analogies. And I just wonder how they are going to bridge the connection between snowflakes that exhibit repeatable patterns with the highly specified functions of DNA.

    These systems grow more complex in form as a result of a process of attraction and repulsion.

    Right. Like the wind of a hurricane acting against snow-making forces and blowing through a garden to create a lovely and tasty Piña Colada.

  5. I found this passage by biologist Stuart Kauffman very interesting:

    “Well there’s 25,000 genes, so each could be on or off. So there’s 2 x 2 x 2 … 25,000 times. Well that’s 2 to the 25,000th. Right? Which is something like 10 to the 7,000th. Okay? There’s only 10 to the 80th particles in the whole universe. Are you stunned?”

    A couple of comments.

    First, it seems that someone is beginning to do the math, which is comforting.

    Second, Kauffman here is pointing to an aspect that is often overlooked, even in the ID debate, but which is, in my opinion, fundamental. It is the problem of how transcriptomes are “selected” in each single cell from the bulk of available genes.

    That’s very intriguing, because we really don’t have the slightest idea of how it happens.

    As Kauffman correctly points out, if we have 25,000 protein coding genes in the human genome (and that should not be very far from the truth), and if we consider two possible states for each gene (transcribed – not transcribed), in each cell we have a search space of 2^25,000 possible transcriptomes, which Kauffman rightly finds stunning.

    But the reality is much worse than that, even with the little knowledge we have of the process of transcription. Indeed, we must consider a couple of other things:

    a) Each gene has not only two possible states, but many more. Indeed, the “quantity” of gene transcription is of fundamental importance, and that is a continuous variable, rather than a binary one.

    b) Moreover, as is well known today, the “one gene – one protein” dogma is completely false. We know that one gene can, through various mechanisms (intron regulation, post-transcriptional regulation, and many others), really code for many different proteins (in theory, even thousands of them; in practice, certainly more than one). Indeed, darwinists, with their usual short-sightedness, seem to be very fond of this “one gene – many proteins” fact, apparently not understanding the terrible pitfalls which await them there.

    So, to sum up: we have a “minimal” transcriptome search space of 2^25,000, which, according to Kauffman (I have not checked), is equivalent to 10^7,000, which is quite a number, if we remember that 10^150 is “just” Dembski’s UPB. That number certainly has to be increased by many, many orders of magnitude (nobody can say how many) if we take into account the different “levels” of transcription of each single gene plus the different “types” of transcription of each single gene.

    So, a very simple question: how can each cell, in each specific moment of its individual life, choose the right transcriptome in that almost infinite search space?

    And be careful, that question is really two different, and equally problematic, questions:

    1) How did all that information pertaining to the right transcriptomes evolve?

    2) Where, and how is it written?

    Well, darwinists seem to ignore both those questions. Maybe they already have too many other unsolved questions to deal with (CSI, IC, and similar). But we here in the ID field, who, I hope, can sleep with greater tranquility and a more serene cognitive consciousness, what answers can we suggest?
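
    Kauffman’s arithmetic is easy to check; here is a minimal Python sketch using only the round figures from the quote above (the exact exponent comes out near 7,526, so his “10 to the 7,000th” is, if anything, conservative):

```python
import math

# Kauffman's toy model: 25,000 genes, each either "on" or "off".
genes = 25_000

# log10(2**25000) = 25000 * log10(2) -- avoids building the huge integer itself
exponent = genes * math.log10(2)
print(f"2^25,000 ~ 10^{exponent:.0f}")   # ~ 10^7526

# For scale: roughly 10^80 particles in the observable universe,
# and Dembski's universal probability bound is 10^150.
print(exponent > 150)   # True -- the exponent alone dwarfs the UPB's exponent
```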

  6. Jerry:

    It is hard to separate out all the various ideas in this article. But, as with MacNeill’s 47 sources of variation, it might be worthwhile to look at each to see what they really mean.

    I loved the discussions going on w/ MacNeill around the time when he published his 47 sources of variation. It was once he had published that statement that he acknowledged that the 47 sources, plus natural selection, did not provide an adequate explanation for the major adaptations that have occurred. I actually think that MacNeill would be an excellent asset at the meeting. I see him as an honest biologist, one who honestly sees that NDE doesn’t have it in the bag.

  7. gpuccio,

    There is another issue and that is cell type. Apparently the promoters of the various genes are different in the different cell types. At least that is what I thought I heard in one lecture of many on genetics that I just watched.

    So that is why cells in the bone marrow produce red blood cells but not hair, and hair cells produce hair but not red blood cells. The promoters in each are different. If this is true, then each cell’s DNA is not quite the same but has different promoters for the appropriate genes.

    If anyone has information on this, I would be curious because I always thought each cell is the same in terms of DNA.

    And if it is true, what determines these differences during gestation?

  8. Actually, I never admitted that the huge variety of mechanisms listed in the post at my blog “couldn’t provide an adequate explanation for the major adaptations that have occurred.” On the contrary, my point was that so far we know about nearly fifty different mechanisms that produce a nearly incalculable amount of genetic and phenotypic variation (as shown by Kauffman’s calculations). Hence, there is absolutely no necessity to postulate the existence of other sources of variation unless and until there is empirical evidence that such mechanisms exist. Proposing that some new source of variation is needed (when clearly it is not, if the calculations cited above are within 1,000 orders of magnitude) isn’t science; it’s pure wishful thinking.

    As to what could possibly winnow down the almost mind-numbing amount of genetic variation produced by the processes Kauffman cites, perhaps one might cite the same one proposed by Charles Darwin – natural selection? That is the whole point of the mechanism of natural selection as proposed by Darwin: variety (for which he didn’t propose a mechanism, but for which we now know about at least 47 major mechanisms), heredity (for which Darwin once again did not propose a mechanism, but which now comprises the fields of genetics, genomics, and proteomics, just to mention a few), and fecundity (the implications of which were first pointed out by Malthus, and which virtually no one disputes today), all of which together have the effect of producing unequal, non-random survival and reproduction; evolution, in a word.

  9. Yet another mechanism for producing variation (which no one has mentioned here yet) is the editing of mRNA transcripts by spliceosomes (also referred to as SNRPs, for small nuclear ribonucleoproteins). These objects, which are structurally related to ribosomes in some ways, cut out segments of primary RNA transcripts prior to translation in ribosomes in eukaryotic cells (prokaryotes apparently don’t have them). The activity of SNRPs can produce hundreds (and in some cases thousands) of different proteins from a single nucleotide sequence in DNA. SNRPs are themselves coded for by DNA, as are the proteins that make up part of their structure, which means that genes in eukaryotes massively modify the expression of other genes in eukaryotes. Therefore, the current estimate of the number of genes in the human genome (i.e. somewhere between 25,000 and 30,000) is really only an estimate of the number of different DNA sequences, not the number of different gene products produced by protein synthesis in the ribosomes. This number is at least 100 times larger, and may be thousands of times larger. Multiply that by the number of different cell types, and modify that product by the number of different environments that such cells can respond to, and the amount of “variation space” potentially available to eukaryotes is so massively large that the real problem is not “is there enough variation to produce all the things we see”, but rather “how is it that out of all that variation, we only see a relatively small subset?” Again, the best answer is natural selection, combined with random genetic drift (which allows for the “colonization” of parts of the “variation space” that selection alone effectively prohibits).
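
    The multiplication sketched above can be made concrete with rough, purely illustrative round numbers (the gene count is the estimate cited above; the per-gene product multiplier and the cell-type count are assumed figures, not measurements):

```python
# Rough, illustrative figures only -- nothing here is a measured value.
genes = 27_500            # midpoint of the 25,000-30,000 estimate
products_per_gene = 100   # "at least 100 times larger"
cell_types = 200          # a commonly cited round figure for human cell types

gene_products = genes * products_per_gene
print(f"{gene_products:,} distinct gene products")                       # 2,750,000
print(f"{gene_products * cell_types:,} product/cell-type combinations")  # 550,000,000
```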

    Also, for the curious, the reason I haven’t been posting of late (nor updating my blog) is that I have been recovering from a fairly serious medical condition, which has only now (after many months) allowed me to do more than meet the minimum responsibilities of my position as husband, father, and teacher at Cornell. I plan on posting about my experiences at my blog (http://evolutionlist.blogspot.com) within the next couple of weeks. The title will be “On Pain” – watch for it.

  10. jerry:

    the genome is the same in all the single cells of a multicellular organism, the only exception being the cells of the immune system, where specific rearrangements take place in the course of ontogenesis to generate the repertoire of antibodies and T cell receptors. Another possible exception could be random mutations in single cells or clones, and obviously neoplastic cells.

    For the rest, all cell types share the same genome. Promoters are just genes, or at any rate DNA segments, which, like the protein coding genes, can be activated or not according to cell type, differentiation and functional state. That’s exactly what I meant in my post. Nobody knows where the information is coded that allows billions of cells to differentiate into different tissues, cell types, and so on, and to respond to any kind of stimuli, selecting the appropriate transcriptome from the almost infinitely varied possibilities.

    What makes a cell what it is, and not another type of cell, is, as far as we know, mainly its transcriptome. The set of possible transcriptomes is the set of all possible combinations of all possible states of all the genes in the genome. It is absolutely huge. How cells apparently know, each differently from the others, which genes to transcribe, how and how much, in each single moment of their life, and in harmony with the general plan of the organism, is one of the greatest mysteries of biology.

    Transcriptomes are being studied, now, mainly through the technology of micro-arrays, which make it possible to test which genes are transcribed in a cell population at a given moment. The quantity of information being gathered through this technique is huge, but at present very poorly understood. We are just scratching the surface of the complexity of cell life. It’s only the beginning. And the more we know, the more the fairy tales of darwinism and reductionism will be exposed.

  11. Actually, it seems likely that the primary source of information determining what proteins (and therefore what traits) a cell will produce is the location of the cell, especially the cells it is touching and the cells that are nearby. Cells that touch each other exchange their contents (via gap junctions) and modify the expression of each other’s genes via signal transduction pathways that have their start in the cell membrane. Cells that are near each other send each other a myriad of chemical messages, which bind to cell surface receptors (and, in the case of lipid-soluble molecules, to cytoplasmic receptors), both of which have the capacity to modify the expression of genes inside the cells.

    Furthermore, the similarities and differences between the specific pathways currently identified for these processes are entirely consistent with the hypothesis that they have evolved via descent with modification from a set of common ancestral pathways which have their origin somewhere in the deep evolutionary past (i.e. probably about 1 billion years ago or more – that is, somewhere coincident with the origin of eukaryotic cells). Far from “exposing the fairy tales of the Darwinists”, the discovery and elucidation of these pathways has paved the way for the new evolutionary synthesis, soon to be celebrated at Altenberg.

    And the reason I’m not going is that I’m already committed to teaching yet another upper-level evolution seminar course at Cornell in July. The topic this year will be “Evolution and Ethics: Is Morality Natural?” and will feature readings from such authors as T. H. Huxley, David Sloan Wilson, and Edward O. Wilson, among others. Check my blog for more information as it becomes available.

  12. Allen_MacNeill:

    First of all, I am well aware of transcriptional and post-transcriptional variations, and I had explicitly mentioned them in my post.

    Moreover, I think you are grossly underestimating the search space of possibilities which has to be faced by the mechanisms of variation you cite. I have to remark again here that Dembski’s UPB of 10^150 is a level of complexity so high that it could never be exhaustively explored by random search even if the whole universe had done nothing else, in its whole existence, than using all its bits to calculate the possibilities. So much for your idea of variation!

    The complexity of even the smallest genome is well beyond that, by thousands of orders of magnitude. So, I am not impressed at all by your mechanisms of random variation, be they mutation, alternative splicing, duplication, or anything else, be they 50 or 500. They are completely powerless to accomplish the task that darwinists have given them.

  13. Allen_MacNeill (#11):

    “Actually, it seems likely that the primary source of information determining what proteins (and therefore what traits) a cell will produce is the location of the cell, especially the cells it is touching and the cells that are nearby.”

    That’s exactly the kind of fairy tales I was referring to. To think that such a complex and integrated process like cellular differentiation and organization may take place “only” as a mechanical response to outer stimuli, without being guided by complex information procedures “inside” the cell, is pure folly. Again, I affirm that we have not the slightest idea of where the code which stores the real important information is located. Protein coding genes are only the “effectors” of the program of life. The true program of life is elsewhere to be found.

  14. Here is the misconception that consistently misleads most IDers:

    “I have to remark again here that Dembski’s UPB of 10^150 is a level of complexity so high that it could never be exhaustively explored by random search even if the whole universe had done nothing else, in its whole existence, than using all its bits to calculate the possibilities.”

    The whole point to natural selection is that the search isn’t random. Natural selection is no more random than is gravity, a point that a small fraction of my students consistently misunderstand. As Daniel Dennett pointed out in Darwin’s Dangerous Idea, natural selection constrains the pathways that variation can take through “variation space”, winnowing down an almost unimaginable number of possible pathways to a surprisingly small number (which grows smaller in number, rather than larger, with time).

    So, like my students, repeat after me: natural selection is not random.

  15. For that matter, the variations that are produced by the various mechanisms listed at my blog aren’t random either. Yes, in many cases (but not all), the underlying combinatorial processes that produce the variations (such as Mendelian independent assortment, which is mechanism #32 in my list) are random for all intents and purposes. However, when one combines these processes, the results are anything but random.

    Think of it like shuffling and then dealing from a very large deck of cards. At first, there are a huge number of possible combinations. However, if you start selectively removing cards from the deck (which is essentially what natural selection does), the number of possible combinations steadily drops. Also, the remaining possible combinations are not a random sample of all of the combinations that were originally possible. On the contrary, as selection continues to winnow down the deck, the amount of variation available in the remaining combinations of cards steadily declines as well. And, this decline is not random; only the cards that don’t get eliminated continue to be shuffled and dealt.

    To complete the model, of course, one also has to imagine that new cards are constantly being added (e.g. by modifying the remaining cards), again by mechanisms that appear entirely random. Once again, however, the continuous winnowing process that is the heart of natural selection reduces both the amount of possible variation and its “randomicity” over time.
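
    The deck model can be sketched in a few lines of Python (the deck size, hand size, and cull/add rates are arbitrary illustrative choices, not anything from population genetics): selective removal shrinks the space of possible hands even while “mutation” keeps adding new cards.

```python
import math

deck = list(range(200))   # 200 distinct cards ("alleles")
HAND = 5                  # a "genotype" is any 5-card hand

for generation in range(4):
    n_hands = math.comb(len(deck), HAND)
    print(f"gen {generation}: {len(deck)} cards, {n_hands:,} possible hands")
    # Selection: deterministically cull the 20 "least fit" cards...
    deck = sorted(deck)[20:]
    # ...while "mutation" adds 5 brand-new cards.
    deck += [max(deck) + 1 + i for i in range(5)]
```

    Because 20 cards are removed for every 5 added, the count of possible hands declines each generation, and the survivors are not a random sample of the original deck.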

  16. Allen MacNeill,

    I have a question for you. If you take a class such as the birds (Aves), with about 10,000 species, and look at their genomes, would you expect to find major differences or minor differences separating the various species?

    Now I realize that penguins and hummingbirds are quite different, but would the difference be just a series of minor differences in the proteins, or would there be major structural variation between them that would represent the creation of new complexity, or maybe the loss of some original complexity?

    I may not be expressing it clearly, but what could we expect the differences to be that cause these very different phenotypes?

    And if we were to examine all the bird species, would we find that most of them just represent minor changes in the genomes of each? I realize you may not know much in particular about birds per se, so if you want to pick another class, order or family of species and comment, it would be appreciated. I only used birds since we have been discussing them here recently, and while they have some very unique species, most are quite similar.

    It seems to me that natural selection would have taken various gene pools, or an original gene pool, and refined them into smaller gene pools, and thus over deep time led to all the variety we see in birds. And as many of these subpopulation gene pools came about, after a time many wouldn’t be able to interbreed. So the card deck generated numerous smaller card decks, and these smaller card decks occasionally get augmented by occasional mutations.

    If so, then it would seem possible to examine the nature of these mutations by comparing genomes. I realize that the data for such research probably does not exist today, but maybe in the near future such a project could be possible. In other words, what speculation/knowledge could you provide that would explain how the class Aves developed over deep time, or some other population that you are more familiar with?

    I look forward to your comments and anything you can add to clear up my rambling and muddled questions.

  17. Dr. MacNeill, nice to see you are back. I am sorry for misunderstanding you on the last UD thread where you became chatty.

    (I would love it if someone could find the thread to see what Dr. MacNeill actually said, and what I misunderstood as: “couldn’t provide an adequate explanation for the major adaptations that have occurred.”)

    Thanks for the information about SNRPs. I have heard that there are actually hundreds of thousands of protein products despite there being only 25,000 coding genes. I know that Denton discusses some of the mechanisms that produce multiple protein products from single genes. However, I don’t believe that he discusses SNRPs.

    As a software developer (aren’t about half of us on this site?), I find the concept of a single gene coding for multiple proteins to be a challenging concept for evolution. Once a gene begins to code for more than two or three protein variants, especially if more than one of these variants plays a vital role, it should become immutable. It should become ultra-conserved. Failure of that being the case would seem unevolvable from the perspective of a computer nerd who recognizes the intolerance of error that results from any data compression technology — and this is data compression technology.

    Dr. MacNeill:

    So, like my students, repeat after me: natural selection is not random.

    I think you would be surprised at just how few IDers suffer from this misunderstanding.

  18. How many of these mechanisms of variation led to the first self-replicating lifeform?

    (Snicker)

  19. Edit SMTP should read SNRP.

    {DLH corrected SMTP to SNRP in 17}.

  20. Allen

    The whole point to natural selection is that the search isn’t random.

    You need to learn something about search algorithms. Of course natural selection isn’t random, but it doesn’t produce anything, it doesn’t create anything — it just throws stuff out. Throwing stuff out is a destructive process at worst and a conservative process at best, not a creative process.

    It continues to boggle my mind that Darwinists can’t see this. All the random variation in the world, no matter the variety of the variation and no matter how many failed attempts are thrown out, won’t turn a Hello World computer program into a grandmaster chess or checkers program. But this is what Darwinists claim happened in the history of life.

    Just do the math. The combinatorial explosion is so huge that orders of magnitude must be expressed with exponents that must be expressed in orders of magnitude.

    In addition, there is no hard or even soft empirical evidence that these fantastic Darwinian speculations have any basis in reality when it comes to extrapolating them to explain all of life’s complexity, diversity, information content, and functionally integrated machinery, not to mention consciousness, morality, human artistic creativity and much more.

    The only “evidence” that the mechanisms you propose can accomplish that with which they have been credited is that the alternative is philosophically unacceptable.

  21. Allen

    We all know natural selection is non-random. Our position is basically that non-random natural selection isn’t sufficient to generate all the complexity of life if its only input is random mutations. If we were playing poker I’d say “I see your non-random natural selection and raise you non-random mutations.” I don’t think any of the mechanisms you propose generate non-random variation before they are run through a selection filter, unless you’re trying to resurrect Lamarck. The thing about all the combinatorial mechanisms you list is that they are all reactionary, as selection can’t operate on unexpressed characters. It can only react like a movie critic AFTER seeing the movie. We assert that a proactive process must be in play now and/or in the past.

  22. Regarding gpuccio’s comments in #5 on gene-protein dogma and probability….

    Here’s a link to a relevant article on the ENCODE Project, which (unfortunately for darwinist true-believers) has determined that human biological complexity appears to be even MORE insanely complicated than we originally thought. (Apparently the “junk DNA” wasn’t junk after all…)

    http://www.boston.com/news/glo.....unraveled/

    To quote briefly: “Cellular processes long assumed to be genetic appear quite often to be the result of highly complex interactions occurring in regions of DNA void of genes.” … “It’s a radical concept, one that a lot of scientists aren’t very happy with,” said Francis S. Collins, director of the National Human Genome Research Institute.

    So basically, even given our CURRENT, incomplete specification of biological complexity, the odds against the genetic information present in even the SIMPLEST known life form self-assembling exceed the actual number of fundamental particles that exist in the universe? … and now the odds have just gotten worse?

    I wonder just how deep this rabbit hole goes, and I wonder at what point will darwinists simply have to look around at the probabilistic ground upon which they are standing and finally admit to themselves that there has to be SOME limit of SOME kind as to what is realistically plausible in a finite universe?

    The more complex and non-linear (what “small successive steps” of RM+NS created these non-genetic “complex interactions”?) our model of human biology becomes, the closer we also have to come to a realization of actual irreducible complexity being present here.

    Regarding Mr. MacNeill’s lecturing to ID’ers about their misconceptions (#14)…

    I would like to hear fewer condescending assertions, and fewer hypothetical guesses about how it all “might ” have happened, and instead see more specific details and observable, duplicatable EVIDENCE from darwinists.

    Could Mr. MacNeill please specify the EXACT RM+NS sequence of small, linear, successive steps that got us from the first life forms to the kind of complexity the ENCODE project is talking about?

    Regarding Mr. MacNeill’s apparent belief that universal probability bounds don’t apply to biological information because “natural selection isn’t random”, may I ask how his naturalistic theories overcome the improbable series of allegedly un-guided steps that got us from a pile of rocks to abiogenesis, before RM+NS could have even been called into play?

    By what means did the first self-replicating information processor create itself and then proceed to write upon itself the minimum information necessary to begin synthesizing proteins in the first functioning cell?

    EXACTLY how did it occur? By what method did it occur, if not by random chance?

    And if it WAS random chance, then what do you figure the odds are of THAT all coming about?

    How does RM+NS get you out of THAT probabilistic jam, mister Smarty-MacSmarty-pants ?

    It would seem that naturalistic theories of life have exceeded Dembski’s UPB (and many other UPB calculations that I have heard thrown about here at Uncommon Descent) before RM+NS even has a chance to function.

    I mean, come on, MacNeill: naturalistic theory strains credulity at best, and with NO explanation whatsoever for abiogenesis, and NO specification of the precise RM+NS steps that allegedly got us from THAT mysterious point to our current specification of biological information complexity, why should critical minds accept this nonsense?

    It doesn’t surprise me that your students repeat your “NS is not random” mantra back to you like sheep, and are willing to accept all of this unsubstantiated naturalistic rubbish as though it were unquestionable fact.

    Most students these days are just empty heads full of mush, ready to believe whatever they are told. That’s why all the kids are for Obama. He tells them he believes in “change”, in “hope” and in “the future”. Like darwinism, obama-ism sounds good, but it lacks any actual realistic or specific mechanism to achieve what it claims to be able to deliver.
    Intellectual courage is about as rare as intellectual objectivity these days (i.e. the willingness to see what you see, not what you want to see…).

    Jerry Maguire said “Show me the money!”

    Now I say “Show me the proof!”

  23. Dr. MacNeill, please allow me to defend my statement. I said above:

    It was once he had published that statement that he acknowledged that the 47 sources, plus natural selection, did not provide an adequate explanation for the major adaptations that have occurred.

    On 11/10/2007 in post: http://www.uncommondescent.com.....he-genome/
    you said:

    As for macroevolution, I agree that at the present time we have little or no formal theory predicting the observed patterns of change in deep evolutionary time. This is one reason why I have asserted that the so-called “modern evolutionary synthesis” of the mid-20th century is “dead” – its theoretical predictions have either been superseded (e.g. by evo-devo) or shown to be inadequate.

    I am puzzled that you now say that I was in error.

  24. Allen_Macneill (#14):

    “Here is the misconception that consistently misleads most IDers:
    The whole point to natural selection is that the search isn’t random. Natural selection is no more random than is gravity, a point that a small fraction of my students consistently misunderstand.”

    Here is the usual habit of darwinists, changing their arguments according to convenience, and never really answering a counter argument:

    1) First of all, there is absolutely no misconception here. We are all well aware that NS is the "only" part of the Darwinian fairy tale which "appears" to be not completely random. I have personally and recently defended exactly that point here, in another thread. But, as you should have understood, in my post (#12) I was responding precisely to your very explicit assertions, in post #8, about "variation":

    "my point was that so far we know about nearly fifty different mechanisms that produce a nearly incalculable amount of genetic and phenotypic variation (as shown by Kaufman's calculations). Hence, there is absolutely no necessity to postulate the existence of other sources of variation unless and until there is empirical evidence that such mechanisms exist."

    My post #12 was very obviously trying to demonstrate (and, I think, with good success) that what you were saying about "variation" was wrong. You say that there are a lot of causes for variation, and that they are more than enough to explain all the necessary variation. I very easily showed that nothing could be further from the truth.
    So, why did you reply to my post with those self-assured arguments about "natural selection"? NS, be it random or not, is not a cause of variation. It has to act on variation after it has already been generated. We can speak about NS and its ambiguity as long as you like, and indeed I have often done exactly that on this blog, but I would like that, when you reply to what I say, you really reply to what I say.

    2) You say that NS is no more random than is gravity. I am afraid you don't have a detailed understanding of one or the other. Gravity (to be more correct, the theory of gravity; let's say, for the sake of simplicity, the Newtonian one) is a theory which explains facts in terms of necessity, that is, in detailed and unambiguous mathematical form. It is a completely deterministic and explicit mathematical theory. If you believe that NS theory (let's not forget our epistemology: they "are" theories!) has the same properties, I would be very interested to hear your arguments. But you know, maybe your students are wiser than you think.

  25. Allen_Macneill (#15):

    “For that matter, the variations that are produced by the various mechanisms listed at my blog aren’t random either.”

    I don’t understand which mechanism generating “variation” is not random. Ah, yes, I know… design.

    NS is not a way to create variation, at best it is a way of eliminating and/or keeping it. But, again, variation has to be there, for NS to “act”. Random variation is the only mechanism generating variation in darwinian theory, unless you invoke voodoo or magic (and I do think that some of the vague discourses of a few frustrated darwinists about “self-organization” and similar are really no more than that).

    “Think of it like shuffling and then dealing from a very large deck of cards.”

    I am really something more than bored of all these “deck of card” analogies. I will not repeat here what I think of them just to stay polite. Here we are talking of proteins, molecular machines, complex regulation networks, and so on, not of cards. And, if you stick to your cards, please do the math, which is not encouraging…

    All the rest of your argument is a rather vague discourse about the magic powers of NS. I don't want to address the NS problem in detail here, because I don't have the time, but I am ready to discuss it in detail in future posts. For the moment, I will just say that:

    a) NS has to act on existing information, and cannot create it

    b) NS theory, even if not a theory of randomness, is a very vague conception, and in no way a theory from necessity. The theory has many ambiguities and inconsistencies, even in its use of language

    c) Even as it is, NS theory is completely inadequate to explain what it tries to explain. First of all, Dembski and Marks have shown that a search process, to really improve on the results of random processes, has to incorporate active information about its target. Second, and most important, NS needs a constant "step by step" deconstruction of any possible function, which is obviously absurd, if it has to overcome the insurmountable probabilistic difficulties of the Darwinian model. And, last but not least, NS is completely powerless against the fundamental problem of Irreducible Complexity. As I have recently argued on this blog, IC is a complete "NS stopper". And, finally, NS can act only on information powerful enough to give a reproductive advantage, and in any case considerable time is needed for any new information to spread, which, as you probably know, is a source of very specific difficulties in the theory.

    Finally, you say:

    “To complete the model, of course, one also has to imagine that new cards are constantly being added (e.g. by modifying the remaining cards), again by mechanisms that appear entirely random.”

    I am glad you admit that. That was exactly my original point in post #12, the one you never answered: random mechanisms absolutely do not have the power to create the necessary new cards. That should be obvious, if you consider the math. And mathematical arguments "are" arguments from necessity!

    And, finally:

    “Once again, however, the continuous winnowing process that is the heart of natural selection reduces both the amount of possible variation and its “randomicity” over time.”

    Magic and fairy tales all over again! How? Why? Please, let us discuss real models, real math, real details.
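    [Editor's note: for readers who want the card analogy in runnable form, here is a purely illustrative toy model (it is no one's published model; the population size, genome length, and fitness function are all arbitrary assumptions). It shows only the narrow "winnowing" claim: with no new "cards" (mutations) being added, selection plus resampling can only shrink the number of distinct genotypes.]

```python
import random

def winnow(pop_size=100, genome_len=20, generations=30, seed=1):
    """Pure winnowing: each generation keeps the fitter half of the
    population and duplicates it. No new 'cards' (mutations) are added,
    so the number of distinct genotypes can only shrink over time."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(genome_len))
           for _ in range(pop_size)]
    diversity = [len(set(pop))]          # distinct genotypes, per generation
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)  # toy fitness = number of 1s
        survivors = pop[:pop_size // 2]
        pop = survivors + [rng.choice(survivors) for _ in survivors]
        diversity.append(len(set(pop)))
    return diversity

d = winnow()
print(d[0], d[-1])  # diversity shrinks; it never grows without variation
```

    Note what the toy does not show: it says nothing about where new variation comes from, which is exactly the point under dispute in this thread.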

    One last note: definitely, I am many things, but not one of your students.

  26. Granville Sewell

    Looking at simple forms like the snowflake, he noted that its “delicate sixfold symmetry tells us that order can arise without the benefit of natural selection”.

    How on Earth do you reason with people who (correctly) discard natural selection and then say, Hmm, I wonder what other unintelligent force can create brains? And point to snowflakes as evidence that order can arise on its own, ergo, so can brains. They’re just too stupid to reason with.

  27. Snowflakes are a wonderful example of the organization that can be produced by chance and law. They’re simple and no two are alike. They don’t reproduce themselves, they contain no coded information, and they don’t fit together with other snowflakes to form machines. In short, they’re a great example of the upper end of the organization that law & chance can produce. The stupidity comes in when it’s imagined that the laws governing crystal formation can produce cellular machinery which doesn’t exhibit any crystalline properties at all.

  28. Jerry:

    I’m not particularly well-versed in the class Aves (mammals are my preferred group), and so will address what I understand to be the situation with respect to mammals.

    Recent research indicates that the differences between individuals within species are close to the same order of magnitude as the differences between species. That is, there often is very little genetic difference between individuals of different species.

    However, this is compounded by the fact that one cannot simply look at particular genes and decide if the differences are significant. To give just one example, the main genetic differences between humans (Homo sapiens) and chimpanzees (Pan troglodytes) amount to changes in a few genes, mostly hox genes (including, but not limited to, FOXP2).

    However, this ignores the fact that humans and chimps have different chromosome numbers. Humans have 23 pairs, whereas chimps have 24. There is convincing evidence that this is because human chromosome 2 is actually a fusion product of two of the chromosomes we share with chimps (you can find the fusion region in human chromosome 2 quite easily, if you know what you’re looking for).

    Changes in chromosome number do not actually change genetic information at all. Consider J. R. R. Tolkien's The Lord of the Rings. Tolkien considered it to be a single volume (the "Red Book of Westmarch"), but divided it into six books (somewhat like the Bible). However, his publishers felt that dividing it into three volumes would make it more saleable, and so now you can buy it both ways: one volume or three volumes.

    However, none of the different ways of dividing up Tolkien's book changes anything in the text (except perhaps the numbering in the titles). The same can be the case for very closely related species with virtually identical genetic sequences, but different chromosome numbers.

    It has long been known that eukaryote species are generally distinguished by having different chromosome sequences or numbers, but not necessarily different genetic information. This dovetails nicely with the generally accepted definition of “species,” which is based on reproductive isolation, not genetic composition. Clearly, genetic composition can be a component of reproductive isolation (for example, some species have genes that make hybrids with other species effectively sterile). However, it doesn’t have to be.

    There are good examples of species (or near-species) that are virtually genetically identical, but virtually never interbreed as the result of behavioral differences, many of which are learned (i.e. inherited via Lamarckian, rather than Mendelian, mechanisms).

    Personally, I strongly suspect that the whole concept of “species” is mostly an artifact of our Platonically conditioned minds. As Lynn Margulis has repeatedly pointed out, bacteria do not have species at all, at least not as defined by the classical biological species concept. I have a post at my blog (http://evolutionlist.blogspot.com/) on this subject, entitled “Origin of the Specious.”

  29. Dear Allen,

    I am sorry to hear about your health problems. I hope you’ll be doing better soon!

    A friend sent the following to me, to provoke further discussion here (I’ll reply in a bit):

    *******************************

    “Mazur’s article is worth your attention. Evolutionary theory is in — and has been, for a long time — a period of great upheaval.”

    Well, yes and no. The basics – universal common ancestry, descent with modification – are pretty unassailable. What may be interesting (if ironic) for the participants of UD is the debate regarding the reach of natural selection.

    Natural selection is, well, a fact – it happens, it shapes organisms and communities, it leaves its mark throughout biology. But the question as to how universal NS is remains an open one. The best way to understand the question is to consider a well-known case of evolution. The Hawaiian silverswords are a fantastically diverse group of plants that include several genera and undoubtedly share a common ancestry. This ancestry traces back to mainland North America, and the California tarweed (or something similar). This is a classic case of rapid evolutionary diversification upon invasion of a new habitat. The relevant question for these genera is – how much of the diversification, the amazing range of morphologies and phenotypes that evolved after the “migration” of the original tarweed to the isles, may be attributed to natural selection, and how much is the result of genetic drift? Genomes leave some clues; thus, it seems likely that the diversification of flowers was driven in large part by natural selection (that’s the conclusion from studies of sequence variation in flowering genes). But many questions remain, and the depth to which NS reaches in the shaping of the complete organism is not known. It’s quite possible that the divergence that we see was framed partially, even largely by random genetic drift, rather than by selection for each and every trait at each and every step of the evolutionary trajectory.

    So where's the irony? Consider: one aspect of the "raging debate" is essentially "NS vs. random genetic drift". This actually places Darwin and Nelson (that's right – that Nelson) on the same side of the fence in this debate, arguing against "random chance". That's an interesting twist.
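    [Editor's note: for readers unfamiliar with the "random genetic drift" invoked above, a minimal Wright-Fisher sketch is easy to write down (purely illustrative; the population size, generation count, and starting frequency are arbitrary assumptions). With no selection at all, allele frequencies wander at random, and an allele can fix or be lost by chance alone.]

```python
import random

def wright_fisher(pop_size=100, p0=0.5, generations=400, seed=0):
    """Neutral Wright-Fisher drift: each generation the allele frequency
    is resampled from the previous one with no selection at all, so it
    performs a random walk until the allele fixes (p=1.0) or is lost
    (p=0.0), or the generation budget runs out."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        count = sum(rng.random() < p for _ in range(2 * pop_size))
        p = count / (2 * pop_size)
        if p in (0.0, 1.0):
            break  # fixation or loss: the endpoint of drift
    return p

# Different random seeds give different fates for the very same allele.
outcomes = [wright_fisher(seed=s) for s in range(8)]
print(outcomes)
```

    The design choice is the standard one: resampling 2N gene copies each generation is what makes small populations drift faster than large ones.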

  30. tyharris and gpuccio:

    I do not answer people who resort to insults, personal attacks, or ridicule (another reason why I rarely post at this site, BTW). I honestly attempt to answer people who are willing to accept that all conscientious people are interested in discovering as much as we can about how the universe works. If you are willing to reframe your questions in such a way as to indicate that you are willing to participate as members of a “community of scholars”, then I will do my best. Otherwise, you have demonstrated just what kind of “scholar” you actually are, and do not deserve notice by me (or anyone else).

  31. Paul Nelson:

    Good to hear from you. I appreciated immensely our discussion following your “debate” with Will Provine (in which you two agreed much more than you disagreed). I hope you are working toward finally finishing that MS on common descent. Remember my rule: “The greatest enemy of accomplishment is the desire for perfection.” You can’t get it right if you don’t get it written!

    As to the extended quote from Mazur, I have already commented on parts of it. To me, it simply expresses what most of us in evolutionary biology have known for a while: that the neo-Darwinian “modern synthesis” is showing its age, and is due for a major overhaul. I have commented on this repeatedly in my blog and elsewhere, and am currently writing a book on the subject for Wiley, entitled Evolution: The Darwinian Revolutions, in which I point out what is obvious to any historian of evolutionary biology: that the theory of evolution itself is evolving, and is currently passing through a Kuhnian paradigm shift, from gene-centered to phenotype/individual centered explanations for descent with modification and the origin of adaptations. I strongly recommend Jablonka and Lamb’s book on this subject, Evolution in Four Dimensions, in which they survey the four main “evolutions”, only one of which looks anything like the “modern synthesis.”

    Looking forward to corresponding with you on this subject in the future. Keep writing!

  32. DaveScot:
    Actually, many of the structures inside eukaryotic cells bear a striking resemblance to complex crystals. For example, microtubules (which form much of the three-dimensional structure of eukaryotic cells) are formed of virtually identical repeated tubulin units, which spontaneously assemble into microtubules on the basis of a few simple binding rules (but only in the absence of free calcium ions in the cytosol, which disrupt this arrangement). The same is true for most if not all of the various components of cells. I would not be surprised if, in the near future, someone like Stuart Kauffman were to figure out one or more of the "assembly protocols" for such components, thereby rendering what appears to be a "magical confusion" into a "natural arrangement."

  33. Granville Sewell:
    Please see my note vis-a-vis tyharris and gpuccio, above. Calling someone “stupid” is not within the bounds of civilized discussion, IMHO.

  34.

    I found this passage by biologist Stuart Kauffman very interesting [...] it seems that someone is beginning to do the math, which is comforting.

    Stu Kauffman introduced random Boolean networks as models of genetic regulatory networks in 1969.
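    [Editor's note: since Kauffman's model comes up here, this is a minimal sketch of a random Boolean network of the kind he studied (all parameters are arbitrary choices for illustration): n genes, each reading k randomly chosen inputs through a random truth table, updated synchronously until the state revisits itself, which closes an attractor cycle.]

```python
import random

def make_rbn(n, k, seed=0):
    """Random Boolean network: each of n nodes reads k randomly chosen
    inputs and applies a randomly chosen Boolean function (truth table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node recomputes from its inputs at once."""
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]  # pack input bits into a table index
        new.append(tables[i][idx])
    return new

def attractor_length(n=12, k=2, seed=0):
    """Iterate from a random start; the first repeated state closes an
    attractor cycle (guaranteed within 2**n + 1 steps by pigeonhole)."""
    rng = random.Random(seed + 1)
    inputs, tables = make_rbn(n, k, seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    seen = {}
    t = 0
    while tuple(state) not in seen:
        seen[tuple(state)] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[tuple(state)]

print(attractor_length())  # length of the cycle this network settles into
```

    Kauffman's observation was that for k=2 such networks tend to settle into short attractors, which he read as spontaneous "order for free"; the snowflake-versus-brain argument above is precisely about how much that kind of order can explain.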

  35. DaveScot:
    You wrote
    "Our position is basically that non-random natural selection isn't sufficient to generate all the complexity of life if its only input is random mutations."

    And I completely agree with you. As the list of the “engines of variation” on my blog was intended to indicate, random mutations are only one (and probably a relatively minor) source of phenotypic variation between individuals in populations. There is no necessary reason to assume that all of the various mechanisms that produce phenotypic variation are random, other than that it makes mathematical modeling immensely easier. IMO this is why the early population geneticists (Fisher, Haldane, Wright, etc.) included such an assumption in their mathematical models for evolution.

    However, we now have a century of empirical research into the various "engines of variation", and as the various comments above have all indicated, they are fantastically fecund. That is, the problem is not getting enough variation to produce everything we see in nature, but rather how that variation is "pruned" to produce the relatively small number of actually existing variations.

    As just one example: why do insects all have just six legs, while land vertebrates have just four? The answer is clearly that the genetic and developmental processes that cause the formation of legs in insects and vertebrates are constrained, almost certainly by a combination of "rules of form" (such as those discussed by D'Arcy Thompson and Gregory Bateson) and historical contingency.

    The only way to find out is to do the hard, slogging work of formulating and testing hypotheses, publishing the results, and then discussing the results and their implications in public and with the kind of mutual respect (and absolutely ruthless critical viewpoint) to which most scholars are committed.

  36. To all:

    Many of the comments and questions in this and other threads seem to indicate to me that most of you consider genetic information to be the sine qua non of evolutionary biology, if not biology as a whole. Perhaps this is because, as several people have pointed out, many of you are computer programmers (or sys admins, etc.) Ironically, this means that you fundamentally agree with the framers of the evolutionary “modern synthesis”, whose mathematical models were based on precisely the same assumption.

    However, it is becoming increasingly clear that most of biology is not simply reducible to genetic information (the current rage for "genomics" notwithstanding). On the contrary, changes in genetic information are only one way of changing heritable information among biological organisms, and may be a result rather than a cause of phenotypic variation (look up "genetic assimilation" and "genetic accommodation" for a detailed explanation of how this might be the case).

  37. Granville Sewell

    Allen_MacNeill,

    You are right; I apologize for the use of the word "stupid", which by the way wasn't directed at you (I hadn't even read your comment). It was directed at the author, Stuart Kauffman, of the quote at the top of my comment, who is called a "genius" in the article.

  38. bFast:

    I think the confusion between your position and mine is at least partly the result of a common confusion among people considering these issues: the difference between macroevolution and the origin and evolution of adaptations.

    The theory of evolution, as originally presented by Darwin, was actually a synthesis addressing both of these issues. Darwin proposed that “descent with modification” (i.e. evolution) had occurred, and proposed natural selection as the cause of such modification. He also explained the origin of adaptations as the result of natural selection.

    In more modern terms, it is generally valid to equate “descent with modification” with the technical term “cladogenesis” (i.e. the process of macroevolution) and “natural selection” with the technical term “anagenesis” (i.e. one of the processes subsumed under “microevolution”). Hence, my statement about macroevolution was intended mostly to address cladogenesis, not anagenetic origin of adaptations.

    Virtually all of ID is about anagenetic origin of adaptations, and rarely if ever addresses the broader topics of cladogenetic macroevolution. Again, this may be due to the fact that evolutionary biologists have formulated precise and testable mathematical models for anagenetic microevolution, but have until recently assumed that such models could be extended to macroevolution without modification.

    I have taken the position that since macroevolution includes (indeed, depends fundamentally upon) contingent historical processes (such as mass extinctions, endosymbiotic innovation, genome fusions, etc.), which by definition cannot be predicted nor mathematically modeled, but only described as historical events, macroevolution cannot be reduced to the kind of mathematically based theory that characterized the evolutionary “modern synthesis” of the mid-20th century.

    Please let me know if I haven’t addressed your query to your satisfaction.

  39. Granville Sewell:

    Apology accepted; indeed, I take responsibility for having assumed that your comment was directed at evolutionary biologists in general, rather than Stuart Kauffman in particular.

  40. DaveScot, “and they [snowflakes] don’t fit together with other snowflakes to form machines.”

    What you talkin’ ’bout! You don’t live where I live, obviously. Around here the snowflakes assemble into machines that we call roof-crushers. They’re quite effective.

  41. Dr. MacNeill,

    I have a rather complicated and lengthy set of questions to ask you about natural selection and species formation. Would it make sense to do it on your blog, or should I post it here?

    Essentially I believe that most of the species in the world are the result of natural selection, but that naturalistic processes do not explain the original gene pools from which all these species are formed. I have said that 99.99% of species owe their origin to NS, but not all.

    Many here get hung up on just how much NS can explain, and by NS you can include other genetic processes that operate to change a gene pool, such as genetic drift and gene flow, or whatever other mechanism you can point to.

    As a hastily put-together response to a post a couple of weeks ago on this topic, go to

    http://www.uncommondescent.com...../#comments

    I would like to expand on the comment I made on this thread and if possible get your reactions to it. I realize that you will not agree with everything but would welcome your critical remarks if you have the time.

  42. Dr. MacNeill:

    random mutations are only one (and probably a relatively minor) source of phenotypic variation between individuals in populations.

    I have read your list of variations. It appears that when you discuss the IDers' view of "random mutations" you are assuming that we mean point mutations. We don't. As with your "natural selection is not random" comment, we are not that simple.

    It remains that, though your list of variations goes far beyond the point mutation, it is a list of either contingent events (events without plan or purpose) that happen to an organism without strategy to give the organism any advantage (random with regard to the organism), or mechanistic events: the (presumed) product of the above. (Here I would include co-evolution, though it doesn't fit especially well. In any case, organism A wanders from ideal fit via natural causes and organism B catches up via NDE; then organism B wanders, and A catches up. It's a mechanism that maintains balance, a presumed product of the same.)

    Dr. MacNeill:

    Perhaps this is because, as several people have pointed out, many of you are computer programmers (or sys admins, etc.)

    Actually, we are a pretty advanced group of systems programmers. This is as different from system administrators as car designers are from fleet managers. And I bet that between us we hold 50 patents, along with other national and international awards and honors.

    As a systems programmer, I am unimpressed with microevolution — the NDE mechanisms explain it just fine. I am also unimpressed with macroevolution as defined by evolutionary biologists — the crossing of the imaginary species boundary. We know that if you take a wolf and breed it, you can get all of the various breeds of dogs. It is absolutely reasonable that if you put a wolf into a new environment with many available niches, a variety of breeds would each establish itself in a niche, resulting in speciation by the current definition. Once the wolves had established themselves in their niches, contingent mutational events, filtered by natural selection, would move these species into more ideal fits to this new environment. I, like most of us, am quite content that this is pretty much what happened to produce the variety of species that we have. This all seems well within the realm of what RV+NS can accomplish.

    As a systems programmer, however, I am intrigued by the systems that have developed. They are incredible. They are better than I can do. I think, therefore, that seeing as the biological community already has a wimpy definition for macroevolution, we need a new term, call it systemsevolution, to present the stuff that is truly challenging.

    One of those intriguing systems is the bacterial flagellum. Behe suggests that this system is unevolvable. So far the best excuse for a just-so story is something produced by Matzke. His story should be testable in the lab; I bet his scenario is unreproducible. Another intriguing system, as you mentioned earlier, is snRNPs. It produces data-compression technology. As such, it seems unevolvable to me. Show me that it is not. And my third favorite is HAR1F, an ultra-conserved RNA gene that seems to have taken on 18 point mutations in the human. It appears to me that these 18 mutations had to have occurred simultaneously. This small event alone is probably less probable than the UPB.
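    [Editor's note: the back-of-envelope arithmetic behind that last claim can be made explicit. Under the strong assumption that all 18 substitutions must occur simultaneously at specified sites in a single generation (the very assumption evolutionary biologists dispute), and assuming a per-site point-mutation rate of roughly 1e-8 per generation with a 1-in-3 chance of the specific base change, the product does land below Dembski's 1e-150 bound.]

```python
import math

mu = 1e-8            # assumed per-site mutation rate per generation
p_site = mu / 3      # a *specific* substitution at a *specific* site
p_18 = p_site ** 18  # all 18 required changes at once (the key assumption)

upb = 1e-150         # Dembski's universal probability bound
print(math.log10(p_18))  # about -152.6, i.e. below the bound
print(p_18 < upb)        # True under these assumptions
```

    If the substitutions may instead accumulate one at a time, the calculation changes completely, which is why the "had to be simultaneous" premise carries all the weight here.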

  43. Allen,

    I wish you well and hope for a full and speedy recovering concerning your health problems.

    Concerning the following:

    However, we now have a century of empirical research into the various “engines of variation”, and as the various comments above have all indicated, they are fantastically fecund. That is, the problem is not getting enough variation to produce everything we see in nature, but rather how that variation is “pruned” to produce the relatively small number of actual existing variations.

    The problem as I see it is not getting enough variation, but getting enough original, novel, innovative variation. My checkers program to which I linked is composed of about 65,000 lines of C code. It includes hundreds of modules, routines, mathematical operations, etc.: move generation, search algorithms, evaluation functions, hash tables, sorting routines, decision trees, boolean operations, indexing functions, memory access, time control, statistical move-history heuristics, disk I/O procedures, user-interface routines, graphics routines, and much, much more.

    The number of possible ways all these routines and operations could be recombined is virtually limitless, but you’re never going to get from a checkers program to a word processor, PhotoShop, a database program, or an Internet application through recombination. Completely new and original code is required.

    This is the problem as I see it for biological innovation by proposed evolutionary mechanisms. Variation and recombination don’t represent the entire story. It seems clear that recombination of existing biological information, as fecund and varied as it might be, is not going to get you from a bacterium to Bach. Totally new stuff is required — really sophisticated stuff — and lots of it.

    Of course, variation, recombination, and successive expression of dormant biological information could do the trick if the entire history of life had been preprogrammed into the first living cells. Some find this to be a more reasonable hypothesis than contemporary evolutionary theory, and more in keeping with the testimony of the fossil record, which overall is characterized by stasis and the sudden appearance of innovation.

  44. Granville Sewell

    I'll admit my "these people are too stupid to reason with" was over the top, but I'll also have to admit I don't have a lot of admiration for evolutionary biologists, and here's why: it seems impossible to get any of them to acknowledge what is obvious to the layman, that the problem they are studying is FUNDAMENTALLY different from all other problems of science. I talk a lot about the second law of thermodynamics, about the fact that Nature cannot create order out of disorder, and invariably I am given examples like the snowflake (I'll bet I've heard that one 20 times) or crystallization to show that Nature can indeed create order out of disorder, hence there is nothing strange about it creating human brains. So I have a bit of an overreaction every time I hear the snowflake offered as an example of the order that Nature can create on her own. Is it really so hard to see the fundamental difference between unintelligent forces creating snowflakes and creating brains?

  45. Mr. MacNeill,

    I visited your blog and was immediately struck by the following. You wrote:

    Creationists and supporters of Intelligent Design Theory (“IDers”) are fond of erecting a strawman in place of evolutionary theory, one that they can then dismantle and point to as “proof” that their “theories” are superior.

    Are you sure IDers and creationists are the only ones guilty of using this logical fallacy? How were you able to determine that this is not simply a reaction on the part of IDers, in their attempts at correcting the other side, if it were found that the Darwinist camp was also implicated in the same "error"? Do you really stand by this statement (as per your quote above), and are you willing to rise or fall upon discovery of the facts?

    I just want to know what key roles such factors as inheritance, fecundity, and "46 other sources of variation" played in the creation of something so comparatively simple as the flagellum.

  47. I’ll also have to admit I don’t have a lot of admiration for evolutionary biologists and here why: It seems impossible to get any of them to acknowledge what is obvious to the layman

    Thank you, Granville! Someone had to say it. When I think of all the thousands of biologists with all their book learning and fancy degrees, it is just maddening that they refuse to recognize what is obvious to a simple high school graduate like myself.

  48. Jerry:

    One of the basic principles of evolutionary biology is that natural selection can’t be the mechanism by which speciation occurs. Darwin himself pointed this out in the Origin of Species. In chapter 8 (“Hybridism”, located here: http://darwin-online.org.uk/co.....ageseq=263)
    Darwin points out that

    “On the theory of natural selection the case is especially important, inasmuch as the sterility of hybrids could not possibly be of any advantage to them, and therefore could not have been acquired by the continued preservation of successive profitable degrees of sterility.”

    In other words, since sterility cannot be inherited from parents to offspring, it cannot evolve by natural selection.

    On the contrary, Darwin (and virtually all other evolutionary biologists) assert that speciation happens “by accident”; that is, when populations are prevented from interbreeding, they become different from each other via all of the mechanisms I list on my blog. Eventually they become sufficiently different from each other that either they don’t interbreed (for behavioral reasons, in the case of animals) or they can’t interbreed, as the result of the accumulation of increasing degrees of genetic/developmental incompatibility.

    Once separated by reproductive incompatibility, natural selection can begin to eliminate those individuals that do not discriminate between compatible and incompatible mates, via various mechanisms (look up “prezygotic and postzygotic isolating mechanisms”). A minority of evolutionary biologists believe that such isolating mechanisms can “take hold” while populations are still panmictic, a process called “sympatric speciation”, but there is relatively little unambiguous empirical evidence for this, and a mountain of empirical evidence pointing toward allopatric speciation (i.e. speciation that results from geographical isolation).

    So, your questions all point in what appears to be the wrong direction: speciation (technically, “cladogenesis”) isn’t the result of natural selection, but rather other genetic and behavioral mechanisms, none of which have as their “purpose” the production of new species.

    However, they aren’t bad questions in and of themselves. On the contrary, they highlight the difference between anagenesis — the evolution of adaptations within an evolving clade (which happens via natural selection) — and cladogenesis — the splitting of evolving lineages into the reproductively isolated populations we call “species”.

  49. On “random mutations”:

    As bfast already stated, ID proponents are using the term in reference to everything. For example, in Behe’s new book he lists all the mechanisms on one page, but in general he uses “random mutations” unless a distinction needs to be made.

    I’ve said it at least several times on UD before, but I’d like a better term that encapsulates all mechanisms for variation. “Engines of variation” wouldn’t be bad since we could shorten it down to EV+NS for general conversation. But is there a standard term used by biologists, or is everyone currently using their own favored set of terms?

  50. Gil Dodgen:

    You wrote:

    “The problem as I see it is not getting enough variation, but getting enough original, novel, innovative variation.”

    That was the point of my list of 47 mechanisms for generating phenotypic variation. Several of the mechanisms listed are capable of producing as much genetic variation as there are elementary particles in the known universe, while others (such as whole genome fusion) are capable of producing novel genetic combinations equivalent to the “hybridization” of the Encyclopaedia Britannica and the collected works of Anthony Trollope.

    In other words, the “engines of variation” are more than up to the task of generating anything that could conceivably be of use to a living organism (plus an immensely larger amount of useless variation).
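    To put the scale claim in perspective, here is a quick back-of-envelope sketch (the ~10^80 particle estimate and the four-letter sequence model are illustrative assumptions, not figures from the comment above):

```python
# Back-of-envelope: combinatorial variation outruns physical scale fast.
# ~10**80 is a common estimate for the number of elementary particles
# in the observable universe (an illustrative assumption here).
PARTICLES_IN_UNIVERSE = 10 ** 80

# A sequence of n sites drawn from a 4-letter alphabet has 4**n variants.
n = 1
while 4 ** n <= PARTICLES_IN_UNIVERSE:
    n += 1

# Only 133 variable sites already give more sequence variants than the
# estimated particle count of the known universe.
print(n)  # 133
```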

    As to the question of whether any of the mechanisms in my list can produce “new” information, the answer is “yes”, so long as one recognizes that what really matters is the production of new phenotypic variation. As I have already pointed out, the exclusive concentration on genetic variation on the part of both evolutionary biologists (EBers) and IDers has until very recently blinded us to the tremendous potential of other mechanisms that produce the same effects (see Jablonka and Lamb/Evolution in Four Dimensions for a complete discussion).

  51. Patrick:

    It is perhaps instructive to point out that Darwin never used the term “random mutation” (nor “random” anything, for that matter) in the Origin of Species. Randomness is a mostly 20th-century concept (especially in biology), and one of dubious empirical merit IMHO.

    More useful might be “non-foresighted”, as that describes more precisely the character of most (but not all) of the new variations that appear among the members of populations of living organisms.

    For example, how would one go about showing that something is “genuinely random” as opposed to “pseudorandom”?

  52. Allen MacNeill @ 33 wrote:

    Granville Sewell:
    Please see my note vis-a-vis tyharris and gpuccio, above. Calling someone “stupid” is not within the bounds of civilized discussion, IMHO.

    Yet, Mr. MacNeill, I am deeply offended when someone so mischaracterizes my doubts about a certain ideology as a religious caricature or something very close to it, especially when describing an organization that has served as an outlet for expressing these doubts as a “Neo-Creationism Propaganda Ministry.” (see your comment dated 10/9/07 @ http://evolutionlist.blogspot.com/)

    Now what level of cordiality could be expected when someone who is truly seeking to “civilly” debate the merits of a particular scientific or philosophic creed is confronted with such unwelcoming statements?

  53. Patrick, on “random mutations”:

    I think a number of us have begun to use RV (random variation) rather than RM, because some of the variation is non-mutagenic, such as natural disasters.

    Patrick, ““Engines of variation” wouldn’t be bad since we could shorten it down to EV+NS for general conversation.”

    Patrick, the problem with the term “engines” is that it implies an intentional mechanism. The variations are, as I understand it, well, “accidents”.

    Allen MacNeill: “More useful might be ‘non-foresighted’.”

    Good enough. I will, from now on, use NFV+NS: non-foresighted variation + natural selection. I still say that there isn’t any way of producing some of the systems that exist on non-foresighted variation + natural selection alone. The most obvious problem with this mechanistic pair is that many of the mechanisms appear not to have a half-way point. If natural selection cannot act until the mechanism (think flagellum or HAR1F) is complete, and if any half-way stage would instead be destructive to the organism, then there is no smooth path up Mount Improbable. The only way up Mount Improbable is for there to be a path with a sufficiently mild slope that a person in a manual wheelchair can navigate it. I don’t personally think that such a path exists.
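    A toy sketch of this “smooth slope” point (my own illustrative model, not anyone’s published simulation: a 20-part target, one-bit non-foresighted variation, and selection that keeps non-worse variants):

```python
import random

random.seed(0)
TARGET = [1] * 20  # a hypothetical 20-part "system"

def smooth_fitness(g):
    # a mild slope: every matching part earns partial credit
    return sum(a == b for a, b in zip(g, TARGET))

def all_or_nothing_fitness(g):
    # no halfway point: only the complete system scores
    return 1 if g == TARGET else 0

def hill_climb(fitness, steps=5000):
    g = [random.randint(0, 1) for _ in TARGET]
    for _ in range(steps):
        mutant = g[:]
        mutant[random.randrange(len(g))] ^= 1  # non-foresighted variation
        if fitness(mutant) >= fitness(g):      # selection keeps non-worse
            g = mutant
    return g == TARGET

print(hill_climb(smooth_fitness))          # True: cumulative selection climbs
print(hill_climb(all_or_nothing_fitness))  # False with overwhelming odds:
                                           # a blind walk in a 2**20 space
```

    With partial credit at every step, a few dozen accepted changes reach the target; with no halfway point, selection has nothing to act on and the search is a blind walk.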

    Dr. MacNeill, I’m still waiting for some serious wisdom from you as an explanation of systems evolution. So far, it appears that you have been happy to set up ID strawmen and bowl them over. Please show us how these systems can develop via NFV+NS.

  54.

    JPCollado@4 and gpuccio@5:

    Mazur is not the brightest bulb. The 2nd and 3rd quotations in #4 are Mazur’s own words, yet JP treats them as the views of dim-wit scientists.

    I had problems from the get-go with her quote of Stuart Kauffman:

    Well there’s 25,000 genes, so each could be on or off. So there’s 2 x 2 x 2 x 25,000 times. Well that’s 2 to the 25,000th. Right?

    Wrong. That would be 200,000. What Kauffman must have said, and Mazur must have been too math-challenged to get, was “there’s 2 x 2 x … x 2, 25,000 times.”
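    The arithmetic of the two readings is easy to check:

```python
# The misreading the commenter attributes to Mazur:
misread = 2 * 2 * 2 * 25_000   # "2 x 2 x 2 x 25,000"
print(misread)                  # 200000

# The reading Kauffman must have intended: 2 multiplied by itself
# 25,000 times, i.e. one on/off choice per gene.
intended = 2 ** 25_000
print(len(str(intended)))       # 7526 decimal digits
```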

    I can’t find any evidence that Mazur digested what the scientists told her. For the most part, she’s strung together quotations and pictures. It appears she’s paid by the column-inch. The Altenberg meeting is certainly very interesting, but she has not cast much light on it with her verbiage.

    She did, however, record a wonderfully insightful observation by Richard Lewontin:

    [S]cientists are always looking to find some theory or idea that they can push as something that nobody else ever thought of because that’s the way they get their prestige. . . .they have an idea which will overturn our whole view of evolution because otherwise they’re just workers in the factory, so to speak. And the factory was designed by Charles Darwin.

    This has always been true, and claims by UDers that evolutionary biologists swallow “dogma” of the modern evolutionary synthesis hook, line, and sinker have always been false. Evolutionary theory has never wanted for divergent thinkers.

  55. Allen_MacNeill:

    “Insults, personal attacks, ridicule”? I find that rather ironic. I must have missed the kind undertone when you wrote:

    “So, like my students, repeat after me: natural selection is not random.”

    Anyway, in my comments I have tried to express what you call “the kind of mutual respect (and absolutely ruthless critical viewpoint) to which most scholars are committed”. Maybe you noticed the second point more than the first, but first of all respect has to be “mutual”, and anyway it implies a commitment to hard and explicit intellectual confrontation when one deeply disagrees.

    I don’t think I have to re-formulate my points, because in my opinion they were clear enough. If you don’t feel like addressing them, it’s your choice. I respect that.

    Meanwhile, I hope you will respect it if I feel like going on commenting on what you write here. I like intellectual confrontation, even when it is not reciprocated.

    From what you write in your new posts, I feel a bit confused. While I stick to the basic problem of the incredible limits of random variation as a means to create new CSI, as GilDodgen has well summed up in his last post, I see that you are offering some form of new approach which is worthy of consideration. Let’s see:

    1) I have carefully reviewed your list of “47 sources of variation” to which you repeatedly refer. I find that list at best ambiguous. Many of the points are clearly forms of random variation, and the only point to be deduced from them is that random variation is not effected only by single nucleotide mutations. Well, that’s very easy and very trivial: obviously, any kind of random variation can modify the genome, but it is always a random process. The relevance when you try to compute probabilities is just the same. Randomness is randomness, and its search power is always the same. In that category we can put deletion, insertion, frameshift mutations, inversion, and all random chromosome modifications. Where is the point? All of them are random mutations, or if you prefer random variations. Maybe we can discover thousands of ways of random variation; what is the difference?

    Another class of items in the list is definitely ambiguous: it’s the long series which starts with the word “changes”. Changes due to what? In the list there is no clue about that. If you say: “changes in activation factor function in eukaryotes (increasing or decreasing binding to promoters)”, what do you mean? Changes secondary to random variation? In that case, we are still reshuffling the same concept. The same can be said for items like: “addition or removal of gene products (especially enzymes) from biochemical pathways; splitting or combining of biochemical pathways”.

    You seem to be listing a series of “results” and not of causes or mechanisms.

    I find this point more interesting: “deletion/insertion of one or more genes via transposons”. But, if you mean that even that is random, we are again in the same class. Or it could be non-random, and I am very interested in anything which is not random (except NS), because the only two alternatives to randomness are necessity and/or design. So, could transposons be an instrument of necessity and/or design? I am really interested in that possibility, as I am in any new perspective about the role of non-coding DNA. The same applies, in my opinion, to the role of imprinting and of any other epigenetic factor.

    Finally, there are in your list a few items (especially the last ones) which, although vague and not detailed, seem to hint at a (neo)Lamarckian perspective. That I am going to discuss in the next point.

    2) Reading those final points in your list (the “changes… in response” type), I get the impression that in some way you are trying (like other darwinists have done) to reintroduce, in “new” forms, a Lamarckian perspective, seeing evolution more as a form of “adaptation” than as the classical product of RV + NS. One of your affirmations in your last posts seems to confirm that:

    “However, it is becoming increasingly clear that most of biology is not simply reducible to genetic information (the current rage for “genomics” notwithstanding). On the contrary, changes in genetic information are only one way of changing heritable information among biological organisms, and may be a result rather than a cause of phenotypic variation (look up “genetic assimilation” and “genetic accomodation” for a detailed explanation of how this might be the case).”

    Well, I have looked. Here is Wiki on “genetic assimilation”:

    “Genetic assimilation is a process by which the effect of an environmental condition, such as exposure to a teratogen, is used in conjunction with artificial selection or natural selection to create a strain of organisms with similar changes in phenotype that are encoded genetically. Despite superficial appearances, this does not require the inheritance of acquired characters, although epigenetic inheritance could potentially influence the result.”

    and:

    “It has not been proven that genetic assimilation occurs in natural evolution, but it is difficult to rule it out from having at least a minor role”

    And this is about “genetic accommodation”, from a paper by Mary Jane West-Eberhard:

    “I argue that the origin of species differences can be explained, and the synthesis of Darwinism with genetics can be improved, by invoking two concepts: developmental recombination and genetic accommodation. Developmental recombination, or developmental reorganization of the ancestral phenotype (5), explains where new variants come from: they come from the preexisting phenotype, which is developmentally plastic and therefore subject to reorganization to produce novel variants when stimulated to do so by new inputs from the genome or the environment. Genetic accommodation, or genetic change in the regulation or form of a novel trait (5), is the process by which new developmental variants become established within populations and species because of genetic evolution by selection on phenotypic variation when it has a genetic component.”

    So, I try to sum up. These forms of revival of Lamarckism seem to start from the phenotype (ancestral or not) and its (intelligent?) interactions with the environment; they then postulate phenotype selection in the absence of genetic change, and then, at some point, the transposition of the change to the genetic level.

    While all that is at least different from the usual model, I am certainly perplexed at many levels. The model is even more vague than the classical darwinian model. It poses a lot of difficulties from the point of view of information, and I am sure that it requires design concepts to be feasible at many of its focal points. Moreover, my impression is that the final point, that is, genetic accommodation, has still to be achieved through some form of random variation, or of design, and so nothing changes. It seems even more magical how a phenotypic variation, probably due to intelligently designed potentialities in the ancestral phenotype/genotype, can translate into “new” genetic information. By what means? Again, a magical reshuffling of cards?

    Finally, I have big epistemological problems with what you say in post #38:

    “I have taken the position that since macroevolution includes (indeed, depends fundamentally upon) contingent historical processes (such as mass extinctions, endosymbiotic innovation, genome fusions, etc.), which by definition cannot be predicted nor mathematically modeled, but only described as historical events, macroevolution cannot be reduced to the kind of mathematically based theory that characterized the evolutionary “modern synthesis” of the mid-20th century.”

    Frankly, I don’t see how any scientific theory can be created without a rigorous mathematical, or at least logical, framework. If something “cannot be predicted nor mathematically modeled, but only described as historical events”, then it is not science, but chronicle. If one renounces the logical-mathematical approach, one renounces the chance of building theories.

  56. The only negative to using NFV is that it assumes Darwinism to be true if that term is used to encapsulate everything. For example, an intelligence may set conditions by which a pseudorandom function induces variation. So foresight would be involved in setting the conditions. NFV would be a subset of all mechanisms for variation, whatever that may be called.

    Also, while there’s obviously a certain level of plasticity in biology, let’s say the Designer(s) designed the system to macro-evolve. As in: due to intelligently configuring the initial starting modular components, NFV is all that is needed from then on. But if intelligence is initially required for NFV to begin to function, how could you call the mechanisms NFV in the first place, since foresight was obviously involved? BTW, this hypothetical scenario is a different type of “front-loading”, in that the front-loading is concerned with “designed to evolve via (otherwise) undirected mechanisms” instead of an “unrolling of a specific front-loaded plan”.

  57. Dr. MacNeill:

    It is perhaps instructive to point out that Darwin never used the term “random mutation” (nor “random” anything, for that matter) in the Origin of Species….For example, how would one go about showing that something is “genuinely random” as opposed to “pseudorandom”?

    Thanks for this wonderful insight. My eyes are now opened: realizing that there may be some sort of warped bell-curve to the mutations causes me to see that systems evolution would most certainly take place if the variations did not perfectly pass the chi^2 test.

    Dr. MacNeill, when are you going to give up pretending that the problem with us IDers is that we are stupid, that we “don’t get” the theory of evolution?

    You knock us any time we refer to you or your colleagues as “stupid”, yet, though you don’t use the term, your pet solutions imply that you think we are stupid. Direct and indirect belittling are equally bad.

  58. gpuccio:

    “In that category we can put deletion, insertion, frameshift mutations, inversion, and all random chromosome modifications. Where is the point? All of them are random mutations…”

    And similarly, a tasty Piña Colada a garden will not make, whether it gets help from hurricanes, typhoons, earthquakes, insect migration, continental drift, volcanism, etc. I will never bet my coconuts on that at all.

  59. Allen MacNeill says many things that I like hearing from an evolutionary biologist: that neo-Darwinian evolution is dead; that NS is not the engine of evolution (in fact, he sometimes even goes so far as to tell us, with his friend Will Provine, that NS does nothing); that the engines of evolution are actually the engines of variation, of which we are only now getting an understanding (notice that this means, in MacNeill’s estimation, that we are only now getting started understanding evolution); that there is a real difference between micro- and macro-evolution; that genetic changes can be the result of phenotypic changes; and so on.

    I even like the current admission that the word “random” is not accurate and doesn’t mean quite what it is intended to convey. As was the case with Darwin’s “spontaneous variation”, the point is, as MacNeill has just said, that variation has no foresight. Since the randomness claim has failed the mathematical test over and over again, and biologists have had to resort to saying that because of NS (which does nothing) evolution is “anything but random”, this word is going out of favour. It just is not empirically sustainable. What MacNeill and others mean is “without foresight”. Of course, this is a philosophical assertion, and not an empirical observation.

    However, even with all the things MacNeill says which I like reading, it kind of galls me to have him self-righteously lecture people about the bounds of civil discourse. This lecturing about civil discourse on William Dembski’s blog is especially ironic.

  60.

    On Dr. MacNeill vs. Mr. MacNeill:

    UDers should love Mr. MacNeill, if only for the fact that he has distinguished himself in biology education at a top-drawer institution of higher learning without earning a doctorate.

    Of course, he has focused on teaching biology for more than thirty years, and it’s not as though he’s taken it on himself to criticize the shaky axioms of probability theory or hacked methods of data compression.

  61. ps.
    Dr. MacNeill, I am sorry to hear about your recent illness and am glad that you are feeling better.

  62. Hi Turner Coates,
    Yes, that information seemed quite popular on Panda’s Thumb a couple of summers ago when Dr./Mr. MacNeill was offering his summer course on ID. Quite an episode that was.
    And he criticizes UD for incivility.

  63.

    Charlie,

    I think you have missed the sense in which natural selection “does nothing.” There is no active process of selection in nature. “Natural selection” merely designates a consequence of, to quote Allen’s first comment in this thread,

    fecundity (the implications of which were first pointed out by Malthus, and which virtually no one disputes today)

    That is, population size tends to grow geometrically (according to Malthus), and there must come a point where variants are in competition for scarce resources. Some variants will do better than others at surviving and gaining resources to reproduce successfully.

    To reiterate, the idea is that the variants in the population actively compete. Nothing in nature actively selects.
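    The Malthusian arithmetic behind this is small enough to sketch (the capacity and doubling rate are arbitrary illustrative numbers):

```python
# Geometric growth meets a fixed resource cap: the Malthusian setup
# in which differential survival becomes inevitable.
capacity = 10_000  # resources support at most this many individuals
pop = 2
generations = 0
while pop < capacity:
    pop *= 2       # doubling per generation (illustrative rate)
    generations += 1

# After just 13 doublings a single pair overshoots the cap; from then
# on variants necessarily compete, and "selection" is the outcome.
print(generations, pop)  # 13 16384
```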

  64. Turner Coates wrote:

    “Mazur is not the brightest bulb. The 2nd and 3rd quotations in #4 are Mazur’s own words, yet JP treats them as the views of dim-wit scientists.”

    Greetings Mr. Coates.

    I’ll be delighted to examine this in piecemeal fashion starting with the 2nd quote, which runs thus:

    Snowflakes, a drop of water, a hurricane are all such spontaneously organized examples.

    OK, Mr. Coates, just so I understand you clearly: are you saying that the analogies Mazur brings up above, regarding spontaneous organization, are purely her own inventions, having nothing to do with what other world-renowned scientists have been saying about the “highly ordered behavior” of certain inorganic matter? If this is true, then what are we to make of Ilya Prigogine’s (just to pick one famous scientist) vortex example?

  65. Just an observation:

    Every one of Dr. MacNeill’s engines of variation requires a very specific system and a non-random yet also non-lawful (high-information) template. Every engine of variation requires:

    1. A template of forms and functions, pre-set by a set of physical laws (of which our life-permitting laws are in the smallest percentile of all available combinations of laws that will actually allow life), from which the environment can subsequently choose.
    2. This template must be non-random in character, with linking structures of form and function (a template with high information content) in order for an evolutionary scenario to take effect.
    3. An instruction tape which is not defined by any laws so that it can freely store information.
    4. An information processing system to process the instruction tape and coax out the pre-set forms and functions that the environment can then select from.
    5. According to NFL and COI, there must be problem-specific information programmed (front-loaded/inputted) into the search procedure in order for any selection process to produce better-than-chance performance (“new” information), which is exactly what an evolutionary algorithm does.
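    The NFL point in item 5 can be illustrated with a tiny averaged-over-all-landscapes computation (a toy in the spirit of Wolpert and Macready’s result, not their formal setup):

```python
from itertools import product

def queries_to_success(order, good):
    # number of distinct points examined before hitting a "good" one
    for i, x in enumerate(order, 1):
        if good[x]:
            return i
    return len(order) + 1   # penalty when no good point exists

def average_performance(order):
    # average over every possible landscape on 8 points: each of the
    # 2**8 subsets of points may be the "good" set
    total = 0
    for good in product([0, 1], repeat=8):
        total += queries_to_success(order, good)
    return total / 2 ** 8

print(average_performance(list(range(8))))            # ascending order
print(average_performance([3, 7, 0, 5, 2, 6, 1, 4]))  # arbitrary fixed order
# both averages are identical: averaged over ALL landscapes, no search
# strategy outperforms any other
```

    Only problem-specific structure in the landscape (and a search matched to it) lets one strategy beat another.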

    In fact, over the past year, I’ve come to the conclusion that life is a network of artificial intelligence which artificially discovers the solution to problems.

    So, how does artificial intelligence operate? It only learns what it is programmed to learn. Outputted information can be no greater than inputted information. Thus, information is conserved. Can your AI robot servant do anything other than what he is programmed to do, as he searches through a solution space that you’ve provided for him with the problem-specific programming that you’ve inputted into him?

    “A learner … that achieves at least mildly better-than-chance performance, on average, … is like a perpetual motion machine – conservation of generalization performance precludes it.”

    –Cullen Schaffer on the Law of Conservation of Generalization Performance. Cullen Schaffer, “A conservation law for generalization performance,” in Proc. Eleventh International Conference on Machine Learning, W. W. Cohen and H. Hirsh, eds. San Francisco: Morgan Kaufmann, 1994, pp. 259-265.

    “The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.”

    –Leon Brillouin, Science and Information Theory (Academic Press, New York, 1956).

    Therefore, the actual engine of variation is the programming that goes into setting up an information rich system of artificial intelligence.

    The pseudo-random natural selection filter merely searches through a non-random search space, guided by previously inputted problem-specific information, to pre-set targets. Of course, from those targets (solutions) there can be some minor effects and cyclical variations, in accordance with a truly random search of the space immediately surrounding a pre-programmed potential solution (target).

    Whatever you want to call the latest version of evolution, if it is the hypothesis of the creation of information at consistently better-than-chance performance via the environment selecting from a random palette, it is the largest scientific hoax in history, with not a shred of evidence in its favor and all observation, experimentation with EAs, and information theorems against it.

    In information terms, the above type of evolution is literally an attempt to sell a perpetual motion free energy machine.

    Buyer, you’d better beware!

  66.

    No, JPCollado, let’s not play divide-and-conquer.

    Pigliucci cites epigenetic inheritance as one of the mechanisms that Darwin knew nothing about. He says there is mounting empirical evidence to “suspect” there’s a whole additional layer chemically on top of the genes that is inherited but is not DNA. Darwin, of course, did not even know of the existence of DNA.

    Lewontin asks whether it’s “suspect” or “know”?

    Nevertheless, these kinds of phenomena are part of what’s loosely being called self-organization, in short a spontaneous organization of systems. Snowflakes, a drop of water, a hurricane are all such spontaneously organized examples. These systems grow more complex in form as a result of a process of attraction and repulsion.

    Mazur is simply listing common examples of self-organization, with no apparent understanding that they are inappropriate in this context. No biologist would suggest a close analogy of anything going on in epigenesis to snowflake formation. In fact, Mazur quotes Kauffman emphasizing that snowflakes are not alive. And I have never before seen a claim that self-organization in hurricanes is due to “attraction and repulsion.” Have I missed out on something?

    Much better examples of self-organization in living things, comprehensible to laypeople, are flocking of birds and schooling of fish. The complex organization of the school/flock emerges when individuals follow a few simple “rules.” Though I haven’t seen it used anywhere, I think reassembly of a sponge colony after it’s been forced through a sieve is a good example. One may move from that to the observation that the human body is a colony in which 90% of cells are non-human. Clearly there is no discrete locus of control of the colony of organisms, and organization of the whole emerges through interaction of parts.

    I’m far from the best person in the world to explain such things, but at least I’ve provided some non-strawmen.

  67. Excellent post, gpuccio (#55). Allen MacNeill seems to be trying to have his cake and eat it too, in that he feels compelled to admit the untenability of the mainstream “modern synthesis” Darwinist view, in which natural selection is the only ongoing source of nonrandom “design” information. But he still tries at least to give the impression of retaining the notion that undirected natural processes, with no feedback from environment and phenotype to genome, are the only factors in evolution. It seems to be a sort of obfuscation, in which part of the scheme is to point out a greatly expanded list of sources and types of genetic variation, and the fact that many of them aren’t physically random with respect to the genome structure. The implication is that somehow such “nonrandom” variations can be a key source of information to direct adaptation and speciation.

    But all of his 47 types of known genetic variation, though they greatly exceed the number of simple point mutations, are still (as far as is known) inherently random with respect to fitness. The same applies to epigenetic variation. The only way the scheme is actually tenable is to invoke some form of Lamarckian process whereby the phenotype, the experience of the organism, etc. can somehow appropriately affect the genotype/epigenetic structures. This key point is disguised, and is the basic obfuscation.

  68. Magnan:

    How is it “obfuscation” when I have cited a recently published book (Jablonka & Lamb, Evolution in Four Dimensions) which discusses both the mechanisms and their implications in detail? Furthermore, Eva Jablonka is one of the invited participants in the Altenberg conference, and will undoubtedly elaborate on many of the points that I have already discussed in this thread.

  69. Here’s an interesting question for us all to think about: assuming (for the sake of argument) that we replace “random mutation” (RM, the traditional formulation) with “non-foresighted variation” (NFV, my preferred term), how would one go about determining whether or not a particular phenotypic variation were, in fact, “non-foresighted”? Obviously one could do so after the fact, when a new variation began to significantly increase in frequency in a population, but ex post facto explanations are not explanations at all, but rather descriptions (at best, and rationalizations at worst).

    To me, this is the same problem one faces when determining whether an alteration in phenotype is “random” — “random” with respect to what, exactly?

    I would be in favor of dumping the entire concept of “randomness”, especially in biology. All descriptions of “random” strike me as either ex post facto, or a form of metaphysical speculation.

    In other words, is there any way to determine empirically if a phenotypic variation is, indeed, non-foresighted when it first appears?

    Personally, I can’t think of one, but I’m willing to be convinced otherwise.
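    One way to see why this is hard: a completely deterministic (“pseudo”) generator can look statistically patternless. A minimal sketch, assuming a linear congruential generator with the common Numerical Recipes constants (the 20% tolerance is an arbitrary sanity bound, not a formal test):

```python
def lcg_digits(seed, n, m=2**32, a=1664525, c=1013904223):
    # minimal linear congruential generator reduced to decimal digits;
    # the high bits are used because the low bits of an LCG cycle quickly
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append((x >> 16) % 10)
    return out

run1 = lcg_digits(seed=42, n=10_000)
run2 = lcg_digits(seed=42, n=10_000)
print(run1 == run2)   # True: same seed, same sequence -- fully deterministic

counts = [run1.count(d) for d in range(10)]
print(all(abs(c - 1000) < 200 for c in counts))  # True: yet near-uniform
```

    Frequency statistics alone cannot tell this deterministic sequence apart from a “genuinely random” one; that is the empirical difficulty in a nutshell.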

  70. CJYman:

    You wrote:

    “…life is a network of artificial intelligence which artificially discovers the solution to problems.”

    How would one empirically distinguish between that and the following:

    “…life is a network of natural intelligence which naturally discovers the solution to problems.”

    or, for that matter:

    “…life is a network of intelligence which discovers the solution to problems.”

    Let me be clear about this: I am not necessarily advocating any of these. Rather, I’m asking how one would empirically distinguish between them (i.e. not speculate about their metaphysics)?

  71. I just read some of the reviews of Jablonka’s book, and they say Jablonka is a Lamarckian. Now, is Allen MacNeill a Lamarckian?

  72. Turner Coates:

    You wrote:

    “Clearly there is no discrete locus of control of the colony of organisms, and organization of the whole emerges through interaction of parts.”

    Now we may be onto something. As I explained in an earlier comment, this is precisely how many of the components of eukaryotic cells do what they do. What appears to be a fantastically complex system of “intelligently” interacting parts is, upon closer examination, a system of similar parts that interact according to a relatively small set of “interaction rules.” Like Conway’s Game of Life, a very simple set of rules can produce a very complex set of outcomes.
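    Conway’s Game of Life makes the point in a few lines: two rules, no central controller, and a glider that propels itself one cell diagonally every four generations (a standard sketch):

```python
from collections import Counter

def step(live):
    # rule 1: a dead cell with exactly 3 live neighbours is born
    # rule 2: a live cell with 2 or 3 live neighbours survives
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    return {cell for cell, n in neigh.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# after four generations the glider has copied itself one cell down
# and to the right: complex, coordinated behavior from local rules only
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```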

    This is what we are now just beginning to learn about in the study of the “laws of variation”, about which Darwin admitted he was entirely ignorant.

    And I agree that if such rules do, in fact, exist, they would constitute a kind of “front-loading”. But how would that be different from the “front-loaded” systems that Newton described in his Principia? Like him, “I make no hypotheses” about such things, asserting that they are quite literally outside the purview of the natural sciences, which require empirical verification.

  73. Turner Coates wrote

    “Nothing in nature actively selects.”

    Precisely; natural selection is an outcome, not a selective process (and certainly not a “creative force”). Darwin himself preferred the term “natural preservation” later in life, but by then his original term had become too deeply entrenched.

  74. “This lecturing about civil discourse on William Dembski’s blog is especially ironic.”

    That’s the blog where a federal judge was depicted with fart noises, the way my nine-year-old son likes to ridicule his classmates. That’s the same William Dembski who attacked me personally on this blog without provocation and attempted to ban me from this site, until other members of this community (including Sal Cordova, among others) came to my defense.

    For that matter, I am the person who became in/famous for inviting Hannah Maxson (founder of Cornell’s IDEA club and winner of the Casey Luskin award) to be a co-presenter in my evolution-design seminar course at Cornell. That’s the same Hannah Maxson with whom I have been corresponding for the past year, while she volunteers caring for infants in Mongolia (we’ve been jointly doing a critical reading of Menuge’s Agents Under Fire, and maintaining a civil and collegial relationship while doing so).

    Please read T. H. Huxley’s letter to Charles Kingsley at http://aleph0.clarku.edu/huxley/letters/60.html
    and learn how true gentlemen (and women) who profoundly disagree on these issues can and should treat each other.

  75. gpuccio wrote:

    “…cannot be predicted nor mathematically modeled, but only described as historical events, then it is not science, but [chronicle].”

    You are exactly right, and that is exactly what I am saying about macroevolution. Perhaps we were more honest when we called it “natural history”, the way Darwin and his colleagues did?

    However, I doubt whether anyone would argue that geology is not a “true” science, when it almost entirely lacks any fundamental underlying mathematical models. The same could be said for the science behind the Hertzsprung-Russell model of stellar evolution, which is almost entirely descriptive, rather than based on an underlying mathematical formalism.

    For that matter, most of developmental biology (including evo-devo) is descriptive and depends fundamentally on historical contingency (a developing embryo is an “historical” entity), yet it is one of the hottest disciplines in biology.

    The idea that a field of inquiry isn’t science unless it has some underlying mathematical model(s) is precisely the viewpoint that I am criticizing when I criticize the “modern evolutionary synthesis”. Yes, the mathematical models formulated by Fisher, Haldane, and Wright provided the basis for the “modern synthesis,” but more recent research has shown them to be inadequate models of biological reality.

    It seems to me to be doubly ironic that some of the members of UD should support the idea that “true science requires mathematical foundations” when this is precisely what I (as an evolutionary biologist) see as the problem with the “modern synthesis”.

  76. jerry asked:

    “…is Allen MacNeill a Lamarkian?”

    If I am, I’m in pretty good company. Lamarck asserted that “use and disuse” was the primary mechanism driving the evolution of such adaptations as long necks in giraffes. So did Darwin, beginning in the first edition of the Origin of Species. Indeed, by the sixth edition, he had expanded his views on the subject to the point that many of his arguments were indistinguishable from Lamarck’s.

    Weismann, not Darwin, rejected Lamarckian inheritance, for primarily metaphysical (rather than empirical) reasons. Anti-Lamarckism became one of the cornerstones of both Mendelian genetics and the “modern synthesis” of Fisher, Haldane, and Wright. However, in both cases the reasons were again primarily metaphysical, rather than empirical.

    As Jablonka and Lamb have pointed out, there is a whole suite of Lamarckian mechanisms now known and published in the scientific literature. As we find more of them, we are doing what science is supposed to do: change its theories in response to new empirical findings.

    So, you can call me a neo-Darwinian, neo-Lamarckian, evo-devonian supporter of four-dimensional evolutionary theory.

    Or you can call me Al…if you buy me a cellar-cool Sam Adams.

  77. Well, Al, I mean Dr. MacNeill, maybe I will take you up on it some day and buy you a couple of cold ones. I used to make the trek up Rt 17 five to six times a year to see my son when he went to Cornell, or to take my other son to play hockey in central and western New York.

    Now all we need is the fifth dimension, ID.

  78. Let’s be honest about the passions in this debate. For the average person like me, design simply screams from every corner of human experience. Darwinism is a desperate, scientifically naive, 19th-century attempt to explain the obvious away for philosophical and theological reasons (primarily the problem of evil).

    It all comes down to ultimate nihilism versus ultimate purpose, and nothing could be more meaningful or important in our lives.

  79. GilDodgen:

    Let’s be honest about the passions in this debate. For the average person like me, design simply screams from every corner of human experience.

    While it is true that design simply screams from every corner of the human experience, what if NFV+NS really does explain it all? Surely there are dozens of other examples within science that are counter-intuitive. Intuition, even “screaming from every corner of the human experience” cannot be the proof.

    Is a simple filter like NS sufficient intelligence to explain the development of DNA’s code? Surely the answer lies in a careful study of DNA itself.

    It is clear that Allen MacNeill is unprepared to provide a defense when systems-level evolution is presented. You and I both know the significance, the unlikelihood of evolvability, of two genes, a SNRP and a coding gene, producing dozens, nay hundreds, of working protein variants. Even the simple HAR1F gene is baffling — you can’t get half-way there from here, you’ve got to go all the way or not at all.

    Though design simply screams from every corner of human experience, the fact that should be of greater challenge to the detached scientist is that the data does not fit the model. NFV+NS doesn’t explain the data — end of story.

  80. Hi Allen,
    First of all, I must reiterate my earlier comment about my appreciating your commenting here. I generally agree with all Sal has had to say about you.
    But after your ill-advised attempt to lecture others on gentlemanly discourse I’m surprised that you chose to follow it up with the tu quoque and martyr defence you did.
    As well as being the Allen MacNeill referenced in your comment (and attacked by the Pandas for it) I continue to find it ironic to have the same Allen MacNeill come to Dr. Dembski’s blog telling others how they ought to behave when he had the following to say over at Telic Thoughts:

    MacNeill quoted by another commenter: I will agree with one assertion of Dembski and his ilk: there is a “cultural war” being waged in the popular media. It’s a war on science and the objective understanding of nature, a war that was declared by the enemies of science, by people like Phillip Johnson and William Dembski. And, as the old saying goes, the first casualty of war is the truth. It’s time for everybody on both sides of the issue to face the fact that Dembski and his cohorts are either profoundly deluded, or deliberate, bald-faced liars. My money’s on the clean-shaven hypothesis…

    MacNeill defending his position:
    I have also commented on some of the weaknesses of M. Behe’s arguments, but find him (unlike Dembski) to be a gentleman, a scholar, and a worthy opponent.

    What makes Behe a worthy opponent (and Dembski an unworthy one)

    And then:
    And indeed, I stand justly accused of the kind of behavior that I have criticized in Dr. Dembski. My students and friends will understand that I shall try to amend such behavior in the future.

    I know there’s a more recent example as well, but couldn’t find it as of this writing.

    And, speaking of Hannah and your summer course, aren’t you the same Allen MacNeill who had to shut his own blog down for a couple of days’ cooling-off-period after violating his own rules against ad hominems and calling somebody a liar?
    http://evolutionanddesign.blog.....omment-432

    I do recall when Dr. Dembski “attacked you personally” here on this blog, and I recall him saying that, given your past comments, you really were quite capable of taking care of yourself. Actually, come to think of it, didn’t Dr. Dembski say that he was tempted to ban you but didn’t want to give you a martyr-complex? I bet you have the link to set me straight.

    But thanks for telling me to read and learn from Huxley. Is there a quote in there about the phrase “lie for Jesus”?

    Or maybe we can get back to enjoying your insights on variation and selection and let those with smaller beams critique the specks in the eyes of others.

    ps.
    I’m also enjoying your take on the history of the biological sciences and the role metaphysics has often taken – even superseding empirical observations. I’ve often suspected as much.

    Thanks again.

  81. jerry

    The house I grew up in and still spend summers in is ON old Route 17 in New York one door down from the U.S. 219 junction. The house actually on the junction (still standing and occupied by an elderly distant cousin) was built by my great grandfather in the 1800′s. He brewed some of the best beer around and there’s still some hops growing wild near the house, hops that he planted close to 120 years ago.

  82. Allen

    Well, I’m sure glad there are a few other souls out there who realize Darwin was a follower of Lamarck’s inheritance of acquired characters. That actually makes the Origin of Species plausible IMO (but it still doesn’t make non-intelligent origin of life credible). That said, Lamarckism goes full tilt against the central dogma of molecular biology: information flows in only one direction, from DNA to RNA to protein.

    What might be called the central dogma of intelligent design is that only with proactive involvement of an intelligent agent can complex specified information flow in opposition to the central dogma of molecular biology.

    How is it you propose information flows from the environment into DNA other than through the roundabout means of random genetic mutations filtered by natural selection or through intelligent agency?

  83. Hi Allen,
    On Dr. Dembski’s “unprovoked” “attempted” banning of you:
    You need not search for that link as I think I have found it. At least this is the one I remembered.
    Is this where you thought Dr. Dembski had “attempted” to ban you, and would have if not for Sal’s speaking up for you?
    http://www.uncommondescent.com.....ment-68613

    Or do I have it wrong? Is this not the incident you had in mind? Since I don’t see Sal’s name anywhere on the thread I just might have the wrong one?

    As for Dr. Dembski being unprovoked (and as an example of your gentlemanly discourse):
    http://scienceblogs.com/dispat.....l_fisk.php
    This link has your “bald-faced liars” line as well as many other accusations of deceit, from December 2005.

  84. Allen

    You wrote

    Many of the comments and questions in this and other threads seem to indicate to me that most of you consider genetic information to be the sine qua non of evolutionary biology,

    Don’t count me in that number. In my view as a systems design engineer I find it bordering on ludicrous that a system as complex as the human body can be fully described by a mere gigabyte (the information carrying capacity of 3 billion nucleotides). Clearly though the information IS contained within a single cell (all the information needed to build a chicken resides inside the shell of the egg). I believe not just some but rather a majority of the heritable information required to construct a human body must reside external to the DNA molecule but still inside the cell wall of the egg. In another thread I suggested that the structure of the cytoskeleton might be where some of that epigenetic information resides. When looking for the non-genetically coded information (the genetic code is one dimensional) think of 3 dimensional topology as a potential coding mechanism and see where it leads.
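    [The “mere gigabyte” figure above can be checked with back-of-envelope arithmetic (my own illustrative sketch, not part of the comment): each nucleotide is one of four bases, i.e. two bits of raw storage capacity, so 3 billion nucleotides come to roughly three-quarters of a gigabyte.]

```python
# 3 billion nucleotides, each one of 4 bases (A, C, G, T),
# i.e. 2 bits of raw storage capacity per nucleotide.
nucleotides = 3_000_000_000
bits = nucleotides * 2
gigabytes = bits / 8 / 1e9
print(f"{gigabytes:.2f} GB")  # prints "0.75 GB"
```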

  85. Allen, Charlie, and others:

    Could we dispense with the history of incivility? We’re all guilty of it so let he who is without sin cast the first stone. Let’s just move along with the topical discourse and in the process not repeat our past ad hominem indiscretions.

  86. Sure thing, Dave.

  87. Allen

    why do insects all have just six legs

    Because that’s an artificial classification. Insects by definition have six legs. If they have more than six, they’re an arthropod but still have all the other identifying characteristics of insects. A butterfly is an insect, but its larval form has dozens of legs. I don’t get the point you’re trying to make.

  88. Regarding Stuart Kauffman, there seems to be a forgotten peer-reviewed paper highly critical of his work. See: William Dembski and 3 IDers cited in a significant OOL peer-reviewed article by Trevors and Abel.

    For self-organization to be believable, it should be in abundant evidence empirically. Snowflakes and salt crystals evidence self-organization with such an abundance, biotic materials do not. Therefore spontaneous self-organization is not believable with respect to biology.

    In fact there is strong empirical evidence, and sound theoretical reason, to suppose that biotic materials are not amenable to self-organization. It appears the Designer chose exactly those materials that would resist explanations attributable to self-organization.

  89. (Sal, you just beat me to the reference to Trevors and Abel.)

    Turner Coates (66): “Much better examples of self-organization in living things, comprehensible to laypeople, are flocking of birds and schooling of fish … reassembly of a sponge colony after it’s been forced through a sieve is a good example. [etc.]

    “Self-organization” is a nonsense term. “Self-ordering” should be used instead.

    D.L. Abel, J.T. Trevors, “Self-organization vs. self-ordering events in life-origin models”, Physics of Life Reviews (2006):

    Abstract

    Self-ordering phenomena should not be confused with self-organization. Self-ordering events occur spontaneously according to natural “law” propensities and are purely physicodynamic. Crystallization and the spontaneously forming dissipative structures of Prigogine are examples of self-ordering. Self-ordering phenomena involve no decision nodes, no dynamically-inert configurable switches, no logic gates, no steering toward algorithmic success or “computational halting”. Hypercycles, genetic and evolutionary algorithms, neural nets, and cellular automata have not been shown to self-organize spontaneously into nontrivial functions. … Inanimacy cannot “organize” itself. Inanimacy can only self-order. “Self-organization” is without empirical and prediction-fulfilling support. No falsifiable theory of self-organization exists. “Self-organization” provides no mechanism and offers no detailed verifiable explanatory power. Care should be taken not to use the term “self-organization” erroneously to refer to low-informational, natural-process, self-ordering events, especially when discussing genetic information.

  90. Hi Dave, Allen, Charlie, GP, TH et al:

    First, I see that Dr/Mr Allen MacNeill has been ill: I wish you a speedy and complete recovery.

    Now also I see that his thread has had quite a side-discussion on civility or the want thereof. On this, I must say I agree with DS that there is need to move beyond ad hominems and the like. On BOTH sides; though on my observation across years and many contexts — including several personal experiences [up to and including my being called a Nazi] — it is the Evolutionary Materialism advocates who as a rule are the by far and away most guilty, up to and including a growing number of cases of unjustified career busting.

    But equally, ID thinkers and advocates should note on how above A used the slipping into uncivil remarks to divert from having to address the issue of the credible ORIGIN of genetic and wider biologically functionally specified, complex information [FSCI, a subset of CSI] squarely on the merits.

    Further, it is indubitable that AMacN and co have indeed much resorted to all sorts of uncivil behaviour, as was abundantly documented above. So, Allen: if uncivil behaviour is unacceptable when it targets you, why have you and others of your ilk used it so much in speaking to/of ID thinkers? Do you think such adds anything positive to the discussion? [Onlookers, have a look here at my recent update remarks on objectionism, in my online discussion of selective hyperskepticism.]

    But then, that is all on a side track. On the merits — and I can freely bring these back into focus as I have not participated in any ad hominems:

    1] Information generation at OOL:

    fr TH, 22 supra: Regarding Mr. MacNeill’s apparent belief that universal probability bounds don’t apply to biological information because “natural selection isn’t random”, may I ask how his naturalistic theories overcome the improbable series of allegedly un-guided steps that got us from a pile of rocks to abiogenesis, before RM+NS could have even been called into play?

    By what means did the first self-replicating information processor create itself and then proceed to write upon itself the minimum information necessary to begin synthesizing proteins in the first functioning cell?

    2] Body-plan level macroevolution:

    22, again: Could Mr. MacNeill please specify the EXACT RM+NS sequence of small, linear, successive steps that got us from the first life forms to the kind of complexity the Encode project is talking about?

    3] The UPB:

    I note that Kauffman has put his finger on the core of the issue in his response to the journalist, though he did not elaborate enough to show the debt owed to a certain much-despised WmAD (against whom the rhetoric of spite has descended to the utterly childish level of maliciously and contemptuously distorting his surname):

    Well there’s 25,000 genes, so each could be on or off. So there’s 2 x 2 x 2 x 25,000 times. Well that’s 2 to the 25,000th. Right? Which is something like 10 to the 7,000th. Okay? There’s only 10 to the 80th particles in the whole universe. Are you stunned?

    Oh, yes, re TC at no 54: on the premise that each of the changes is in effect sufficiently independent of the others so that the entire potential config space is accessible [and Kauffman is highly qualified to know that], each of the on/off switches becomes a multiplicative term: 2 * 2 * . . . 25,000 times is 2^25,000 ~ 5.62*10^7,525.
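    [As a quick sanity check on the figure above (my own illustrative snippet, not part of the original comment), 2^25,000 can be put into scientific notation in a few lines:]

```python
from math import log10

# 25,000 genes, each either "on" or "off", give 2^25,000 possible
# network states.  Express that number in scientific notation:
exponent = 25_000 * log10(2)      # log10(2^25000), about 7525.75
mantissa = 10 ** (exponent % 1)   # leading digits of the result
print(f"2^25,000 ~ {mantissa:.2f} x 10^{int(exponent)}")
# prints "2^25,000 ~ 5.62 x 10^7525"
```

    [This confirms 5.62*10^7,525 and shows the quoted “10 to the 7,000th” in the interview was an understatement.]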

    I hardly need to add the basis for the UPB, here from that oldie but goodie by Dan Peterson on the RH column:

    Dembski has formulated what he calls the “universal probability bound.” This is a number beyond which, under any circumstances, the probability of an event occurring is so small that we can say it was not the result of chance, but of design. He calculates this number by multiplying the number of elementary particles in the known universe (10^80) by the maximum number of alterations in the quantum states of matter per second (10^45) by the number of seconds between creation and when the universe undergoes heat death or collapses back on itself (10^25). The universal probability bound thus equals 10^150, and represents all of the possible events that can ever occur in the history of the universe. If an event is less likely than 1 in 10^150 [or ~ 2^500], therefore, we are quite justified in saying it did not result from chance but from design. Invoking billions of years of evolution to explain improbable occurrences does not help Darwinism if the odds exceed the universal probability bound.
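    [The multiplication in Peterson’s summary, and the “~2^500” equivalence, can be checked directly (an illustrative sketch of my own; the constants are the ones quoted above):]

```python
# The universal probability bound arithmetic from the quote,
# checked with exact Python integers.
particles = 10 ** 80     # elementary particles in the known universe
transitions = 10 ** 45   # max quantum-state alterations per second
seconds = 10 ** 25       # seconds until heat death / collapse
upb = particles * transitions * seconds
assert upb == 10 ** 150

# The "~2^500" figure is good to within half an order of magnitude:
# 2^500 is about 3.27 x 10^150.
assert 10 ** 150 < 2 ** 500 < 10 ** 151
```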

    If we want to suggest that, in the relevant config spaces, functionality clusters in islands, that leads to isolation of those islands in the space. My simple way to address that is to say the islands are up to 10^150 states in size, as a reasonable upper bound. So, instead of 500 bits of storage capacity, we go up to 1,000.

    In such an expanded space, it is then essentially impossible to access the shores of these islands of function on the gamut of the observable cosmos. And, the space we are looking at is 2^25,000 >> 2^1,000.

    4] But NS is not random so this is irrelevant . . .

    Not so fast. First, let’s take in a 101 level look at NS, via the ever so handy materialism-leaning “prof” Wiki:

    Natural selection is the process by which favorable heritable traits become more common in successive generations of a population of reproducing organisms, and unfavorable heritable traits become less common. Natural selection acts on the phenotype, or the observable characteristics of an organism, such that individuals with favorable phenotypes are more likely to survive and reproduce than those with less favorable phenotypes. If these phenotypes have a genetic basis, then the genotype associated with the favorable phenotype will increase in frequency in the next generation. Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in the emergence of new species.

    The “more likely” part is a giveaway.

    For, indeed, there are cases of mutations etc that are incompatible with life function and the cell or the organism dies. That is indeed not a matter of chance per se, it is a matter of destruction of biofunctionality.

    (H’mm, way back in my radiation and safety course, the key point was that radiological damage hits especially water molecules, and if hard enough the cells die off and you get radiation sickness that can lead to death. At lower levels of damage through this plainly chance-based, effectively random process, longer term harm comes into play, including of course cancer and genetic mutation.)

    But, at less extreme levels of undirected variation, the issue of chance does come in: hence, more/less likely to survive and reproduce in ecological niches and in contexts of existing competitors.

    Thence, population shifts contingent on — on an evolutionary materialist scenario — many factors set by chance. [Where high contingency is at work, chance or intelligence are the only reasonable alternatives, necessity being per definition, about regularities not contingent variability. Put together heat, fuel and oxidiser and reliably you have a fire.]

    So, natural selection is a FILTER, one that is partly deterministic and partly chance-based. [Whether we use various random or pseudo-random or chaotic models to get to the chance is immaterial to the point.]

    5] Variation and information-generation

    Obviously NS is not the source of information, it is a biofunctionality-based absolute or relative filtering out mechanism.

    That brings us back to the other half of the NDT-as-extended dynamic:

    CHANCE variation [CV] + NS -> claimed OO body plans and species etc

    Bio-information or potential bio-information, taking in OOL, can only come from chance or mind, as it is highly contingent. However, given the sorts of complexity, functionality, and sensitivity to perturbation involved, we are well beyond any reasonable version of the UPB, whether at OOL [300,000 - 500,000 4-state base pairs plus the epigenetics] or at body-plan origination.

    For the latter, a good indicator is in this from Meyer’s famous Proceedings article:

    One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6

    Some serious questions need some serious answers — answers that I observe over years of trying to get them, are persistently conspicuous by absence.

    GEM of TKI

  91. That’s the blog where a federal judge was depicted with fart noises

    Actually, the animation was not here, but at another blog called Overwhelming Evidence. I think that is one of Denyse’s blogs, and I thought it was a clever way to introduce ID to a younger generation. I thought the enhanced version was going to be restored, but only the cleaned-up version was still there the last time I checked.

  92. Poachy:

    There is a substantial issue on the table. Kindly address it. [E.g. cf. nos 88 - 90 just above.]

    GEM of TKI

  93. Charlie wrote:

    “But thanks for telling me to read and learn from Huxley. Is there a quote in there about the phrase “lie for Jesus”?”

    No, and the fact that you asked this clearly indicates that you didn’t read it.

  94. DaveScot wrote:

    “Clearly though the information IS contained within a single cell (all the information needed to build a chicken resides inside the shell of the egg).”

    I think it would be more accurate to say that the information needed to build a newly hatched chicken “resides inside the shell of the egg.” However, even that assumption is just that: an assumption. And it is probably wrong insofar as the egg itself does not sit in a vacuum, but rather inside its mother, and then inside a nest. As just one example, without the body heat from the mother (or an incubator), a chicken egg with all of its self-contained information becomes, not a cute little ball of animated yellow fluff, but a nasty, smelly mass of decomposing slime in a surprisingly short time.

    What you are arguing (and I agree) is that genetic information alone isn’t close to enough to explain how a fertilized unicellular zygote becomes a fully integrated cooperative community of 100 trillion connected individuals…wait, that leaves out the mitochondria and the undulipodia. Okay, assuming that every cell has at least one kinetosome and 100 mitochondria, the real number is more like 20 quadrillion.

    IOW, if one leaves out the environment within which the organism develops and lives, one is blind to the real causes for the organism’s existence.

  95. Charlie wrote:

    “And, speaking of Hannah and your summer course, aren’t you the same Allen MacNeill who had to shut his own blog down for a couple of days’ cooling-off-period after violating his own rules against ad hominems and calling somebody a liar?”

    Yes, and I credit Hannah (and my friend and mentor, Will Provine) with gently but persistently teaching me to attack people’s ideas, but not their person. Here’s what I wrote at the end of that course:

    “One person in particular deserves special mention: that is, of course, Hannah Maxson, without whom I suspect we might not have achieved anything like what we eventually did. She helped us all immensely in understanding and wrestling with these issues, faithfully attended every class session despite not being an enrolled student (the only “invited participant” from either side to do so), consistently presented an example of how to respectfully but forcefully argue for one’s positions, and spent uncounted hours setting up and moderating the two websites associated with this course, while at the same time holding down a demanding day job. For all of us, I humbly say “thank you, Hannah.”

    Unfortunately, the web preserves every gaffe we have ever committed, without attaching any subsequent apologies, retractions, or other modifications. So, if it will make you feel better (and Dr. Dembski, this is for you as well), I most humbly apologize for every time that I attacked you as a person, rather than your ideas. As I have said before, I assume (now) that we are all committed to an unbiased search for understanding about how the universe operates. It is only by adhering to three simple rules that we can accomplish this:

    1) Never attack people

    2) Always attack their ideas (and be especially critical of your own)

    3) If someone attacks your own ideas, and succeeds in convincing you of their position, say so immediately and move on

    Here’s what Huxley said about this in the letter I cited earlier:

    “Surely it must be plain that an ingenious man could speculate without end on both sides, and find analogies for all his dreams. Nor does it help me to tell me that the aspirations of mankind–that my own highest aspirations even–lead me towards [a particular doctrine referenced in the letter]. I doubt the fact, to begin with, but if it be so even, what is this but in grand words asking me to believe a thing because I like it.

    Science has taught to me the opposite lesson. She warns me to be careful how I adopt a view which jumps with my preconceptions, and to require stronger evidence for such belief than for one to which I was previously hostile.

    My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonise with my aspirations.

    Science seems to me to teach in the highest and strongest manner the great truth which is embodied in the Christian conception of entire surrender to the will of God. Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing.”

    Let me repeat the core of this idea:

    “She warns me to be careful how I adopt a view which jumps with my preconceptions, and to require stronger evidence for such belief than for one to which I was previously hostile.”

    ’nuff said.

  96. bFast wrote:

    “It is clear that Allan MacNeill is unprepared to provide a defense when systems level evolution is presented.”

    Indeed, I am not conversant with systems level evolution at all. My specialties are the evolution of human sexual behavior, the evolution of the capacity for religious experience, and the history and philosophy of biology, especially evolutionary biology.

    Could you please recommend a reference that a complete tyro might be able to learn from without tearing out his already very thin hair?

  97. DaveScot wrote:

    “Because that’s an artificial classification. Insects by definition have six legs. If they have more than six they’re an arthropod but still have all the other identifying characteristics of insects. Even in the case of insects a butterfly is an insect but its larval form has dozens of legs. I don’t get the point you’re trying to make.”

    The point I’m trying to make is that, given that the larval forms of many insects have many pairs of “legs” (I don’t want to get into a semantic argument about legs versus parapodia), how is it that when they become adults they all have only three pairs, whereas the Chelicerata have four pairs, the Crustacea have many pairs, and tetrapod vertebrates only two pairs (yes, the name says so, but that’s not the causative reason)?

    And why pairs? There are no animals of which I am aware that have unpaired appendages. Why not? What is the “constructive logic” whereby appendages develop only in pairs, and why, in most cases, is the number of pairs strictly regulated?

    I strongly suspect that the answers to these questions cannot be simply reduced to “that’s what the genes prescribe”. As I tell my students, genes don’t “do” anything at all. DNA just sits there, as inert as a blob of melted glass. It takes a whole functioning organism living in a hugely complex and interrelated environment to translate the genetic information in DNA into a functioning organism. Focusing exclusively on the genetic material is to commit the same fallacy as (some of) the founders of the “modern evolutionary synthesis.”

    We know better now.

  98. There is a substantial issue on the table. Kindly address it.

    KF, officious tone notwithstanding, your role at this blog is betrayed by the fact that your comments don’t appear with a white background. If my defense of Dr. Dembski is inappropriate I trust he will tell me so and I will respect his wishes.

    {DLH kairosfocus is one of the best contributors here. Heed him well! Stay on focus and don’t distract the thread.}

  99. With respect to natural selection, irrespective of questions of ID and irreducible complexity, Kimura made sound mathematical arguments about the limits of the power natural selection can exert.

    If a population of 100,000 has 100 individuals each carrying a selectively advantaged trait, it would be essentially impossible for natural selection to preserve the 100 selectively advantaged individuals or their traits. In such cases random drift is a far more accurate model for the evolution of the population. Think, then, of the implications for the evolution of complex proteins or binding sites in such a scenario!
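    Kimura's point can be made concrete with his diffusion approximation for the probability that an allele fixes rather than being lost to drift. The sketch below uses the population size and starting frequency from the example above; the selection coefficients are illustrative assumptions, not values from the comment:

```python
import math

def fixation_prob(N, s, p):
    """Kimura's diffusion approximation for the probability that an allele
    at frequency p with selective advantage s eventually fixes in a
    diploid population of effective size N."""
    if s == 0:
        return p  # neutral allele: fixation probability equals its frequency
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

N = 100_000
p = 100 / 100_000  # 100 carriers in a population of 100,000
for s in (0.0, 1e-5, 1e-4, 1e-3):
    print(f"s = {s:g}: P(fixation) = {fixation_prob(N, s, p):.4f}")
```

    Unless s is fairly large, the allele is far more likely to be lost than preserved, which is the sense in which drift, not selection, dominates the fate of rare variants.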

    Kimura argued successfully for non-Darwinian evolution in the MAJORITY of molecular evolution. I think he gave an obligatory salute to Charles Darwin by saying his math didn’t apply to adaptive traits, but in truth it does! Masatoshi Nei (an NAS member) has taken the next step and argued that Kimura’s arguments against Darwinian evolution at the molecular level are extensible to everything else, including adaptation.

    Salthe (named in Mazur’s article) cited a critique by Lewontin (also named in the article) arguing that Darwinian evolution cannot even be stated clearly enough mathematically to be tested. The main problem is the inability to define fitness. If a theory cannot even measure its central quantity, one has to wonder if it’s scientific. [See the Santa Fe Bulletin, Winter 2003, "Four complications in understanding the evolutionary process", which regrettably is no longer online.]

    Central to modern Darwinism is the idea that mutations and variations are random with respect to fitness. The math shows that if this is true, natural selection does not have the population resources to shape a trajectory toward more than a few adaptations at a time. This was known as Haldane’s dilemma, and Kimura and Ohta appealed to Haldane in defense of their neutral theory.

    I have not even mentioned that random mutation poses the problem of weeding out bad mutations, not just favoring good ones. See: Nachman’s U-Paradox.
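    For readers unfamiliar with the argument, the back-of-envelope version of the mutation-load problem runs as follows, assuming new deleterious mutations arrive Poisson-distributed with mean U per offspring (the values of U below are illustrative, not from the comment above):

```python
import math

def mutation_free_fraction(U):
    """Fraction of offspring carrying no new deleterious mutation, when the
    count of new deleterious mutations per offspring is Poisson with mean U."""
    return math.exp(-U)

# Haldane-style load argument: if, at equilibrium, only mutation-free
# offspring ultimately persist, each parent must leave about e^U offspring
# on average just to keep the population from declining.
for U in (0.1, 1.0, 3.0):
    print(f"U = {U}: mutation-free fraction = {mutation_free_fraction(U):.3f}, "
          f"offspring needed per parent = {math.exp(U):.1f}")
```

    At U around 3, fewer than 5% of offspring carry no new deleterious mutation, which is why high estimates of the human deleterious mutation rate are treated as paradoxical.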

    Thus if one begins to suggest that mutations (whatever mechanisms create them) are not random with respect to adaptive fitness, one is skirting near the possibility of front-loading, which is right around the corner from ID.

  100. Both Stuart Kauffman and Massimo Pigliucci favor the idea that somehow the laws of physics can explain biology. Without going into detail, this would be like trying to explain the origin of software solely in terms of the hardware it runs on.

    What Trevors and Abel and others have pointed out in various papers is that biological phenomena cannot be reduced to explanations which appeal solely to the laws of physics. Thus self-organization cannot possibly work!

    Further, from an empirical standpoint, if John Sanford’s hypothesis of Genetic Entropy is confirmed by improved sequencing technologies (such as Solexa), it will empirically disconfirm the possibility of self-organization, or even of any sort of “new synthesis” the Altenberg 16 might suggest.

    Thus I hold out hope that hard-nosed empiricism can settle the question once and for all.

  101. Sal et al:

    I am uncomfortable with the concept of “front loading” — “front” with respect to what? Time, I suspect, as in Michael Behe’s suggestion in DBB that the first cells were “front-loaded” with all of the information necessary to produce all of the biodiversity we see today.

    This suggestion is absurd, if by “information” Behe (and MikeGene and other “front-loaders”) mean genetic information. Once again, genomes do not construct organisms; organisms construct organisms, in concert with their environment. As I hope should be obvious from some of my previous comments, the environment within which an organism develops contains immensely more information than can possibly be encoded within the organism’s genome. This “external” information is absolutely essential for regulating development.

    As just one example, the homeotic genes that produce the first orientation “fields” in a developing animal require positional information to operate. Part of this positional information is an axis from “top” (i.e. dorsal) to “bottom” (i.e. ventral). “Top” and “bottom” are positional information that requires, at a bare minimum, a gravitational field (there is no “top” and “bottom” in a microgravity environment). Hence, the operation of the homeotic genes absolutely requires “external” information.

    So, I would prefer to call the kinds of information we are talking about “external”, or even better “holistic”, as they include relational information both within and around the organism (and this information also includes and requires sequential time as another axis).

  102. Sal wrote:

    “Further, from an empirical standpoint, if John Sanford’s hypothesis of Genetic Entropy is confirmed by improved sequencing technologies (such as Solexa), it will empirically disconfirm the possibility of self-organization or even any sort of “new synthesis” which the Altenberg 16 might suggest.”

    Will Provine and I invited John Sanford to make a presentation in our evolution course at Cornell last fall. He did so, and was very grateful to us for our respectful treatment of him. However, I attacked his ideas directly using the same arguments I have in this thread. Ironically, Sanford bases his models on precisely the same outdated assumptions that formed the mathematical basis for the “modern evolutionary synthesis”:

    1) that genes are the only significant “causes” of the phenotypes of organisms;

    2) that changes in genes are the only significant “causes” of changes in the phenotypes of organisms;

    3) that the only way that traits can be inherited is via the inheritance of genes that code for them; and

    4) that, given the foregoing, it is impossible for changes in genes alone to explain the origin of biological diversity.

    I agree, and for the same reasons that I assert that the “modern evolutionary synthesis” is now outmoded. For reasons of mathematical convenience, it reduced the “causes” of evolution to two mechanisms (natural selection and genetic drift) operating on a single medium of inheritance (Mendelian, and later molecular inheritance). We now know that both of these are inadequate to explain observed patterns of development and descent with modification.

    This is why I am advocating that we all recognize that the “modern synthesis” has for at least fifty years been gradually replaced by what could more precisely be called the “evolving holistic synthesis”, one that recognizes at least four different modes of evolutionary change (i.e. genetic, epigenetic, behavioral, and symbolic) and includes a much larger role for information exchanges between organisms and their environment.

  103. Sal wrote:

    “Both Stuart Kauffman and Massimo Pigliucci are favoring the idea that somehow the laws of physics can explain biology.”

    If this is in fact the case, I must strongly disagree with Kauffman and Pigliucci. As Ernst Mayr forcefully asserted, biology is characterized by emergent properties that cannot be reduced to physical laws (unless those laws are modified to include them).

    Indeed, I have a huge problem with the whole idea of “reduction” in science, and especially evolutionary biology. As I have been discussing in my correspondence with Hannah, I think the term “reduction” is itself part of the problem. I would prefer the term “transformative expansion” rather than reduction.

    For example, consider the so-called “reduction” of chemistry to physics. Physical chemistry (“Honk if you passed P Chem!”) doesn’t so much “reduce” chemistry to physics as it “expands” both physics and chemistry to explain phenomena that neither one can explain alone. The elegant equations that were developed to “explain” the orbital structure of the hydrogen atom don’t work for more complex atoms, and certainly don’t apply to molecules in any meaningful way. Why not? Because atoms with more subatomic particles (and molecules composed of many atoms bonded together) have properties that isolated atoms do not have. These properties are emergent properties, and as such cannot be “reduced” to anything, but rather can only be explained by “expanding” our previous understanding and “transforming” it on the basis of new empirical knowledge.

    This is what makes science so fascinating; it never “holds still”. New discoveries are constantly forcing us to revise our old models of how nature is put together and how it operates.

    “The ‘modern synthesis’ is dead; long live the ‘evolving synthesis!’”

  104. Sal wrote:

    “Thus I hold out hope that hard-nosed empiricism can settle the question once and for all.”

    So do I; while mathematical models have their place in science, they rise or fall on whether the universe actually operates in a way that is reflected in those models. That is, the math doesn’t “run” the universe; math simply describes the universe in ways that are irreducibly inadequate.

    So (to quote “Thought Provoker” at Telic Thoughts) “Let’s do science!” Let’s formulate hypotheses about how something in nature is constructed and operates, let’s formulate predictions that flow from such hypotheses, then design experiments or further observations that will reliably either confirm or disconfirm such predictions, and then let’s publish the results and argue publicly about what they mean. And while we do so, let’s treat each other as if there were “that of God in every person,” shall we?

  105. Dr. MacNeill,

    It is a delight, as always, to hear from you, and along with others I hope you are well and getting better.

    The environment has a lot of the information necessary to build a system, but I do not think it is sufficient. Aerospace engineers need to gather large amounts of information from the environment about the atmosphere; this information is necessary, but I do not think it is sufficient, to build an airplane.

    Even if the environment has large amounts of information, it requires machinery (or intelligence) to translate the information into something useful in the construction of systems.

    I have no doubt that environmental information is used by organisms to drive their evolution. I think James Shapiro is doing great work in this area, as are people studying developmental plasticity. Shapiro was actually so bold as to suggest cellular intelligence and thus finds a middle ground in the ID debate. See: Bacteria as Engineers.

    My only issue with Shapiro is that if an organism’s own intelligence shapes its evolution, from where did the first intelligence originate?

  106. And for what it’s worth, the only way I have ever driven to the City (and the only way I ever will) is down Rte. 17 (soon to become I-86). I once worked a whole summer in a field crew surveying a mountain for a timber company outside of Hancock. Ate breakfast every morning at the Hancock Diner, where George Ivanitsky was the most efficient and graceful short-order cook I have ever had the delight to watch.

    Besides, the Roscoe Diner isn’t on I-81, and I don’t like driving through Scranton…

  107. Hi Sal:

    So that we’re all on the same page, would you please define “intelligence” in a way that one might confirm its operation empirically (i.e. without reference to any “hidden” — that is, unobservable — qualities)?

  108. Hi Allen:

    I do not have an explicit definition for intelligence, any more than I have a definition for life. In the realm of logic, mathematics, and physics we call these undefined terms or primitives. I prefer to let intelligence be an undefined term or primitive. Hofstadter, in his book Gödel, Escher, Bach, goes into the difficulty of defining intelligence explicitly. [Hofstadter, by the way, was a good friend of Dennett.]

    However, merely because we leave a term undefined does not mean that we cannot define characteristics which are sufficient to say something exists. For example, even though no one in the world can say sufficiently what life is, a doctor has a set of identifying criteria to rule on whether someone is dead or alive.

    In like manner there are identifying criteria for intelligent actions by intelligent agents.

    One such criterion is the artifacts left behind by an intelligence, i.e.

    1. Beaver Dams
    2. Bird Nests
    3. Human Houses

    At issue is whether evolutionary processes can create such artifacts.

    My personal view is that insisting on whether intelligence is required to do this or that complicates the issue.

    A more empirical approach is to ask whether evolutionary processes can create artifacts that have strong analogy to engineered artifacts in our technological world. Further, is the perception of these analogies merely a postdiction, like seeing faces in the clouds? These are the sorts of things subject to empirical and theoretical investigation without having to make recourse to arguments about intelligent causation.

    Even though I believe in ID, proof in the ultimate sense is hard. For example, can anyone formally prove he is not the only conscious intelligent being in the universe?

    That said, we can at least demonstrate the existence of improbable analogies in life to engineered artifacts. That was what the book The Design Inference was about. That part can be subject to empirical investigation.

    Further, it can be theoretically and empirically investigated whether evolutionary processes can create structures analogous to human artifacts. My favorite example is the Turing Machine (computer), which we find in the modern world of technology as well as in the cell.

    Discussion of ultimate causation is another story, and perhaps lies outside the bounds of direct empirical, operational science.

  109. I agree, and for the same reasons that I assert that the “modern evolutionary synthesis” is now outmoded. … We now know that both of these are inadequate to explain observed patterns of development and descent with modification.

    This is why I am advocating that we all recognize that the “modern synthesis” has for at least fifty years been gradually replaced by what could more precisely be called the “evolving holistic synthesis”, one that recognizes at least four different modes of evolutionary change (i.e. genetic, epigenetic, behavioral, and symbolic) and includes a much larger role for information exchanges between organisms and their environment.

    In regards to the modern synthesis I think that ID successfully refutes it. But even if ID is rejected at the outset or is not included in considering the evidence it should now be obvious that the modern synthesis is an inadequate model of biological reality. So now the real question is whether ID holds true in regards to the “evolving holistic synthesis”. I don’t think anyone could say for certain at this point; it’s too early. It’s a different question with a potentially different answer.

    I’ve said this before over the last couple years but I think that ID proponents have been focusing way too much on the modern synthesis. Part of this focus is due to there still being so many supporters of the modern synthesis. And most of the public discussion still involves the modern synthesis. But I say it’s dead, it’s gone, let’s move on and ignore those supporters.

    Back in #56 what I was trying to say was that BOTH ID and the “evolving holistic synthesis” could turn out to be true. (I’m about to get in trouble with everyone… ;) ) In order to function, the “evolving holistic synthesis” requires OOL, which is its own separate question. Dembski’s recent work shows that in order to find targets in a search space, active information is required. Besides “directed front-loading” (what I’m calling Behe’s and Mike Gene’s hypothesis, to differentiate it from other variants) there is the potential that ID only holds true in regards to the OOL. The front-loaded active information is the design of the system (modular components, plasticity in the language conventions, etc.), which allows the “evolving holistic synthesis” to function without there being a directly embedded plan.

    Thoughts? I’ve actually been mulling over this concept for a while but never got around to posting it. Now here is the real question: would the majority of Darwinists find such a hypothetical scenario acceptable? As in, is it even possible to have a middle ground where both ID and Darwinism* hold true? Can’t we just all get along? Even though I’m suggesting this idea I’m not convinced of it myself. I just think it a good starting point where both sides could potentially stop the arguing, the hating, and the career-busting and work toward finding the truth.

    *of a kind…obviously I’m not referring to concepts that ID proponents already accept.

  110. Hi Allen,
    Thanks.
    Yes, that does make me feel better.

    And actually, I have read the Huxley letter – you’ve quoted from it and linked to it several times previously.

    Charlie

  111. Patrick asks:

    “…would the majority of Darwinists find such a hypothetical scenario acceptable?”

    I can’t speak for my colleagues (nor would they let me ;-), but I do not reject such a possibility out of hand. What it will take for me to accept it, however, is a rigorous and long-term program of intensive empirical testing of hypotheses that clearly differentiate between alternative hypotheses. Until this has been accomplished, all we are doing is speculating without evidence.

  112. Allen

    This “external” information is absolutely essential for regulating development.

    Ever hatched a chicken egg under a light bulb? I did. Try it and get back to me about how much information the environment provided. You can roll the egg over from time to time like a mother hen does if you want, or not. I fail to see how even gravity is providing any information to the egg when it’s rolled over at odd intervals. The chick that hatches is pretty much just a tiny adult chicken. I could simply use a budding yeast cell in isolation instead of a chicken egg to get around your assertion that the egg gets information while it’s still inside the chicken. Clearly all the information (essentially) to build an organism is contained within a single cell.

    Re: John Sanford

    You should probably read Sanford’s book (I did) before criticizing it. What you took from the lecture is not an entirely accurate reflection of what he wrote in the book. For one, he discusses the whole genome, not just coding genes. Second, your objection that he doesn’t include epigenetic information (which is true, even in the book, to the best of my recall) is irrelevant, because the genome, if not the sole repository of heritable information, is a repository of very critical information which quite easily renders the organism unviable when there are mistakes in it. Epigenetic information or not, the genome is critical, and failure to maintain it well results in disability and death.

    The only criticism I had of Sanford’s book is that he cites literature supporting a background mutation rate orders of magnitude higher than the commonly given rate, and by that means makes a case for genetic entropy destroying a genome in thousands of years (I presume to be in accord with Young Earth Creationism), while the testimony of the fossil record is that species persist (relatively unchanged) for an average of 10 million years before they mysteriously disappear as abruptly as they entered the record.

    I wrote an article here last week asking not why almost all species go extinct without spawning any new species (genetic entropy explains that quite well) but rather how a lucky few cell lines managed to avoid the catastrophic effect of genetic entropy for hundreds of millions or billions of years. I speculated that there’s a disaster-recovery mechanism at work that restores a genome to a previously known working state. This is how we address the problem of “software entropy” in computer systems, and my experience has been that anything human engineers invent to solve problems likely has an analog in the machinery of life for the same class of problem.

    There are no animals of which I am aware that have unpaired appendages

    Starfish (5 legs) and clams (1 leg) come immediately to mind as animals without paired legs but if you restrict the discussion to articulated legs then you have me stumped for an answer.

  113. In case any of you are interested in the (literally) gory details, I have now finally posted an explanation of why I have been out of commission for so long. It’s at my blog:
    http://evolutionlist.blogspot......-pain.html

  114. DaveScot:

    Yes, I did read John’s book (I was given a free copy when he came to our class), and was impressed with his arguments, but once again not convinced. Once again, his argument is essentially that without some “magical” input, all lineages of organisms should eventually degenerate and disappear because their genomes “decay”.

    However, that would only be the case if their genomes

    1) had no built-in error correction mechanisms (which all do, and which use “sex”, that is, the exchange of genetic material, as part of the mechanism); and

    2) depended completely on the genome for assembling and operating the organism.

    As you yourself have repeatedly pointed out, the latter is clearly not the case. I agree.

    As to your chicken’s egg example, you have amply supported precisely the point I was trying to make. If you don’t provide heat and turn the egg every now and then, you don’t get a chicken. Therefore, at least two external sources of information are absolutely essential for the production of newly-hatched chicks from eggs. We don’t yet know how many such sources of information there are, nor what effects they have on developmental programs, nor how such programs work, nor how they change over deep evolutionary time.

    That’s what empirical investigations are for, right?

  115. And touché: I had forgotten about the pentaradial Echinoderms and the Bivalvia. Exceptions that test the rule, eh?

  116. What it will take for me to accept it, however, is a rigorous and long-term program of intensive empirical testing of hypotheses that clearly differentiate between alternative hypotheses. Until this has been accomplished, all we are doing is speculating without evidence.

    I agree 100%. Unfortunately, it then becomes an issue of funding and resources, which is why I suggest a middle ground as a starting point, so that ID proponents won’t find their careers endangered or funding taken away. ID research is generally almost a “hobby” in comparison to the needed income the day job brings.* There is some work slowly being done, but it’s not enough in comparison to what could be done. But even if a middle ground does not work out, the good news is that the equipment is getting cheaper every year, so salaries will be the largest cost, and funding such a research program should, I’d hope, become attainable.

    *No insult intended. Honestly, is there anyone where ID research IS their primary source of income?

  117. And a partial qualification: Echinoderms actually have hundreds or even thousands of “legs” – the little “tube feet” that sprout from their ventral surfaces. I suspect that the “arms” of sea stars are so structurally and functionally different from the appendages of both arthropods and vertebrates that they follow completely different developmental rules.

    So perhaps the “paired appendages” rule is intrinsic to jointed legs?

  118. Allan_MacNeill:

    Indeed, I am not conversant with systems level evolution at all. My specialties are …

    I appreciate your honesty. As with many IDers, I am solidly in the common descent camp. Not that I totally reject any challenge to common descent, but I have seen compelling evidence for common descent, including man from “apes”, and I have seen no compelling case against it. Basic evolutionary theory seems to hold significant components of truth, including variation and natural selection. I am not prepared to go as far as Dembski when he suggests that new useful information cannot be generated via these natural mechanisms. It seems compelling to me that if a gene is duplicated, and each copy becomes the followed copy for a separate task, a divergence from the original in one of those copies is both expected and information-increasing. I.e., I don’t by any means throw out all of neo-Darwinian biology.

    However, I am compelled by Behe’s case for Irreducible Complexity. I think, however, that he has bitten off too much when he looks at genes as single non-morphing units (he recognizes that they do morph, but doesn’t account for morphing in his discussion). I prefer to examine irreducible complexity at the point-mutation level, such as with HAR1F. We have 150 million years * some 20,000 species of evidence that the thing is non-mutating (it is ultra-conserved). Yet in man it has taken on 18 mutations, 18 that cause a slip-cog in its 3D shape. I think it a much simpler, therefore less refutable, case of IC.

    I think that the clearest published thinker on the issue of systems evolution is Behe. I think the best text on the subject is Darwin’s Black Box.

    I appreciate your discussion of epigenetics. I think that all of us software types find the idea that man is described by 25,000 genes, or 2 * 25,000 (genes) * 300 (nucleotides/gene) bits of information (two bits per nucleotide), to be a bit inconceivable. That said, I think that epigenetic replication is similar to DNA replication in many ways. The epigenetic portion of the cell clearly gets copied. There are surely copying errors (mutations). These copying errors would usually be destructive to an organism. As such, by adding the epigenetic data to the mix, we increase the count of copying anomalies experienced per cell duplication.

    My computer simulations suggest that a maximum of one mutation in active information can be tolerated per generation for natural selection to handle it. Whenever the copy error rate goes beyond 1 deleterious copying error per generation (copying errors, as far as I can see, are either in inactive regions or nearly always destructive), even idealized, computer-generated selection could not maintain the code. As we know, however, humans have far more than one destructive mutation per reproductive cycle. And that’s just in the DNA, let alone the epigenetic data.

    The net result is that there seems to be a need for another data preservative beyond natural selection.

    Additional evidence for the need of an additional data preservative includes the discovery of ultra-conserved DNA sequences that do not have any obvious function in the organism.

    We recently discussed evidence of a data preservative in plants (barley, wasn’t it?)

    However, we also get to the interesting topic that you brought up with DaveScot, re the six-leg thing. I started an interesting thread in ISCID’s Brainstorms about polydactylism (six-finger syndrome). When quadrupeds first appear in the rock record, they had 8 or 6 digits per limb. At some point early on, they settled on 5. Five has been the maximum for any prototypical (not a mutant within its species) quadruped since then, as far as I can tell. Nature has experimented with 4-, 3-, 2- and 1-digit creatures. Nature has extended the wrist bone of the panda to produce a simulated 6th digit. Yet despite all of the variety of quadrupeds, none have concluded that 6 digits would be an easy way to make a large foot (rabbits, polar bears, desert animals) or a large flipper (aquatic quadrupeds). You would think, based upon this data, that the code for 5 digits would be deeply ingrained in the Hox genes or something. But no, polydactylism has shown up as an anomaly in humans, cats, dogs, and mice. It is, in all cases, caused by a single point mutation. The six-digit creatures appear to have no deleterious side effects. Some human cultures have valued six-digitism, even killing their five-digit young. Yet pentadactylism has proven to be universal. Why? I think the only explanation is that there is another preservative.

  119. Above: I think that all of us software types find the idea that man is described by 25,000 genes, or 4 * 25,000(genes) * 300(nucleotides/gene) bits of information, to be a bit inconceivable.
    I do understand that epigenetics is that which is not in the DNA, and that the above only calculates the “coding” DNA. However, even the entire human DNA, if none of it is junk, is some darn impressively tight code if it produces a human with all of his inherited characteristics.
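For what it's worth, the arithmetic can be made concrete. Four possible nucleotide values is log2(4) = 2 bits per nucleotide (not 4), so the coding estimate works out to under 2 megabytes, and even a whole haploid genome fits in well under a gigabyte. The 25,000 x 300 figures come from the comment above; the ~3.2 billion base-pair genome size is a commonly cited figure, assumed here.

```python
# Information capacity of DNA at 2 bits per nucleotide (4 symbols).
BITS_PER_NT = 2

coding_nt = 25_000 * 300            # genes x nucleotides/gene, per the comment
coding_bits = coding_nt * BITS_PER_NT
coding_mb = coding_bits / 8 / 1_000_000

genome_nt = 3_200_000_000           # ~3.2 Gbp haploid human genome (assumed)
genome_mb = genome_nt * BITS_PER_NT / 8 / 1_000_000

print(f"coding DNA:  {coding_mb:.2f} MB")   # about 1.88 MB
print(f"full genome: {genome_mb:.0f} MB")   # about 800 MB
```

Less than two megabytes of "coding" information for a human body is exactly the kind of number that strikes software people as implausibly tight.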

  120. DaveScot
    “The only criticism I had of Sanford’s book is he cites literature supporting a background mutation rate orders of magnitude higher than the commonly given rate and by that means makes a case for genetic entropy destroying a genome in thousands of years. . .”

    Since mutations, hybrids etc appear to be Sanford’s expertise and commercial involvement, I would think he would be well abreast of those parameters.

    Can you suggest any reviews or papers on the mutation rates that show values significantly differ from the values Sanford cites?

  121. DaveScot:

    “The only criticism I had of Sanford’s book is he cites literature supporting a background mutation rate orders of magnitude higher than the commonly given rate.

    Actually, I’m not sure it’s the mutation rate that is the trouble. It seems rather that he counts mutations in DNA that is not (yet) known to do anything. That said, even if you eliminate the mutations in apparently inactive DNA, he still has a fundamental case. As far as I can see, even optimal natural selection cannot work in an environment with more than one mutation per generation. In higher organisms, the generation that must be considered is when a male and female get together to reproduce, not the generation of a single cell, because it is the organism from birth to reproduction that is tried in the fire of natural selection.

    Alas, whether we are degrading fast or slow, if we are degrading, this would be a problem. It is still frustrating when cases are exaggerated. The exaggeration gets shot down, and the baby goes out with the bathwater.

  122. Allen_MacNeill
    Welcome back. I appreciate your willingness to take up these issues.

    On Sanford, you say:

    “all lineages of organisms should eventually degenerate and disappear because their genomes “decay”.

    However, that would only be the case if their genomes:

    1) had no built-in error correction mechanisms (which all do, and which use “sex” – that is, exchange of genetic material — as part of the mechanism); and

    2) depended completely on the genome for assembling and operating the organism.”

    (Without access to Sanford’s Genetic Entropy) I don’t think 1) applies. I believe the population models that Sanford reviews result in an accumulating load of near-neutral mutations even with full sexual reproduction and error-correcting mechanisms functioning.

    One of the most critical parameters is the ratio of harmful to beneficial mutations. Sanford cites literature estimating this ratio at at least 1,000 to 1, and possibly one million to one.

    I believe this ratio overwhelms all neo-Darwinian hopes of “development”.

    Please clarify how your 2) follows. I do not see how that would invalidate Sanford’s arguments, as they would apply in both cases.

    “Once again, genomes do not construct organisms; organisms construct organisms, in concert with their environment.”

    I agree with you here. See Jonathan Wells comments on embryology where he gives much more detail on the importance of the embryo vs the genome.

    . . .extract fragments of dinosaur DNA from fossilized mosquitoes, splice them together with DNA from living frogs, then inject the combination into ostrich eggs which had had their own DNA inactivated. . . . In every case, if any development occurred at all it followed the pattern of the egg, not the injected foreign DNA. . . .DNA does not program the development of the embryo.

    Your comments point back to the Origin of Life.

    Omne vivum ex vivo

    The foundational observation of biology, Omne vivum ex vivo (Latin: only life from life), is first attributed to William Harvey, c. 1630 (1). (Also referred to as Omne vivum ex ovo – Latin: only life from an egg.) Louis Pasteur disproved the prevailing theory of spontaneous generation, showing conclusively that biological life is never observed arising spontaneously from sterilized media.

    i.e., both the genome AND the surrounding cell are essential to self-reproducing life.

    This points to the recursive nature of life. From mathematics and controls, this consequently means that the starting conditions are also essential to life.

    {DLH corrected from (10,000 -1 million) to (1,000 to 1 million) per Sanford (2005) p 24.}

  123. Hello Allen,

    I wrote:
    “…life is a network of artificial intelligence which artificially discovers the solution to problems.”

    You wrote:
    “How would one empirically distinguish between that and the following:

    “…life is a network of natural intelligence which naturally discovers the solution to problems.”

    or, for that matter:

    “…life is a network of intelligence which discovers the solution to problems.”

    Let me be clear about this: I am not necessarily advocating any of these. Rather, I’m asking how one would empirically distinguish between them (i.e. not speculate about their metaphysics)?”

    My response is that there is no difference between artificial and natural when it comes to intelligence: artificial intelligence is still perfectly natural, as I believe capital-“I” Intelligence is also. So, that’s not my point. The difference is between the foresight used by capital-“I” Intelligence and the lack of foresight of artificial intelligence.

    However, that’s not even the point I was making.

    I was asking what causes artificial, or even capital-“I”, Intelligence without violating conservation of information and thermodynamics.

    And yes, AI and EAs operate on the same basic principles — inputted information of problem or target characteristics so that there can be better than chance performance. IOW, they are not realizable without previous problem specific information.

    I will repost the main point of my comment:

    “So, how does artificial intelligence operate? It only learns what it is programmed to learn. Outputted information can be no greater than inputted information. Thus, information is conserved. Can your AI robot servant do anything other than what he is programmed to do as he searches through a solution space that you’ve provided for him with the problem specific programming that you’ve inputted into him? [it is true that there may be some minor surprising random effects, however that does not describe the operation of the process as a whole or the ability to get any better than chance performance in the first place].

    A “learner… that achieves at least mildly better-than-chance performance, on average, … is like a perpetual motion machine – conservation of generalization performance precludes it.”

    –Cullen Schaffer on the Law of Conservation of Generalization Performance. Cullen Schaffer, “A conservation law for generalization performance,” in Proc. Eleventh International Conference on Machine Learning, H. Willian and W. Cohen. San Francisco: Morgan Kaufmann, 1994, pp.295-265.

    “The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.”

    –Leon Brillouin, Science and Information Theory (Academic Press, New York, 1956).

    Therefore, the actual engine of variation is the programming that goes into setting up an information rich system of artificial intelligence.

    The pseudo-random natural selection filter merely searches through a non-random search space, guided by previously inputted problem specific information to pre-set targets. Of course, from those targets (solutions) there can be some minor effects and cyclical variations in accordance with a truly random search of the space immediately surrounding a pre-programmed potential solution (target).

    Whatever you want to call the latest version of evolution, if it is the hypothesis of the creation of information at consistently better-than-chance performance via the environment selecting from a random palette [of form, function, and law], then it is the largest scientific hoax in history, with not a shred of evidence in its favor and all observation, experimentation with EAs, and information theorems against it.

    In information terms, the above type of evolution is literally an attempt to sell a perpetual motion free energy machine.

  124. Turner Coates @ 66,

    I noticed that in your response you failed to address my question concerning Prigogine’s role in all of this.

    You wrote:

    ”Mazur is simply listing common examples of self-organization, with no apparent understanding that they are inappropriate in this context. No biologist would suggest a close analogy of anything going on in epigenesis to snowflake formation. ”

    I think it is a grave mistake to boldly and arrogantly state that biology alone is capable of explaining the phenomenon of life, considering that OOL studies, as a system of inquiry attempting to explain the emergence of life from inanimate matter, very much require the assistance of the physical sciences. And it is at the juncture where biology and chemistry intersect that noted figures like Prigogine come into play, as when he tried to use the self-organizing properties intrinsic to the material constituents of living systems to explain how life began.

    Now it is quite another matter if you were to maintain the belief that OOL has nothing to do with biology, or is this your position? Unfortunately, you did not clarify in your opening statement or follow-up response. My original intimations are still valid. Is OOL relevant to the physical sciences or not? And if so, are the analogies used by physical scientists involved in OOL research (as illustrated by Mazur’s examples) appropriate to these discussions? Like I said, you didn’t address Prigogine or his use of the vortex example as a way of explaining the OOL via self-organizing or self-ordering matter; you instead offered to stick with developmental biology, which then, of course, will have no relevance to the examples Mazur cited.

    You also wrote:

    ”In fact, Mazur quotes Kauffman emphasizing that snowflakes are not alive.”

    Ah, but what seems to have escaped your notice, Mr. Coates, is that this conversation between Kauffman and Mazur confirms what I’ve been trying to intimate to you all along. Indeed, at the heart of these discussions is the origin of biological information, the real crucible of any theory dealing with OOL, and in turn, development. In fact, the OOL quandary seemed to have been a major reason behind the formation of the Altenberg meeting, and a focal point for reformulating evolutionary theory. Picking up where you had left off with the “snowflakes are not alive” quote, Mazur informs us that

    [Kauffman] reminded me in our phone conversation that Darwin doesn’t explain how life begins, “Darwin starts with life. He doesn’t get you to life.”

    Thus the scramble at Altenberg for a new theory of evolution.

    If self-organization could be defined as “a process where the organization (constraint, redundancy) of a system spontaneously increases, i.e. without this increase being controlled by the environment or an encompassing or otherwise external system,” then Mazur did not veer way off by including some examples of self-organization or self-ordering that have already been used by noted theorists in their futile attempts at explaining the OOL.

  125. bFast:

    You say:

    “I think that all of us software types find the idea that man is described by 25,000 genes, or 4 * 25,000(genes) * 300(nucleotides/gene) bits of information, to be a bit inconceivable.
    I do understand that epigenetics is that which is not in the DNA, and that the above only calculates the “coding” DNA. However, even the entire human DNA, if none of it is junk, is some darn impressively tight code if it produces a human with all of his inherited characteristics.”

    I agree with you. The information content of the 25,000 or so protein coding genes is really trivial, if considered self-sufficient. Those genes are only the final effectors, a database of useful protein sequences with functional potentialities.

    Even if we consider the one gene -> many proteins paradigm, which is favoured by darwinists at present, things do not improve. It is certainly true that the proteome is much bigger than the genome (nobody really knows how much), but the problem remains that all those proteins have to be coded starting with only 25,000 genes, and that means that we need a lot of procedural information so that the right choices may be made at the right times.

    But that’s not enough. What I have tried to discuss here, without great success, is the fundamental problem of transcriptome selection and control. I have already outlined (post #5), starting from Kauffman’s observations, how big the search space for transcriptomes is, and how important transcriptomes are. They are the true key to all regulation in the cell. They are the true mystery, because no known code tells specific cells which transcriptome to operate at any given moment.
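The size of that search space is easy to make vivid. If each of the ~25,000 protein-coding genes were treated as simply on or off (a drastic simplification of real expression levels, and my own illustration rather than Kauffman's calculation), the number of possible transcriptomes is already astronomically beyond anything searchable:

```python
import math

GENES = 25_000  # approximate human protein-coding gene count

# Number of distinct on/off expression states, expressed as a power of ten.
digits = GENES * math.log10(2)
print(f"2^{GENES} is about 10^{digits:.0f} possible on/off transcriptomes")
# For scale: roughly 10^80 atoms in the observable universe.
```

Real transcriptomes have graded expression levels, not binary switches, so the true state space is vastly larger still.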

    Please bear in mind that if a white blood cell is completely different from, say, a hepatocyte, the main reason is that they are implementing completely different transcriptomes from the identical genome they share. Each transcriptome means a different state of regulation of nuclear activity and of protein synthesis, different structural components, different metabolic pathways, different activation or repression of functions, and so on. Moreover, each specific cell has a dynamic transcriptome, which changes from moment to moment to address different states and challenges. A white blood cell (WBC), for instance, can be quiescent, reproduce, differentiate, dedifferentiate, be active in defense procedures, or enter apoptosis. Each of those states requires very different transcriptomes and proteomes in the cell, which adds a new layer of selection and variety to the basic transcriptome choices that make a WBC a WBC.

    Now, if we were software programmers, after having written the basic code of our 25,000 functional structures (the protein-coding genes), we would still face the major work: writing the real working code, the procedures to implement all our billions of different activities, the logical operations, the measurements; in other words, the “noblest” part of the software. And that would be huge: a lot of bits, even in a very compact form. And, obviously, it would not be written in the same form as the 25,000 functional sequences, because while those are “effectors”, the other part would be “procedures”.

    That’s the problem, I think, with the genome. We don’t know where the procedures are coded. And, even more important, we don’t know “how” they are coded.

    In my opinion, our best guess at present is of two kinds:

    1) The procedures are in the 98% non-coding DNA. That is probably at least partly true. There is already much evidence of that: introns, promoters, non-protein-coding nuclear mRNAs, and effectors of alternative splicing have all been demonstrated. But it’s only a drop in the ocean. And, at the present moment, non-coding DNA is still mostly an enigma. The reason it has been considered junk for so many years is not to be found “only” in the fact that darwinists are not so bright (you see, I can defend darwinists at times! :-) ), but first of all in the fact that its “appearance” is really enigmatic, and defies any easy interpretation. Obviously, I am referring here especially to those parts, a really big percentage of the whole, which really look meaningless, repetitive, or just destructive: to pseudogenes, to transposons, to ERVs, even to introns. No surprise that darwinists, for such a long time, have easily “coopted” those data in defense of their weak theory.

    Take introns, for example. If you were a programmer, for which reason would you take a specific, single sequence of information (a protein coding gene) and cut it in, say, 20 different parts, interspersing among them long segments of apparently random code, which has to be carefully removed after transcription? And how would you utilize such a tool to implement “procedures”? The answer, I suppose, is not easy.

    2) The procedures, the real code, or at least a big part of them, could be somewhere else. A few possibilities are:

    a) In cytoplasmic structures (epigenetic information)

    b) In the nucleus, separate from DNA

    c) In any other molecule, including the same DNA, but encoded in a way that we can’t at present imagine or understand, for example at a level which is not grossly biochemical, like the traditional genetic code, but rather biophysical (structures, conformations), even, or maybe especially, at the quantum level.

    All these hypotheses are interesting and promising, but we have to admit that at present we have almost no real tangible clue to support them.

  126. gpuccio:

    I congratulate you on a masterful condensation of the fundamental problems facing developmental biologists during this new century. Let me suggest yet another: that we should seriously consider that, in addition to the genome, transcriptome, and proteome, we need something that could be called the “phenome”. That is, the sum total of all of the structural and functional components by means of which organisms construct and operate themselves.

    As I have suggested above, genomes, transcriptomes, and even proteomes do not constitute organisms, nor can they “do” anything without constant feedback from the environment. Therefore, the whole organism, considered as a “focus of activity” in its environment, must be factored into a comprehensive theory of the origin and evolution of life on Earth.

  127. Along these same lines, it is instructive to note that Craig Venter’s much touted “creation” of a complete genome from off-the-shelf reagents is about as far from creating life as writing a constitution is from creating and governing a nation. As Venter himself admits, the artificial genome that his team has created doesn’t do anything, not even when inserted into a specially prepared host cell. That is, there is something about a genome that a cell constructs for itself (and that in turn guides the construction of the cell) that makes the whole system work.

    This is a little like John Wheeler’s observation about the theory of general relativity: the equations aren’t the things they describe. The equations sit there on the paper, but the things they describe make the universe.

    Kind of like the difference between genomes and organisms…

  128. Allen_MacNeill:

    Thank you for your comments, which are very much appreciated.

    I completely agree with you that the sum total of genome, transcriptome and proteome is still unable to really describe life. I had limited the analysis at those levels just to keep it simpler and more realistic, because those are the levels we know more about.

    But you are perfectly right: even given the right proteins at the right moment in the right quantities, that would not ensure the existence of a functional cell. There are a lot of other factors which need to be managed and controlled, one of which is certainly the spatial and reciprocal distribution and configuration of all the components, which is as essential to function in the individual cell as it is in the body plan of a multicellular organism.

    I remember a scientific paper I read many years ago (I think in Nature), which described a complex network of ion flows in the cytoplasm of cells, apparently not supported by definite “anatomic” structures, which very much resembled a basic nervous system. That is only one example of how much internal structure, whose origin is at present poorly understood, can be the carrier of infinite levels of complexity. Another stimulating example could be the early localization of homeotic factors in the zygote of drosophila.

    I completely agree with your comments about Venter’s reductionist approach (which, anyway, is welcome insofar as it can give us new facts; I have always thought that we can well appreciate the facts supplied by researchers without necessarily sharing their ideas about them).

    One of the fundamental weaknesses of all reductionist OOL scenarios is that they seem to assume that, somehow, given the necessary gross components (which, as we know, is already a very big problem), those components very easily joined together to give life; while we know that even now, in intelligent and sophisticated laboratories, nobody has even tried, let alone succeeded, in mechanically building up a simple bacterium from its inert components (which are, by the way, easy enough to obtain from existing living organisms).

  129. DLH

    One base pair substitution per billion base pairs is the commonly given number for DNA replication errors in humans. It’s such a basic number in molecular biology it goes uncited in the textbooks.

    Here’s one of many examples:

    http://www.sparknotes.com/biol.....ion3.rhtml

  130. The term “epigenesis” has been used here and it is not clear what it means. Here is a comment about it by Henry Gee.

    “Harvey speculated that the egg or primordium is truly formless and that the embryo develops gradually from homogeneous matter by a process called epigenesis. However, this says no more than that form arises out of nothing by some unspecified mechanism. As a name, epigenesis is a wild-west storefront with nothing behind it. At best it is an observation of what happens – that is, form emerging from nothing – not an explanation of why it does so.”

    Harvey coined the term in the mid-1600s. But Gee is extending its vagueness to today. It is like the term “emergent”, which is also used to say something happened or appeared without specifying why or how it happened.

  131. Allen

    . . . we need something that could be called the “phenome”. That is, the sum total of all of the structural and functional components by means of which organisms construct and operate themselves.

    I highly agree.

    From a design point of view, every factory needs jigs, assembly equipment, conveyor belts, and power systems to operate. Design information resides both in the “blueprints” and, equivalently, embedded in the design of the assembly system, not just in the structure of the finished component.

    May I further propose breaking that “phenome” down into:
    an energy processing system and
    a material processing system
    in addition to the “information processing system” which you effectively described above.

    As I understand it, life cannot operate without an energy processing system. This is often overlooked or assumed to be operating. Yet each process needs external energy converted to controlled biotic energy (e.g., ATP synthase and ATP, etc.)

    Similarly cells and nuclei would not function without material processing system to form membranes and equally to transfer material through the membranes. etc.

  132. DaveScot at 129
    Thanks for that link. Interesting how that link describes multiple repair mechanisms to achieve that low an error rate.

    As I recall, Sanford listed many different kinds of error rates with citations.

  133. DLH

    Sanford cites 3 studies suggesting higher rates: Kondrashov 2002 (30 per billion), Nachman and Crowell 2000 (50 per billion), and Neel et al. 1986 (10 per billion). He then goes on to say that in personal communication Kondrashov admitted that 30 per billion was his lower estimate and that his higher estimate was 100 per billion. Sanford then builds his hypothesis around the number 100 to 300 substitutions per human per generation. This is so extraordinarily much higher than the number commonly given in molecular biology texts that it raised a big red flag in my mind the moment I read it. It appears on page 34, very early in the book. At that point I immediately knew Sanford was going to insinuate that the human genome couldn’t be older than 6000 years, and I became quite disappointed. However, if you use 3 mistakes per human per generation and apply that in Sanford’s hypothesis, we get lifespans for species that agree very nicely with the fossil record.

  134. DLH (con’t)

    The next question that came to me regarding genetic entropy was why didn’t P.falciparum go extinct in the last 50 years from genetic entropy. So I did a little math using the standard number for eukaryotic mutation rate and found that genome size makes a huge difference in genetic entropy. Humans end up with an average of 3 mistakes in every replication. However, P.falciparum‘s genome is so much smaller that 19 out of 20 replications are PERFECT copies. That number of perfect copies allows natural selection to select one mutation at a time, which is impossible with humans. P.falciparum undergoes purifying selection that is impossible for humans. That’s why it didn’t go extinct due to genetic entropy even though it replicated more times in the last 50 years than all the replications mammals have undergone from the time they were still reptiles. Big genomes aren’t purified by selection nearly as well as small ones. That said, if Sanford’s sources are right about orders-of-magnitude higher mutation rates than usually given, then P.falciparum should have gone extinct from genetic entropy in the last 50 years, but it didn’t. Sanford’s hypothesis works out really well against real-world observations if you use the commonly given mutation rate. If you use the rates Sanford proposes, his hypothesis doesn’t agree with anything except the 6000 years of creation in the Bible.
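These per-replication numbers can be checked with a simple Poisson model. The sketch below assumes the 1-per-billion per-base-pair error rate cited earlier in the thread, a ~3.2 Gbp haploid human genome, and a ~23 Mbp P. falciparum genome (the last two sizes are my assumptions, not figures from the comment); P(perfect copy) = e^(-lambda), where lambda is the expected number of errors per replication:

```python
import math

RATE = 1e-9  # copying errors per base pair per replication (per comment 129)

def expected_errors(genome_bp):
    """Expected copying errors in one replication of a genome."""
    return genome_bp * RATE

def p_perfect(genome_bp):
    """Poisson probability of zero copying errors in one replication."""
    return math.exp(-expected_errors(genome_bp))

human = 3_200_000_000        # ~3.2 Gbp haploid human genome (assumed)
falciparum = 23_000_000      # ~23 Mbp P. falciparum genome (assumed)

print(expected_errors(human))      # about 3.2 errors per replication
print(p_perfect(human))            # only a few percent of copies perfect
print(p_perfect(falciparum))       # the vast majority of copies perfect
```

On these assumed figures, P. falciparum copies come out perfect even more often than the "19 out of 20" quoted above, which only strengthens the purifying-selection point: the parasite can test mutations one at a time, while humans cannot.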

    This raised one further question for me. Why have a few large genomes managed to survive the ravages of genetic entropy over hundreds of millions of years? If Sanford’s hypothesis is correct (and I believe it is with the caveat of DNA replication error rate of 1 per billion nucleotides) then reptiles and all their descendants should have gone extinct long ago. I then speculated about a recovery mechanism similar to what human engineers use in computers to ward off the effects of entropy in software programs and data and since “evolution is cleverer than we are” it shouldn’t be unreasonable to presume that nature utilizes the same techniques to thwart genetic entropy that we use to thwart software entropy.

  135. DaveScot, “Humans end up with an average of 3 mistakes in every replication.”

    In this context, what exactly is a replication — each time a human cell replicates, or each time a human replicates?

    Recently there was an article on PhysOrg.com I believe that discussed the genetic differences between identical twins. It would appear that identical twins are a lot more than six mutations different from each other — a lot more! Using this measured data, Sanford may not be terribly wrong.

  136. Jerry (130), you asked for a working definition of epigenesis in this context. I would cite dictionary.com’s biology definition 2:

    b. the approximately stepwise process by which genetic information, as modified by environmental influences, is translated into the substance and behavior of an organism.

    I think in this context we are specifically interested in the “environmental influences”, most specifically the environment within the cell. My understanding is that the cell contains a variety of structures which are involved in the process of cell replication. At some point this structural material is also replicated. In all likelihood, making the copy uses the original structure as the guide for the new structure. If so, then the original structure is itself replicated; it is part of the “data” that makes up the organism. If a structural component has particular essential properties, such as shape, these properties must be replicated exactly, or the new copy will perform worse (usually) or better (once in a blue moon). It becomes another layer of data that defines the cell, and another opportunity for duplication error.

  137. So let me throw a wrench into the system. I have been looking for a way to illustrate what I call the signal-to-noise ratio problem.

    Consider five organisms that each contain 200 genes for which there are two alleles. Let us order the alleles, calling the slightly less fit for the current environment (possibly more fit for a slightly different environment) allele 1, and the more fit, allele 2. We can now determine the relative fitness of the organism by adding up the allele numbers.

    Now we throw in a new, slightly beneficial mutation, and give it allele #3. One of the organisms below has an allele 3. Do you really think that if we added reproduction, allele mixing (which doesn’t even happen in non-sexual organisms), and natural selection, natural selection would be sensitive enough to cause allele 3 to spread throughout the population? I kinda doubt it. Now what if every cycle we also throw in two or three allele 0s at random? Would that not make natural selection’s challenge even greater? What if we assumed an average of 2 alleles for each human gene? With this assumption, the one beneficial allele is lost in a jungle of 25,000 alleles (if you limit your count to coding genes).

    Organism 1:
    1111111222 2212112211 1221212112 1112222121 1212221112 1112212112 1112122121 1122212212 1221122222 1121121121 2221122222 2221112122 1121121112 2221222212 1222211212 1221212121 1221111221 1122121111 1122212112 1212112222 SUM = 301

    Organism 2:
    2112121121 1111111222 2211121122 2211112111 2222221122 2212122222 2221111221 2212212122 2211211122 2121112222 1221222222 2112112112 1122222212 1121112121 1122221222 1112121111 2111222111 2221211121 2221121122 2222212111 SUM = 306

    Organism 3:
    2112111222 1111222221 2212222111 1211222211 1221121112 2211221121 1221212111 2111111112 1221221122 1212122211 1112111222 2111211111 2111221112 1122212212 1122211222 2211221112 2221211121 1211112221 1112122122 1222222212 SUM = 296

    Organism 4:
    1111211111 1122111121 2211222122 1212222121 2221111212 1222111221 2111121221 1122212112 1212111222 1121211122 2211121212 2312111211 1111212121 1221121222 1112211212 2222221112 1211122122 1211111112 2211212121 2212222121 SUM = 295

    Organism 5:
    1112221211 1111222221 1112112112 1211112222 2121211112 1112111121 2121112212 2222211121 2111221111 1111211222 1212122121 2221222221 1111222112 1222211211 1211212221 2222111212 1111211111 1222221122 2221221111 1121111212 SUM = 290

    Where is Waldo, anyway?
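The thought experiment above is easy to run. The sketch below is my own toy version, not a standard population-genetics model: 200 loci with allele values 1 and 2, one copy of a beneficial allele 3 seeded into a small population, fitness proportional to the allele sum, and a few deleterious allele-0 hits thrown in each cycle. All parameters are arbitrary assumptions.

```python
import random

LOCI = 200
POP = 50

def allele3_frequency(generations, seed=0):
    """Fraction of the population carrying allele 3 after selection."""
    rng = random.Random(seed)
    # Random 1/2 genomes; exactly one organism gets a single allele 3.
    pop = [[rng.choice((1, 2)) for _ in range(LOCI)] for _ in range(POP)]
    pop[0][0] = 3
    for _ in range(generations):
        weights = [sum(g) for g in pop]  # fitness = allele sum
        pop = [list(rng.choices(pop, weights=weights, k=1)[0])
               for _ in range(POP)]
        # Noise: a few random deleterious allele-0 hits per cycle.
        for _ in range(3):
            pop[rng.randrange(POP)][rng.randrange(LOCI)] = 0
    return sum(1 for g in pop if 3 in g) / POP

print(allele3_frequency(generations=200))
```

In runs like this, whether allele 3 spreads or is lost depends heavily on drift: its one-point fitness edge is tiny against a background sum of roughly 300 points per organism, which is exactly the signal-to-noise problem being described.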

  138. DaveScot
    Thanks for the clues. Intriguing point on purifying selection and the possibility of “rebooting”.

    Somewhere I saw an inverse log-log plot between mutation rate and genome size. That has a very major impact on mutations, evolution and Haldane’s Dilemma in comparing microbes to macrobes.

    Some followup data from the Kondrashov reference:

    “Direct estimates of human per nucleotide mutation rates at 20 loci causing mendelian diseases”
    Alexey S. Kondrashov, Human Mutation, Vol. 21, Issue 1 , Pages 12 – 27

    The average direct estimate of the combined rate of all mutations is 1.8×10^-8 per nucleotide per generation, and the coefficient of variation of this rate across the 20 loci is 0.53. Single nucleotide substitutions are 25 times more common than all other mutations, deletions are three times more common than insertions, complex mutations are very rare, and CpG context increases substitution rates by an order of magnitude.

    Context of deletions and insertions in human coding sequences Alexey S. Kondrashov *, Igor B. Rogozin Hum Mutat 23:177-185, 2004.

    Two-thirds of deletions remove a repeat, and over 80% of insertions create a repeat, i.e., they are duplications.

    Most Rare Missense Alleles Are Deleterious in Humans: Implications for Complex Disease and Association Studies. G.V. Kryukov, L.A. Pennacchio, S.R. Sunyaev. Am J Hum Genet 80(4):727-739, April 2007. UChicago Press.

    We combined analysis of mutations causing human Mendelian diseases, of human-chimpanzee divergence, and of systematic data on human genetic variation and found that ~20% of new missense mutations in humans result in a loss of function, whereas ~27% are effectively neutral. Thus, the remaining 53% of new missense mutations have mildly deleterious effects. . . . Surprisingly, up to 70% of low-frequency missense alleles are mildly deleterious and are associated with a heterozygous fitness loss in the range 0.001–0.003. . . . Several recent studies have reported a significant excess of rare missense variants in candidate genes or pathways in individuals with extreme values of quantitative phenotypes. These studies would be unlikely to yield results if most rare variants were neutral or if rare variants were not a significant contributor to the genetic component of phenotypic inheritance.

    I think this last reference particularly supports the contention that mutations degrade system functionality (or “design”) much faster than “beneficial mutations” with NS could provide new “function.”

  139. bfast,

    “epigenesis – the approximately stepwise process by which genetic information, as modified by environmental influences, is translated into the substance and behavior of an organism.”

    This sounds like natural selection to me. Though it is kind of vague; just as Gee said, it could mean a lot of things. For example, does it refer to somatic cells or gametes? Does it mean that the environment will directly change a cell’s DNA information, as opposed to affecting how cells can develop? And is the environment inside the cell, as you suggested, or outside? If outside, where: neighboring cells, the rest of the organism, or the external environment?

    It is sufficiently vague that I could probably come up with 3-4 more interpretations. I ordered Jablonka’s book from Amazon so maybe in a couple weeks I will have a better idea.

  140. jerry:

    The term “epigenetic”, which can certainly have other historical contexts, is used in modern biology and medicine to indicate heritable factors which are not in the genome (usually cytoplasmic factors). Here is the Wiki definition:

    “Epigenetics is a term in biology used today to refer to features such as chromatin and DNA modifications that are stable over rounds of cell division but do not involve changes in the underlying DNA sequence of the organism.[1] These epigenetic changes play a role in the process of cellular differentiation, allowing cells to stably maintain different characteristics despite containing the same genomic material. Epigenetic features are inherited when cells divide despite a lack of change in the DNA sequence itself and, although most of these features are considered dynamic over the course of development in multicellular organisms, some epigenetic features show transgenerational inheritance and are inherited from one generation to the next”.

    One example of epigenetic factors is the methylation of specific genes, which is the basis for genetic imprinting: DNA does not change, but a gene may express itself or not in a child according to a specific signal (methylation) given by one of the parents.

  141. bFast

    That’s 3 mutations each time a cell replicates but the rate varies quite a bit by species and loci. One per billion I understand as a rule of thumb for eukaryotes in general. Prokaryotes don’t have the DNA proofreading that eukaryotes do and the rule of thumb for them is one per ten million. Presumably if somatic cells of identical twins were compared there would be a cumulative deviation of 3 mutations per replication downstream from the egg cell that split so yes, you would expect many more discrepancies. Primordial germ cells, on the other hand, differentiate very early in embryonic development. Presumably that’s to limit the number of downstream replications and thus limit the number of DNA replication errors. That explains why few babies are born with cancer but acquire it later in life as mutations accumulate in somatic cell lines. Cells where the background replication error rate is accelerated due to environmental insults (carcinogenic chemicals and ionizing radiation) are more apt to become cancerous. So I guess one can say that most cancers are caused by genetic entropy.
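[Editor's note: the rules of thumb above can be checked with a line of arithmetic. The genome sizes used are editorial assumptions, not from the comment.]

```python
# Sanity check on the rule-of-thumb error rates quoted above,
# per haploid genome copied (genome sizes are assumed examples).
euk_rate = 1e-9     # errors per bp per replication, eukaryote rule of thumb
prok_rate = 1e-7    # "one per ten million", prokaryote rule of thumb
human_bp = 3.2e9    # approximate human haploid genome, bp
ecoli_bp = 4.6e6    # E. coli genome, bp (assumed example)

print(euk_rate * human_bp)    # ~3.2 errors per human genome copy
print(prok_rate * ecoli_bp)   # ~0.46 errors per E. coli replication
```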

  142. DLH:

    All good points. As to the “energy processing module”, all cells have at least one, consisting of the biochemical pathways that comprise glycolysis and fermentation. These do not need a cell to function, as they consist entirely of enzyme catalyzed reactions, and hence can be carried out in vitro.

    However, almost all cells rely on a membrane-bound system for most of their energy. In bacteria, the various proteins (cytochromes, etc.) and coenzymes (quinones, dinucleotides, etc.) are inextricably part of the plasma membrane.

    In eukaryotes, these same assemblies are embedded in the inner membranes of chloroplasts and mitochondria. The similarities between these two systems are not accidental. There is very strong evidence for the hypothesis that chloroplasts and mitochondria were once free-living bacteria that formed endosymbiotic partnerships with their archaeal host cells about a billion years ago.

    Since the energy processing modules for most cells involve molecular assemblies that are embedded in membranes, these are once again not reducible to genetic information alone. Rather, they absolutely require the presence of membranes for their function, and so until such membranes are constructed (either spontaneously in the OOL, or artificially in the laboratory), the “creation” of life that relies on such assemblies for their energy is quite literally impossible.

  143. DaveScot asked:

    “Why have a few large genomes managed to survive the ravages of genetic entropy over hundreds of millions of years?”

    A plausible (and testable) hypothesis is that the huge amount of non-coding DNA in such organisms acts as a “mutation sponge”. That is, by providing a huge target for random mutations, almost all of which have no effect on phenotype, the non-coding DNA has the effect of lowering the rate of deleterious mutations that occur in coding regions.
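[Editor's note: the "mutation sponge" intuition can be made concrete with one line of arithmetic. The figures used here, 100 new mutations per generation and a ~1.5% coding fraction, are editorial assumptions for illustration.]

```python
# If mutations land uniformly at random along the genome, the share
# hitting coding DNA equals the coding fraction of the genome.
total_new_mutations = 100     # assumed new mutations per generation
coding_fraction = 0.015       # ~1.5% of the human genome is protein-coding

expected_coding_hits = total_new_mutations * coding_fraction
print(expected_coding_hits)   # ~1.5 mutations per generation in coding DNA
```

Note that this dilution only lowers the deleterious rate if the total mutation count per genome is roughly fixed; with a constant per-nucleotide rate, the number of coding hits depends only on the size of the coding region, a point DaveScot raises below.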

    Sex also plays a hugely important role in this process. The larger a genome is, the less likely it is that there will be exactly the same mutation in exactly the same location in the two genomes that get combined during fertilization and sexual recombination.

    Lynn Margulis (in her book The Origin of Sex, coauthored with her son, Dorion Sagan) has proposed that this was the original reason why sex evolved: not as a means of increasing genetic diversity among offspring, but rather as a means of providing a spare copy of genetic material for error checking with every new generation.

    This hypothesis is strongly supported by multiple lines of evidence, including the fact that such error correction does in fact take place during meiosis I in diploid eukaryotes. This process is further enhanced by crossing over, which has the effect of recombining good copies of genes from what were originally separate genomes in one copy. Obviously, this also creates a “mirror” set that has both of the “bad” copies, but this one gets used in only half of the gametes, which are made in such large quantities in males that the probability of a positive outcome from a recombined “good” set outweighs the probability of a negative outcome from a “bad” set (especially if the “bad” set lowers the viability of the sperm cells prior to fertilization).

    Once again, Dr. Sanford’s assumptions do not reflect biological reality any better than the overly simplified assumptions upon which the “modern evolutionary synthesis” was based. Fortunately, evolutionary biologists have begun to recognize such deficiencies and move on. I hope John eventually does so as well, although the fact that he massaged the numbers to reify a hypothesis he had chosen for reasons not related to science (i.e. his absolute commitment to the “young Earth” hypothesis) does not augur well in this regard.

  144. bFast asked:

    “In this context, what exactly is a replication — each time a human cell replicates, or each time a human replicates?”

    Both; this is why the probability of a cell becoming cancerous increases with each replication. This is why we tend to get cancer as we age, and why we tend to get cancer in tissues composed of continuously dividing cells — skin, lining of the digestive system and lungs, lining of the ducts in mammary glands and the prostate, testicles, and bone marrow.

  145. Allen

    Huge amounts of junk DNA can’t act as a mutation sponge if the mutation rate is constant in junk and non-junk. However, if the non-junk DNA in a human is as small as, say, the malaria parasite’s, then that works out, as the total errors in the functional DNA would be small enough that most copies would be perfect. This, however, doesn’t make much sense from an engineering viewpoint, as the total amount of human DNA already makes it highly questionable whether much of it can be junk. There’s too much additional complexity in a human compared to a malaria parasite, and that complexity has to be encoded somewhere. Even if every scrap of human DNA is functional, it beggars belief that it’s enough information to build a human.

    Recombination can’t be the savior either, as only natural selection is capable of telling a good allele from a bad one and culling it. Error checking requires a test of some sort to discriminate between errors and non-errors. How is the discrimination test made during recombination? Natural selection is such a test, but obviously that requires growing the organism out long enough for differential reproduction to manifest enough to cull the less successful mutants.

    It’s the practical inability of natural selection to select one allele at a time that’s the root of the entropy problem. Selection only selects whole genomes, so it must consider the good, the bad, and the nearly neutral mutations altogether. How would that discrimination be operative in any other way? Where’s the test? Proofreading can only be done as long as you have an original copy to compare to the new copy. In recombination neither copy is the proof copy, so there’s no way to test for errors (or improvements) except through differential reproduction.
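[Editor's note: the claim that selection on whole genomes lets nearly neutral deleterious mutations slip through can be illustrated with a toy simulation. All parameter values here are illustrative, not drawn from the literature or from Sanford's models.]

```python
import math
import random

def poisson(rng, lam):
    """Poisson sampler (Knuth's method; fine for small lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mean_load(s, pop_size=100, generations=100, mut_rate=1.0, seed=42):
    """Each individual carries a count of deleterious mutations
    ('load'); parents are drawn with probability proportional to
    whole-genome fitness (1 - s)**load, and each offspring adds new
    mutations. Returns the population's mean load at the end."""
    rng = random.Random(seed)
    loads = [0] * pop_size
    for _ in range(generations):
        weights = [(1 - s) ** k for k in loads]
        parents = rng.choices(loads, weights=weights, k=pop_size)
        loads = [p + poisson(rng, mut_rate) for p in parents]
    return sum(loads) / pop_size

# Strongly deleterious mutations (large s) are held near the classic
# mutation-selection balance; nearly neutral ones (tiny s) look almost
# identical to selection and accumulate roughly one per generation.
print(mean_load(s=0.3))     # stays low
print(mean_load(s=0.001))   # climbs toward ~generations * mut_rate
```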

  146. DaveScot wrote:

    “Primordial germ cells, on the other hand, differentiate very early in embryonic development. Presumably that’s to limit the number of downstream replications and thus limit the number of DNA replication errors.”

    Exactly right, as is your analysis of the etiology of cancer as well.

    This entire discussion has circled around an elephant in the room: the fact that we are here provides prima facie evidence that there is clearly something wrong with the “genetic entropy” hypothesis (unless one agrees with Dr. Sanford that the universe and everything in it is less than 10,000 years old). IOW, as DaveScot has pointed out, there must be at least one mechanism that compensates for the surprisingly rapid decay of the genome over time. My guess is that there are multiple mechanisms, probably added in a stepwise fashion as genomes increased in size as the result of gene duplication, genome fusion, virus and transposon insertions, accumulation of tandem repeats, etc.

  147. DaveScot wrote:

    “Proofreading can only be done as long as you have an original copy to compare to the new copy.”

    Not necessarily; the copy that is used for the proofreading almost certainly comes from the set that was provided from the other parent following fertilization. During the first division of meiosis (Meiosis I), the two chromosomes that make up each homologous pair (i.e. one from each parent) line up in register, a process called synapsis. They remain in this condition for a surprisingly long period of time (indeed, in female mammals, it lasts from before birth until the eggs are fertilized, which in humans can be longer than 40 years). This combination of two double-stranded chromosomes is called a “tetrad”.

    While the homologous chromosomes are lined up “in register” (meaning the genes on the two copies are lined up next to each other) a large protein complex, called the recombination complex, works its way along the tetrad. Parts of the complex, called recombination enzymes, check for differences between the two copies. When a difference is detected, it can be corrected using the undamaged code in one of the other strands.

    How can undamaged code be recognized? There are several mechanisms, all having to do with specific sequences (especially in promoters). And, of course, sometimes the “proofreading” recombination complex makes a mistake and uses a “bad” copy as the template for a repair. In many cases, this is caught, as it eventually causes the release of a chemical signal that stops the completion of meiotic division. If it isn’t caught, a “bad” gamete gets made, but this will presumably be eliminated by phenotypic selection.

    IOW, there is indeed a whole set of “proofreading” mechanisms in eukaryotes that tends to reduce the frequency of deleterious mutations actually making their way into a population via sex and reproduction. None of these proofreading mechanisms are accounted for in Dr. Sanford’s mathematical models of “genetic entropy”, which apply only to point mutations in DNA sequences. This is yet another reason to suspect that the explanation for why such models do not match observed reality (i.e. the fact that eukaryotes, including us, are still around) is that they do not model reality precisely enough to be meaningful. Interesting yes, but irrelevant to the analysis of actual biological reality.

  148. I made the comment on another thread a couple days ago that the species of the world seem quite healthy, especially humans, as we live longer and are heartier when we are fed correctly. Species extinctions seem due more to human interference than to deterioration of the line’s viability.

    So those biologists predicting doom and gloom for the species of the world due to genetic mutations seem to lack any empirical evidence. We have descriptions of humans and other animals going back over 4000 years or 2/3 of the supposed history of the earth and all is fine. While we may not be better than our Greek or Persian ancestors, we certainly are not worse.

    Of course those were the times of heroes such as Achilles, Odysseus, Roland, Gilgamesh and King Arthur, and maybe we are on a downhill slide. No more supermen; we only read about them in stories like the Iliad, Beowulf etc. Oh for the good old days when humans were giants.

  149. Allan_MacNeill:

    This entire discussion has circled around an elephant in the room: the fact that we are here provides prima facie evidence that there is clearly something wrong with the “genetic entropy” hypothesis (unless one agrees with Dr. Sanford that the universe and everything in it is less than 10,000 years old). IOW, as DaveScot has pointed out, there must be at least one mechanism that compensates for the surprisingly rapid decay of the genome over time.

    I think you will find that we all are of the mind that Sanford has taken a very exaggerated view of the genetic entropy problem. However, I still see a fundamental problem once we experience 1 mutation in active DNA (DNA that does something, and, for that matter, epigenetic material that does something) per generation (birth to birth of offspring (b to b)).

    As far as how many mutations a human has, I think there’s a lot to be said for identical twin studies. If, as has been suggested, the germ cells are separated off early in the development cycle of mammals, then let the seed of two identicals be analysed for differences. We will then have an empirically determined count of mutation rate per (b to b) generation.

    I think that the general theme of the discussion here is that there must be some unknown preservative(s). We are not suggesting that these preservatives are necessarily direct acts of God, nor that they are necessarily “unnatural” in any way.

  150. I have a question. Is it only the female gamete cells that are separated off early in development? Aren’t sperm cells constantly being produced from a germ cell, and as such subject to mutations as much as any other cell?

  151. Right now I am reading a history of the genome by Henry Gee and it starts with Aristotle and others trying to explain how new life is formed. Right now I am up to Darwin and the chapter after next is on Mendel which is appropriate for my comments below.

    Gee said the problem with Darwin’s ideas was always the source of variation. Natural selection works fine on the current gene pool given enough time. But how does variation arise? This is relevant here since we have occasionally invoked Dr. MacNeill’s 47 engines of variation. So a rightful area of enquiry is just how well these engines generate variation, or whether what they really generate is random dysfunctional genomes.

    However, a second issue was raised by Gee and that is that natural selection needs time and lots of it to do its work and essentially what you get are variations of the original that look a little different and have some other modest changes to the phenotype.

    But here is another real problem, where Dr. MacNeill is leading us to: are there ways to jump-start the genetic changes or other organic changes that are necessary to explain the more dramatic changes we have seen in the world, and which would happen too slowly according to modern genetic theory?

    Dr. MacNeill’s specialty is evolutionary psychology, which I personally always looked upon as related to alchemy, astrology, etc. Suppose there were a psychological outlook that favored things such as religion, altruism, or some other desirable trait. If it was based on some genomic DNA combination, it could not be transmitted to future generations except through typical population genetics, and this takes ages.

    So it sounds like these other three dimensions being discussed by Jablonka and recommended by Dr. MacNeill are meant to pave the way for faster changes in species of various traits, and have nothing to do with variation generation, which has been our perception of the real Achilles heel of the modern synthesis. So now the genetic half is under assault, and it seems at first glance that this is because it is necessary to implement faster changes in the genomes of species so that things like evolutionary psychology or other pet theories can be viable.

  152. bFast wrote:

    “I think that the general theme of the discussion here is that there must be some unknown preservative(s). We are not suggesting that these preservatives necessarily be direct acts of God, nor are necessarily “unnatural” in any way.”

    Nor did I suggest that they were. I think the short list of potential error correction mechanisms that I provided above is a first approximation to an answer to the apparent paradox that Alexey Kondrashov expressed when he asked “Why are we not dead 100 times over?”

  153. jerry (#150):

    It is true that spermatogenesis is constantly active during reproductive life, but that does not necessarily mean that a lot of cell divisions take place. Indeed, the final part is similar both for oogenesis and spermatogenesis (one mitotic division and two meiotic divisions). Spermatogenetic cells, like other actively differentiating cell compartments (hemopoiesis is a good example), are maintained from a higher stem cell compartment (spermatogonia). Stem cell compartments are usually characterized by two properties: self-maintenance and the ability to differentiate. In other words, a stem cell compartment is usually small (few cells), and may have a very slow reproductive activity, but that activity realizes two different tasks: providing differentiating cells for the downstream differentiation, and at the same time maintaining the number of undifferentiated precursors.

    Anyway, as certainly many more spermatocytes are produced in the reproductive life of a male than oocytes in a woman, it is certainly possible that the spermatocytic compartment undergoes a higher number of cell divisions.

    It could be interesting to observe that other factors can certainly influence genetic errors, beyond the number of cell divisions. In particular, especially in oocytes, the age of the cell is known to be an important risk factor for chromosomal abnormalities, even if all oocytes undergo the same number of cell divisions.

    Regarding the ability of cells, at least in humans, to check for errors in DNA duplication, that is a well established fact. Various checkpoints are known in the cell cycle, where specific molecules (usually extremely important also in carcinogenesis) can stop the cycle to allow the cell to “correct” some genetic error generated in DNA duplication, or start the process of apoptosis (controlled death) if the errors couldn’t be repaired.

    That is well known. Anyway, I don’t think that we really know “how” those checkpoints and molecules “recognize” errors. That is a very interesting field of research, and it is certainly being extensively worked out, also because of its important implications in medicine, especially for cancer. But the answers are probably still very far away.

  154. If the error detection and correction mechanisms are too effective, that’s a problem for Darwinism, too. If genomes aren’t allowed to mutate, then evolution won’t occur, only recombination of existing genes. How did the genes originate?

    Also, the presence of error detection and correction mechanisms doesn’t matter provided that they treat potentially beneficial mutations the same as potentially detrimental mutations. The real question is: Can natural selection prevent beneficial mutations from being overwhelmed by detrimental mutations? The answer to that will depend, in turn, on the answers to questions such as:

    1. What is the ratio of beneficial mutations to detrimental but not deadly (i.e., not culled by NS) mutations?
    2. What is the relative magnitude of the average benefit provided by a beneficial mutation in comparison to the magnitude of the average damage caused by a detrimental mutation?

    And in the final analysis, if one were to suppose materialism, it seems the answer to such questions would be a function of how “lucky” the universe is.

  155. Allen_MacNeill at 146

    This entire discussion has circled around an elephant in the room: the fact that we are here provides prima facie evidence that there is clearly something wrong with the “genetic entropy” hypothesis (unless one agrees with Dr. Sanford that the universe and everything in it is less than 10,000 years old). IOW, as DaveScot has pointed out, there must be at least one mechanism that compensates for the surprisingly rapid decay of the genome over time.

    May I encourage you to reread Sanford’s arguments and reexamine your assumptions and conclusions. I read Sanford’s arguments as being independent of the age of the earth.

    Sanford (Ch. 3, p. 33) cites Muller (1950) to the effect that with

    “one deleterious mutation per person per generation, long term genetic deterioration would be a certainty.”

    Sanford then cites the experimental literature showing the rates to be higher than that. Then he states (p 34)

    Even if we were to accept the lowest estimate (100 mutations), and further assumed that 97% of the genome is perfect neutral junk, this would still mean that at least 3 additional deleterious mutations are occurring per person per generation.

    i.e., above Muller’s critical level.

    As a separate complementary argument, Sanford further states:

    “the human mitochondrial mutation rate has been estimated to be about 2.5 mutations, per nucleotide site, per million years (Parsons et al, 1997). Assuming a generation time of 25 years and a mitochondrial genome size of 16,500, this approaches one mitochondrial mutation per person per generation within the reproductive cell line. Mitochondrial mutations, just by themselves, probably put us over the theoretical limit of one mutation per three children!”

    i.e. based on the cumulative rate of mitochondrial mutation accumulation per generation – presumably AFTER all error correcting methods.
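[Editor's note: the arithmetic in the quoted passage checks out, as a single line of calculation shows, using only the figures given in the quote.]

```python
# Reproducing Sanford's mitochondrial arithmetic as quoted above.
rate = 2.5e-6        # mutations per nucleotide site per year
                     # ("2.5 per site per million years", Parsons et al. 1997
                     #  as cited by Sanford)
generation = 25      # years per generation, as assumed in the quote
genome = 16_500      # mitochondrial genome size in bp, as quoted

per_generation = rate * generation * genome
print(per_generation)   # ~1.03 mitochondrial mutations per generation
```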

  156. J at 154
    Good observation on the paradox of error correction. On your query:

    1. What is the ratio of beneficial mutations to detrimental but not deadly (i.e., not culled by NS) mutations?

    See bornagain77’s quotation of Sanford:

    “I have seen estimates of the ratio of deleterious to beneficial mutations which range from one thousand to one, up to one million to one. The best estimates seem to be one million to one (Gerrish and Lenski, 1998). The actual rate of beneficial mutations is so extremely low as to thwart any actual measurement (Bataillon, 2000; Elena et al, 1998).”

    See: Gerrish, P.J., and Lenski, R. 1998, The fate of competing beneficial mutations in an asexual population. Genetica 102/103: 127-144.
    Bataillon, T. 2000, Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497-501.
    Bataillon (2000) states:

    In all the studies, the mean fitness or mean of fitness related traits declined over time, suggesting that the net effect of spontaneous mutation is indeed deleterious (an exception is Shaw et al., 1999). Mean decline of the fitness components of MA lines ranged from 0.1% to 1-2% per generation.

    See also: Elena, S. F. et al. 1998. Distribution of fitness effects caused by random insertion mutation in Escherichia coli. Genetica 102/103:349-358.
    On your query:

    2. What is the relative magnitude of the average benefit provided by a beneficial mutation in comparison to the magnitude of the average damage caused by a detrimental mutation?

    Sanford in Genetic Entropy shows fitness from “benefits” to be swamped by “detrimental mutations” using literature based population dynamics models.
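[Editor's note: the per-generation fitness declines quoted from Bataillon (2000) above can be compounded directly, which shows how quickly even a small per-generation decline erodes fitness. The 100-generation horizon is an editorial choice for illustration.]

```python
# Compounding the per-generation declines quoted from Bataillon (2000):
# a decline of d per generation leaves (1 - d)**n of fitness after n.
def remaining_fitness(d, n):
    return (1 - d) ** n

for d in (0.001, 0.01, 0.02):        # 0.1%, 1%, 2% per generation
    print(d, remaining_fitness(d, n=100))
```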

  157. DLH,

    What are Sanford’s arguments put in simple terms? I have the book, which I got 15 pages into and never continued. Maybe I will start it up again and see what it says.

    If he says that genomes are deteriorating then what evidence does he have of this outside of his own calculations? There is certainly no evidence of this in the world around us. Everything looks fit to me.

    Or maybe I am missing the essential message of Sanford’s work.

  158. jerry at 157
    Yes, please read through Sanford. In summary,

    Natural Selection cannot remove numerous near neutral mutations.

    Harmful mutations are far more prevalent than beneficial mutations.

    Consequently, harmful mutations accumulate as a “genetic load” decreasing fitness.

    Inherited diseases are one evidence of increasing degradation. e.g., see:
    Tay Sachs Disease at NIH & at Wikipedia

    Sanford cites numerous publications showing genetic evidence as well as population genetics models.

    Sanford’s conclusion: “The Emperor has no clothes”

    Careful analysis, on many levels, consistently reveals that the Primary Axiom is absolutely wrong.

    On whether Genomic Entropy exists, besides genetic evidence, Sanford graphs the biblical records of declining lifespan Y as generations X since Noah.

    Fitting the data to the “line of best fit” reveals a power-law curve following the formula Y = 5029.2 * X^-1.43. The curve fits the data very well, having a correlation coefficient of 0.90. This curve is consistent with the concept of genomic degeneration caused by mutation accumulation.

    A similar graph is posted at: Lifespans from Noah to Abraham
    http://www.worldwideflood.com/.....r_noah.gif
    Table 2: Lifespans from Noah to Abraham
    Ancient genealogies: Cooper and others have traced the genealogies of royal houses back much further than other historical records. See: After the Flood, Bill Cooper (1995, New Wine Press, PO Box 17, Chichester, West Sussex PO20 6YB England, ISBN: 1 874367 40 X). These provide ancient records that give further support to the above data.
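[Editor's note: a fit of the reported form Y = a * X^b is normally obtained by linear least squares in log-log space. The sketch below demonstrates the method on synthetic data generated from the reported coefficients; it does not use, and makes no claim about, the actual lifespan data Sanford fitted.]

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of Y = a * X**b in log-log space, the
    standard way to obtain a curve of the reported form."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    den = sum((u - mx) ** 2 for u in lx)
    b = num / den                  # exponent
    a = math.exp(my - b * mx)      # prefactor
    return a, b

# Synthetic data generated from a known power law, then recovered:
xs = list(range(1, 11))
ys = [5029.2 * x ** -1.43 for x in xs]
a, b = fit_power_law(xs, ys)
print(a, b)   # recovers a ~ 5029.2, b ~ -1.43
```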

  159. DLH,

    You are certainly entitled to your opinions. But I hope you realize that no one is going to take this seriously except those who hold a certain religious ideology.

    I have read several books and watched dozens of videos on ancient history in recent years, and have been to Greece, Roman provinces and other places in the ancient world, and not once was there any indication of unusually long life amongst anyone. There is the occasional really old person, but we still have a couple of veterans from WWI.

    If anything the people of ancient times died young because of bad sanitation. They have documents from Greek and Roman times of families’ life spans and none indicate anything any different than today. Socrates was considered an old man at 70.

    We are living longer today than any society in the history of mankind, and people are quite active into their 70s and sometimes 80s, and this is a countertrend to history. The reason for this is better nutrition, more active lives and modern medicine.

    I am not an expert on how mutations would have affected life spans but I would have predicted a much different chart. One that shows little degrading at first and then an accelerating effect as the mutations accumulated to drastic proportions.

    So I would turn your chart upside down and backwards. This would show a slow decline at first and then an accelerated decline as time went on. Such a chart would show doom only a generation or two ahead.

  160. Jerry at 159
    Please read Sanford and address the quantitative referenced arguments he presents.

    This equation and graph fit the recorded lifespan data for the period up to 2100 BC; they are not my opinion. In terms of shape, the curve is very similar to Crow’s fitness decline model, which Sanford shows in Figure 10b. Sanford notes

    Schoen et al. 1998 have modeled almost identical fitness decline curves which arise from mutation accumulation.

    Crow, J. F. 1997. The high spontaneous mutation rate: is it a health risk? PNAS 94:8380-8386.
    Schoen, D. J. et al. 1998. Deleterious mutation accumulation and the regeneration of genetic resources. PNAS 95:394-399.

    If you think the shape should be different from the historic data, or models by Crow, or Schoen et al., please provide the model and data to support it.

    You appear to be referring informally to a later period.
    Greece: cf. Themistocles, 525 BC
    Roman Republic: 500 BC on.

    If you have other data for the period before 2100 BC, please provide it.

    PS For further ancient records see: Mike Gascoigne History – From Creation to Modern Times

  161. Allen_MacNeill at 126

    . . . we need something that could be called the “phenome”. That is, the sum total of all of the structural and functional components by means of which organisms construct and operate themselves.

    One consequence of your “phenome” focus is that there may well be variations in the “phenome” separate from the genome.

    Furthermore, there may well be errors in replication of the phenome independent of errors in the genome.

    See Jonathan Wells and Paul Nelson Homology: A Concept in Crisis Origins & Design 1997 theapologiaproject.org
    That gives examples of variations beyond the genome.

  162. DLH,

    Recorded data by whom? Do you have multiple sources for this data? When was it recorded? Where are the archaeological records to back this up?

    I once asked a biblical scholar who was excavating sites in Israel about the stories in the bible and external sources for similar information, specifically about the existence of David, because I thought I had recently seen something about a find relevant to him. She mentioned a recent excavation find that mentioned David, and said that so far it was the only external reference to him she knew of, and that neither David nor Solomon has much reference outside the bible. She also said there is more and more evidence for some additional events, but so far the evidence is slim.

    So I would be careful about using a book with little outside verification as a source for anything scientific. It may go down well with Christians, but not even with all Christians, and probably not at all with the outside community.

    By the way, I accept the bible as a guide to how to lead one’s life and accept the new testament as a mostly accurate description of what happened. I believe that all the people did exist and that most of the stories are probably portrayals of things that happened in the past, but like all oral stories they probably got modified over time. I would not use them as an accurate genealogy. The new testament was written down within the lifetimes of the witnesses and as such has a lot more believability as to the accuracy of the events.

    As I said a few weeks ago on another thread. I prefer Galileo’s quote, that the bible tells you “How to go to heaven, not how the heavens go” nor is it the basis for any other scientific theories including the ages of the various people or the age of the earth and universe.

    You certainly can believe what you want, but it cannot be offered up as science or as completely accurate history.

    I briefly looked at the Crow and Schoen articles and could not find anything to support your biblical chart. If you want to claim the bible as an authoritative source for human life spans, you will have a very narrow audience. You will be preaching to the choir.

  163. As I said a couple of times, I am currently reading a book by Henry Gee about the genome, and in it are several things relevant to the discussion on this thread. For example, some researchers have examined the gestation of the fruit fly in detail and have correlated various parts of the genome with each developmental stage during gestation. So there does not seem to be any need to go outside the genome to seek an explanation for what happens during gestation.

    There is a very complicated network that ensures the gestation of the fruit fly and three researchers received the Nobel prize for analyzing this network and thousands of its interactions: Eric F. Wieschaus, Edward B. Lewis, Christiane Nüsslein-Volhard. This is part of what is known as evo devo.

    One interesting thing is that Gee says all the enzymes necessary for development are already in the egg before fertilization, and only later will the enzymes that are needed be created by transcription. So this sounds as though everything is still within the genome for gestation, since those enzymes must have originated by transcription at some time prior to fertilization.

    He also implies that the shape and position of the egg, and of its later subdivisions, affect the environment and thereby what may affect transcription at the various stages of gestation. The number of ways the various genes can be activated by external factors can also create a very elaborate sequence of using various parts of the genome to produce a myriad of effects.

    Someone said that 2^25,000 is a lot of states, and various combinations can have different effects on what is going on. So there might be enough in the genome to create an extremely complex product. But what picks the particular states, or whether this is really how it is done, is still a mystery.
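The magnitude of that figure is easy to check directly. A quick sketch, assuming the “2^25,000” means one binary on/off state per gene for roughly 25,000 genes (an illustrative reading, not a claim about actual gene regulation):

```python
import math

# One binary on/off state per gene, for ~25,000 genes:
states = 2 ** 25_000

# Number of decimal digits in that count: floor(25,000 * log10(2)) + 1
digits = math.floor(25_000 * math.log10(2)) + 1
assert digits == len(str(states))  # cross-check against the exact integer
print(digits)  # → 7526
```

So the count of combinations is a number more than 7,500 digits long; for comparison, the number of atoms in the observable universe is usually put near 10^80, an 81-digit number.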

    So according to the current theory of evolution there is no need to invoke the extra dimensions proposed by Jablonka. But the current theory is still deficient on what causes the variation, and on how much has actually been introduced by naturalistic means into the gene pools of populations. Based on what I am reading, this is still the Achilles heel of all naturalistic views of evolution.

    Gee waxes on about how wonderful the whole idea of sexual reproduction is, because it produces so much variety in the world and not just a series of clones that all look alike. He almost forgets he is a Darwinist and that this process is blind and has no objective. He is in love with the process but does not seem to realize that this incredibly complicated system had to have someone who designed it.

    Natural selection and sexual reproduction are truly great design, but only because someone set them up and then gave them the variation for these processes to produce all the wonderful richness we see in the world today. No natural source of variation has ever been demonstrated. Without a source of variation, natural selection and sexual reproduction might be very sterile processes.

  164. Jerry at 162
    The Crow and Schoen references show declining-fitness models similar in shape to the fitness-decline curve Sanford presents and to the declining-lifespan graph shown by Sanford and above.

    The data is obviously from the biblical genealogies.

    It provides supporting evidence from ancient records, whose authors probably never dreamed of declining fitness from mutations or of population-genetics models.

    The ancient royal genealogies evidence strict verbatim oral records carried down through many generations before written documents existed. They provide complementary data that lends credence to the biblical record.

    like all oral stories probably got modified over time.

    Please read these books by Cooper and Gascoigne before dismissing them out of hand. Why throw out the data that exists? Because it is politically incorrect? Incredulity does not invalidate the data.

    PS: Does finding David’s palace count as evidence?

  165. DLH, Allen, Jerry et al:

    I have observed the back-forth over the past few days.

    It is clear that, while the discussion of Sanford’s genetic entropy thesis and the associated questions over mutation rates is an interesting one, it is largely a side issue.

    For, the real “elephant in the middle of the room” is how we get TO the genomes and epigenetics that allow for the sort of system that is being perturbed by mutations.

    First, at origin of life. For that, let Leslie Orgel speak, even from beyond the grave (noting that the same objection obtains for his own favoured RNA world type scenarios, as Shapiro pointed out in his own earlier Sci Am article):

    Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . .

    The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help . . .

    Then, as Meyer aptly pointed out [and as was cited at 90 above], at body-plan level biodiversity [bearing in mind that the Cambrian "revolution" shows body plans "first" in the record]:

    One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6

    In short, it is plain that there is no credible evolutionary materialistic chance + necessity only dynamic process to originate cell level life systems with their functionally specified complex information far beyond the credible upper bound for random walks, from initial conditions in plausible prebiotic “soups” of whatever sort.

    Then, there is no credible evolutionary materialist mechanism for the onward origination of the diversity of body plans we see today and in the fossil record.

    In short, the evolutionary cascade from hydrogen to humans, however confidently put, rests on the a priori exclusion of intelligence (the only known observed source of FSCI) rather than on evidence and non-question-begging logic.

    Students and the public have a right to know that, a right that is too often suppressed.

    GEM of TKI

    PS: Jerry — you would be well advised to bear in mind the extraordinary degree to which contemporary modernist theology and in that general context NE archaeology is far too often driven by a priori commitments to selectively hyperskeptical, secularism-driven assumptions and assertions. (And in that context, there is a lot more archaeological support for the Bible, both NT and OT, than is likely to be admitted in today’s intensely polarised, militantly secularist academic climate.)

  166. kairosfocus,

    Two things.

    First, this discussion is about the modern synthesis and what may replace it and has nothing to do with OOL. We all understand the OOL problem more or less but the issue here in this thread is the mechanism for change of multicellular life.

    Dr. MacNeill shed some light on the objections to the modern synthesis by many evolutionary biologists and a lot of the discussion has revolved around that. Some others have introduced Sanford’s ideas which are YEC ideas and as such should receive special scrutiny before they are accepted.

    Second, this is not a site for Evangelicals to press their religious views as science. If they want to, it is up to the moderators to say whether that is OK or not, but I will continue to call the science as I see it and will continue to dispute any claims using the bible as scientific evidence for either biology or cosmology.

    To me it perverts the basic objectives of this site. Read the manifesto in the upper right hand corner and substitute ” fundamentalist religious” for “materialist” and see if the essence of this declaration changes much.

    Many here do not agree with me when I say that I believe the YEC’s are a problem for ID credibility. Where I live fundamentalist Christians especially YEC’s are very suspect. And I am being kind with that description of people’s attitudes toward their beliefs. Now they have the right to say “we don’t care” but I am talking about decent religious people who hold these views.

    I have no problem with DLH’s article on the recent archaeological findings in Jerusalem, and I await what they find with further excavation. But I believe that wishful thinking driven by ideological reasons will never replace hard evidence. I am as tough on the Darwinists as I am on others who use ideology to advance their position. I object to Darwinists, YECs and TEs because each uses religion, or religion-like motivation, to justify their science. And this is supposed to be a site for science.

    Do you care if I offend the Darwinists here because I press them on the basis of their beliefs? If not, then you should not care if I offend those who let religion rule their beliefs about science either. To me they are guilty of the same sin.

  167. Jerry at 165

    “We all understand the OOL problem more or less but the issue here in this thread is the mechanism for change of multicellular life.”

    Do we? Does anyone? OOL is Darwinism’s Achilles heel. You would do well to heed kairosfocus in reemphasizing it. Without abiogenesis, Darwinism has no foundation to stand on.

    “Some others have introduced Sanford’s ideas which are YEC ideas and as such should receive special scrutiny before they are accepted.”

    Are they? Dig into his work and I think you will find his arguments are based on works published by evolutionists, and are independent of YEC. Yes, Sanford’s OOL model will be used to try to dismiss those arguments, and obviously Sanford would see such evidence as supportive of his origin ideas. But foundationally, he takes evolutionists’ models and shows that they disprove the Primary Axiom. Do NOT dismiss Sanford. I believe the flood of genetic evidence on mutations will be the quantitative data that drowns evolution, and Sanford’s collection of evolutionists’ models is very important for that.

    “Second, this is not a site for Evangelicals to press their religious views as science.”

    Check your presumptions and impressions against the data. I showed you recorded data with supporting evidence from numerous ancient royal genealogies, not my beliefs (of which you know little).

    “will continue to dispute any claims using the bible as scientific evidence“

    That exposes your bias. Look at the data without your colored glasses.
    The majority of 18th- and 19th-century objections to the bible have been refuted by archaeological evidence.

  168.

    Mr. Nelson,
    Can you say where the link to the actual Sandwalk blog post is?

    sincerely,
    d. grey

  169. dennis grey
    From the second link in the main post, see: fascinating long piece by journalist Suzan Mazur about an upcoming (July 2008) evolution meeting at the Konrad Lorenz Institute in Altenberg, Austria.

  170. William A. Dembski, The Design Revolution (2004), pp. 41-43:

    Intelligent design needs to be distinguished from creation science or scientific creationism. The most obvious difference is that scientific creationism has prior religious commitments whereas intelligent design does not. Scientific creationism is committed to two religious presuppositions and interprets the data of science to fit those presuppositions. Intelligent design, by contrast, has no prior religious commitments and interprets the data of science on generally accepted scientific principles. In particular, intelligent design does not depend on the biblical account of creation. The two presuppositions of scientific creationism are as follows:
    • There exists a supernatural agent who creates and orders the world.
    • The biblical account of creation recorded in Genesis is scientifically accurate.

    Proponents of scientific creationism treat the opening chapters of Genesis as a scientific text and thus argue a literal six-day creation, the existence of a historical Adam and Eve, a literal Garden of Eden, a catastrophic world-wide flood and so on. Scientific creationism takes the biblical account of creation in Genesis as its starting point and then attempts to match the data of nature to the biblical account.

    Intelligent design, by contrast, starts with the data of nature and from there argues that an intelligent cause is responsible for the specified complexity in nature…

    Scientific creationism’s reliance on narrowly held prior assumptions undercuts its status as a scientific theory…

  171. Jerry wrote:

    “No natural source of variation has ever been demonstrated. Without a source of variation, natural selection and sexual reproduction might be very sterile processes.”

    As I pointed out recently at my blog, we now know of at least 47 major natural sources of phenotypic variation:

    http://evolutionlist.blogspot......awman.html

  172. Kairosfocus wrote:

    “…mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.”

    Not true. Indeed, mutations in developmental regulatory genes are well known, and are the basis for many emerging models of major phenotypic change. For example, a mutation in the FOXP2 gene (mostly inactivating it) is one of the primary reasons for the phenotypic anatomical and functional differences between humans and chimpanzees (and all other primates). A mutation in another developmental gene (BMP4) also contributed to the evolution of the human vocal apparatus, shortening the muzzle and modifying the attachment points of the muscles used in speech. Also, it is interesting to note that both of these mutations inactivated the original functions of these genes, rather than “enhancing” them. They were, in other words, deleterious mutations, which nevertheless set the stage for the evolution of human speech.

  173. kairosfocus wrote:

    “Then, there is no credible evolutionary materialist mechanism for the onward origination of the diversity of body plans we see today and in the fossil record.”

    Untrue. Sean Carroll’s book, Endless Forms Most Beautiful contains a concise and lucid description of precisely this process, supported by a growing mountain of empirical data.

  174. DLH wrote:

    “OOL is Darwinism’s Achilles heel.”

    On the contrary, the problem of the origin of life has virtually nothing to do with evolutionary biology. Darwin did not mention it directly in any of his published works, and never speculated publicly on the subject at all.

    My own position on this problem is that, given the immensely long period of time that has elapsed since the origin of life, the rocks that were formed during this period no longer exist at the Earth’s surface (they are either buried so deeply as to be inaccessible, or have been destroyed by tectonic subduction). Furthermore, molecules do not fossilize, and so speculation about the chemical origins of life will always remain precisely that: speculation, unsupported by direct empirical evidence.

    As all of the participants in my summer seminar on evolution and design at Cornell agreed (including the ID proponents), the question of the origin of life has virtually no bearing on the origin of phenotypic variation or mechanisms of descent with modification, both of which are the core of evolutionary biology. Indeed, even the ID proponents in the seminar agreed that “Darwinism” (i.e. the theories proposed by Darwin himself, and modified by evolutionary biologists since then) are not affected in any way by the debate over the origin of life, nor will they be if this debate is ever resolved on the basis of future empirical discoveries.

    Therefore, I agree with Newton: “I make no hypotheses”, and do not address questions of the origin of life as an evolutionary biologist.

  175. DLH wrote:

    “Without abiogenesis, Darwinism has no foundation to stand on.”

    Untrue. Once again, virtually none of the theories of evolutionary biology depend in any way on the resolution to the question of the origin of life. Daniel Dennett made this point in Darwin’s Dangerous Idea when he noted that Darwin “started out in the middle” by proposing a theory for descent with modification (what we now call “evolution”) and the origin of adaptations. His proposal did not address the origin of life at all, as a brief rereading of the summary of his argument in chapter 14 of the Origin of Species indicates:

    http://darwin-online.org.uk/co.....;pageseq=1

    Again, I agree with Jerry: disputes over the origin of life are diversions from the real questions about descent with modification and the origin of adaptations, which were and are the core subjects of the theories of evolutionary biology.

  176. Sorry; the link that I posted to chapter 14 of the Origin of Species was incorrect. Here is the correct link:

    http://darwin-online.org.uk/co.....ageseq=477

  177. Here’s a reference for FOXP2:

    Nature
    Posted: 22 August 2002
    Title: Molecular evolution of FOXP2, a gene involved in speech and language
    Summary: Two normal copies of FOXP2 are necessary for language articulation. Alterations in amino acid sequence and nucleotide polymorphisms implement FOXP2 selection for human evolution. The gene encodes a protein of 715 amino acids and is classified as a forkhead transcription factor.
    Only three amino acids differ between comparisons of human and mouse FOXP2. Two of the three amino-acid differences between humans and mice occurred on the human lineage after the separation from the common ancestor with the chimpanzee. A change in amino acid 325 suggests a potential site for phosphorylation by a protein kinase. A study of 91 individuals revealed only one discrepancy in amino acid sequence. Researchers hypothesized that the gene is responsible for orofacial movements. They hypothesize that the gene may be responsible for the expansion of modern humans.
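The kind of tally described above (three amino-acid differences between the human and mouse proteins, two arising on the human lineage) is a simple positional comparison of aligned sequences. A minimal sketch, using short made-up toy sequences rather than the real 715-residue FOXP2 protein:

```python
def aa_differences(seq_a, seq_b):
    """Return 1-based positions where two equal-length aligned
    protein sequences carry different amino acids."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return [i + 1 for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

# Toy 10-residue example (NOT real FOXP2 sequence data):
human_like = "MTKQARSNLE"
mouse_like = "MTKQTRSNVE"
print(aa_differences(human_like, mouse_like))  # → [5, 9]
```

Real comparisons of this sort are done on aligned sequences from databases, and indels make the alignment step itself the hard part; the sketch only shows the counting.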

  178. Again, note that mutations in developmental regulatory genes such as FOXP2 and BMP4 cause surprisingly large phenotypic changes, a phenomenon totally unanticipated by the “modern evolutionary synthesis.” This, again, is why I have asserted that the “modern synthesis” (sometimes referred to as neo-Darwinism) has been replaced by a much more robust and empirically grounded theory explaining the major features of what Darwin called “descent with modification” (in a word, evolution).

  179. J:

    Thanks for the quote from Dr. Dembski. Precisely my point, and stated better than I could have.

  180. Dr. MacNeill,

    I am well aware of your 47 models of variation. In fact I mention them quite frequently. I don’t claim to understand all of them, but I bet that if each were explained in lay terms, it would be easy to know what to look for in changes in the genome and in the subsequent expanded gene pool.

    However, what are the documented cases of variation creation by these 47 methods, and what species creation did they lead to? I realize all may have happened, but what has been the payout? Have there been any instances of novelty creation by any of these processes? Which of these processes would lead to bats and their sonar, birds and their wings, giraffes and their special blood-pressure system, birds and their special oxygen-delivery system, mammals and their four-chambered hearts and warm-bloodedness, or humans and their long childhood development?

    I have read Sean Carroll’s book and did not find anything that pointed to any source of variation that would lead to the complexity we see. It is an interesting book. At one point he mentions that it would take 10,000 pages of small print to list the instructions on how to make a human. Breathtaking complexity. He explained how the complexity in a species probably arises during gestation, but not why or how the system that does this came about, other than by speculation.

    I find evo devo as supportive of ID because it has to resort to incredible complexity to explain just how it works, but cannot really explain how everything arose. Other than to beg the question and point to some magical unknown species that preceded the Cambrian Explosion that had the Hox genes, Pax genes and other tools that led to everything. Nearly all of Behe’s Irreducible Complexity examples arose during the Cambrian Explosion or in the magical unknown creature.

    Thanks for taking all the time to answer our queries. If you have time, look at the thread on gene expression and comment if appropriate. There is not much debate going on there, but there are attempts to understand what controls gene expression, and epigenetics has been mentioned and discussed a little.

  181. Allen_MacNeill:

    On the contrary, the problem of the origin of life has virtually nothing to do with evolutionary biology.

    This is a cop-out! Ever since it finally dawned on the OOL community that the first self-replicating organism wasn’t DNA-based, a huge chunk of pre-DNA life has been solidly the responsibility of the evolutionary biologist to figure out.

    Either the first self-replicating life actually was DNA-based, and therefore was an act of creation, or there is an evolutionary path from simple replicator to DNA-based life. If the latter, then it is the responsibility of the evolutionary-biology community to work out at least a feasible path. Even though so much has been lost in such ancient history that it may be impossible to determine whether a hypothetical path is the one nature actually took, it is still the responsibility of the evolutionary biologist to figure out whether, and where, such a path exists.

    I reject the “it’s not my issue” argument on this one.

  182. Allen_MacNeill:

    I understand that your position on the OOL question is the only one possible if one does not want to address the question of design. That’s formally acceptable, but totally wrong in the wider context of trying to understand whether our scientific models of reality are supported by reality itself, which after all is the main purpose of science. So your position is tenable if you restrict your aim to what you call “evolutionary biology”, but that means only that you are artificially separating evolutionary biology from science itself.

    My view of the question is as follows: OOL is a scientific problem, one of the biggest in science. You correctly point to a series of aspects which could be obstacles to its analysis, and I can partially agree. But science must not stop in front of obstacles. After all, scientists debate even more difficult topics every day (such as the big bang, which after all is the origin of everything), even if their models are certainly, at present, not accurate.

    The problem with OOL is not that we have no clue, from fossils or anything else, as to which of many possible models describes how it happened. The problem is that we have no possible model at all (unless you accept any of those suggested as possible, but in that case you do have an opinion, and we can discuss it). And science really has a duty to address facts (life exists, after all) for which it has no possible model. That’s how great scientific theories, such as relativity and quantum mechanics, have started.

    So, where is the relevance of all that to darwinism?

    It’s very simple. Even if the majority of the scientific community insists on remaining obstinately blind to the fact, it is a fact that there exists a model which can offer a perfectly rational and scientifically sound scenario for starting to explain both OOL and evolution (and probably many other things). That model is called ID; it has been proposed and carried forward by perfectly serious, intelligent and reliable scientists like Dembski, Behe and others, and it is there for anyone unprejudiced to consider.

    The ID model states that both OOL and the evolution of biological complexity (and, if we want to widen the perspective, the fine-tuning of the fundamental constants of the known universe) “cannot” in any way be explained by any known deterministic model. The ID model gives definite reasons for that, and performs a detailed analysis of those reasons.

    The ID model states that, anyway, we do have one model, derived from empirical experience, which allows us to conceive a causal explanation of those facts in a rational context: that model is design. Nothing else can do that.

    Design is an empirical reality. It is observed in human artifacts. It has specific recognizable characteristics, as Dembski has been arguing for years. CSI is a definite, very important concept. No reasoning individual should easily dismiss it.

    Design is recognizable, and design is recognizable in the highest degree in biological information.

    Facts are:

    1) Biological information had to appear where no biological information was present (OOL).

    2) Biological information had to “increase”, to generate the complexity and diversity we can observe today.

    None of those two points can be explained with any model which does not include ID. Both become perfectly amenable to scientific thought if the ID scenario is assumed.

    So, in my opinion, OOL “is” absolutely relevant to darwinism, unless one tries to artificially restrict scientific thought to categories which do not communicate. It is relevant because we have one theory, and only one, which can in principle explain both, and not a single other theory which can explain either of them, even separately.

    Indeed, I understand that you and others are trying to affirm, in different ways, that such a theory which can explain evolution does exist. I appreciate your attempts, but cannot agree with them. It is interesting, for instance, that your approach is forced in some way to reduce the “theory of evolution” more to a form of natural history (description of events) than to a real causal theory (mathematical/logical modeling of the causes of events). Please note that I have always spoken of mathematical/logical modeling. I agree that some empirical sciences may not have an explicit mathematical model, but I know of none, even the most rarefied (think of psychoanalysis, for instance), which doesn’t have a definite logical causal model. Because that’s what science is: it observes facts in nature (natural history) and builds up mathematical/logical models for their possible causes (scientific theories).

    The theories of evolution which we have, those who refute the ID scenario, all of them, do have logical models behind them, but they are inconsistent and not tenable. The same is valid for OOL theories.

    ID theory has a definite logical “and” mathematical model behind itself. It is perfectly rational and sound. It is consistent, in accord with data, and allows a scientific scenario to start exploring in the right perspective fundamental unsolved scientific problems.

    In the light of that, only obstinacy and prejudice can prevent the scientific community from accepting ID as a perfectly valid and very important scientific theory, to be discussed “and” pursued.

  183. Allen_MacNeill:

    About FOXP2. I am just commenting on what you posted; I am not an expert on the subject.

    It seems that the abstract you posted shows the typical methodology of evo-devo: homologies and sequence changes are “observed” (I have no problems with that, indeed that’s a very useful gathering of facts); and then, the most surprising theoretical models are hypothesized, and often the only substance in the hypothesis is that it fits the preexisting theory.

    That leap is evident in the last sentences of the abstract, which follows what is just a simple enunciation of very simple facts:

    “Researchers hypothesized that the gene is responsible for orofacial movements. They hypothesize that the gene may be responsible for the expansion of modern humans”.

    Obviously, I should read the whole article, but I don’t know if I can get access. We’ll see.

    In the meantime, I’ll tell you what I think about hox genes and similar, which are indeed the only solid piece of evidence evo-devo is based upon.

    Hox genes are certainly interesting and important. They are final effectors in very important regulation procedures. They probably act as transcription regulators, and it is obvious that the very complex, and still largely not understood, network of transcription factors is the effector system through which all nuclear regulations are obtained.

    Still, discovering an effector molecule does not in any way mean that we understand the “regulation” behind it. I cite here a simple sentence from an online article about hox genes, just to start the discussion:

    “But, one must again ask a question: if all animals utilise this common conserved mechanism with the same or similar genes for development, why don’t all animals look exactly alike?

    The key determining factors are (1) concentration ; (2) location ; (3) timing ; and (4) target gene specificity” (Gareth Brady)

    Hox genes are effectors involved in spatial control of the body plan. That does not mean that they “realize” spatial control. Rather, the information network which controls body plans very intelligently utilizes hox genes (and probably a lot of other effector tools) to realize its programs, finely tuning their “(1) concentration ; (2) location ; (3) timing ; and (4) target gene specificity” to attain specific results. Again, we don’t know where the information is. The simple fact that alterations in the final effectors bring gross alterations in the final result does not imply that the effector is sufficient for the result: it just shows that it is necessary.

    That logical trick, of implicitly passing off necessity as sufficiency, is uniformly widespread in all darwinist thought. It is a logical trick, and nothing else. We don’t know how the information controlling body plans works. We just know that tampering with the final pointers obviously produces gross changes in the final result. In the same way, if you change the value of a single important system variable in a complex piece of software (let’s say an operating system, let’s say Windows XP), the results can really be devastating, but you are not authorized to say that, because you understand the role of that single variable, you understand the whole software behind it, or, even worse, that such software does not exist.

  184. H’mm:

    Some very interesting and revealing comments overnight.

    On a few select points:

    1] Jerry, 166: this discussion is about the modern synthesis and what may replace it and has nothing to do with OOL. We all understand the OOL problem more or less but the issue here in this thread is the mechanism for change of multicellular life.

    I must respectfully beg to disagree.

    The critical issue that underpins both is the mechanism that most cogently explains the origin of bio-functionally specified, complex information. And the failure of the evolutionary materialist cascade at OOL, as DLH pointed out in 167, is a major crack in the foundation of NDT and other similar materialistic [non-agent, chance + necessity only] theories of origins:

    OOL is Darwinism’s Achilles heel . . . Without abiogenesis, Darwinism has no foundation to stand on.

    2] this is not a site for Evangelicals to press their religious views as science.

    Excuse me, but that sounds rather strawmannish.

    There is, first, an issue of genetic entropy on the table (including discussion of credible mutation rates), which is about deterioration of the genome through corruption by noise. Someone has noted that one may put the deterioration of lifespan reported in biblical accounts on a plot that does in fact fit that pattern. Interesting, but not primary.

    Perhaps you mean to speak to my PS at 165 (and observe that this is not a major point, but it is relevant enough to note) on how evolutionary materialistic, rationalist and secularist assumptions and assertions have distorted much of theology and archeology in recent decades.

    That is an unfortunate fact, as you could easily see by following up the link. Indeed, the article on the discovery of David’s palace, linked by an earlier commenter, underscores the point. So, an informed reader should beware of such biases in reading the assertions of those who rely on that scholarship.

    3] Where I live fundamentalist Christians especially YEC’s are very suspect. And I am being kind with that description of people’s attitudes toward their beliefs. Now they have the right to say “we don’t care” but I am talking about decent religious people who hold these views

    Let’s see some of why, citing a recent Moderator for the United Church of Jamaica and Grand Cayman, written but a few short weeks after the 9/11 terrorist attacks:

    The human tragedy in USA has also served to bring into sharp focus the use of terror by religious fanatics/fundamentalists. Fundamentalism or fundamentalists are terms that are applicable to every extreme conservative in every religious system . . . . During the twentieth century in particular we have seen the rise of militant expression of these faiths by extreme conservatives who have sought to respond to what they identify as ‘liberal’ revisions that have weakened the fundamentals of their faith . . . They opt for a belligerent, militant and separatist posture in their public discourse that can easily employ violence to achieve their goals. [Gleaner, Sept. 26, 2001]

    One does not pander to such prejudice, which would tar rationalism- and evolutionary-materialism-rejecting, Bible-believing Christians with a smear-word that then licenses utterly unwarranted and even slanderous inferences and the dismissal of all they have to say. On the contrary, one identifies, exposes and corrects such bigotry and slander.

    4] what about “Evangelicals . . . press[ing] their religious views as science”?

    Sometimes that happens, and it is a category confusion to present theology as science.

    So, while some may argue that we have sufficient evidence to hold, say, Genesis as a record of history to be explained rather than dismissed by science, that is sufficiently in dispute that one would beg the question at stake to try that. [Cf how, say, Paul reasoned when he came to Athens in Ac 17, relative to the ideas and facts that were a commonplace to that culture. He then pointed out the critical instability in the foundation of the worldview of the day, and from that argued for a fair-minded consideration of alternatives. From too many, he got only a closed-minded dismissal.]

    But equally category confusion goes the other way, and in much more damaging ways: for instance, consider the attempt to redefine science as in effect the best materialistic explanation of the cosmos from hydrogen to humans.

    That is philosophical question-begging, not science.

    5] Allen, 172: Kairosfocus wrote: “…mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.”

    First and foremost, this is an interesting mis-perception of the status of the CITED remarks.

    For, it in fact excerpts Meyer’s peer-reviewed, closing summary of the comments by McDonald, a researcher writing in the peer-reviewed literature:

    McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.

    Other commenters have already addressed the basic on-the-merits problem with the attempted rebuttal. I excerpt, for instance, GP at 184:

    the abstract you [Allen] posted shows the typical methodology of evo-devo: homologies and sequence changes are “observed” (I have no problems with that, indeed that’s a very useful gathering of facts); and then, the most surprising theoretical models are hypothesized, and often the only substance in the hypothesis is that it fits the preexisting theory.

    I will note in addition that my [and Meyer's] context also clearly identifies that by body plan divergences I was speaking in the main to phylum- and sub-phylum-level differentiation in the context of the fossil record’s Cambrian life revolution. What mechanism is capable of accounting for the hundreds of megabytes of additional DNA, dozens of times over, within 5 – 10 MY on the usual timeline and on this one small planet, to move to these phyla and sub-phyla by chance + necessity only, without exhausting probabilistic resources?

    Of that – the issue in the main – we find nowhere the faintest trace.

    6] 173, Sean Carroll’s book, Endless Forms Most Beautiful contains a concise and lucid description of precisely this process, supported by a growing mountain of empirical data.

    Again, a poorly founded dismissal of the underlying issue: where does the biofunctional information in DNA and associated systems and structures come from at origin?

    I note for instance Jerry at 181:

    . . . what are the documented cases of variation creation by these 47 methods and the species creation they led to. I realize all may have happened but what has been the pay out. Have there been any instances of novelty creation by any of these processes? Which of these processes would lead to bats and their sonar, birds and their wings, giraffes and their special blood pressure system, birds and their special oxygen delivery system, mammals and their four chambered hearts and warm bloodiness, humans and their long childhood development.

    I have read Sean Carroll’s book and did not find anything that pointed to any source of variation that would lead to the complexity we see.

    Absent specific, well-documented evidence specifying a dynamical cause-effect chain [preferably mathematical, but logical will do], we remain skeptical, for good reason, of claimed mountains of evidence. And remember, I am principally asking about the difference between, say, a starfish and a trilobite or a turtle, not the relatively minor differences between, say, humans and chimps.

    [Even those differences run into probabilistic-resources problems, so we need to see sufficient detail that we can see, dynamically -- not in a just-so ad hoc story -- how chance + necessity account for the scale of difference. 2% (or whatever estimate you wish) of 3 bn base pairs is 60 mn base pairs, or about 120 Mbits, which would have to be accounted for within 10 MY, and on earth.]
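
    The bracketed back-of-envelope arithmetic above can be checked in a few lines. A minimal sketch, using only the comment’s own assumed figures (3 bn base pairs, a ~2% difference, 2 bits per base); no measured data is introduced here:

```python
# Sanity-check of the figures quoted above; all inputs are the
# comment's assumptions, not data introduced here.
genome_bp = 3_000_000_000    # ~3 bn base pairs
divergence = 0.02            # the ~2% human-vs-chimp estimate used above
bits_per_bp = 2              # 4 possible bases -> log2(4) = 2 bits each

diff_bp = int(genome_bp * divergence)        # differing base pairs
diff_mbits = diff_bp * bits_per_bp / 1e6     # same quantity in megabits

print(f"{diff_bp} bp ≈ {diff_mbits:.0f} Mbits")  # 60000000 bp ≈ 120 Mbits
```

    This reproduces the 60 mn base pairs / ~120 Mbits figure exactly as stated in the comment.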

    7] My own position on this problem [OOL] is that, given the immensely long period of time that has elapsed since the origin of life, the rocks that were formed during this period no longer exist at the Earth’s surface (they are either buried so deeply as to be inaccessible, or have been destroyed by tectonic subduction). Furthermore, molecules do not fossilize, and so speculation about the chemical origins of life will always remain precisely that: speculation, unsupported by direct empirical evidence.

    That is tantamount to saying that yours is not a scientific, evidence-controlled view but a faith-commitment. Such is your right, but you and others then have no right to pass this or similar views off as the only credible and “scientific” view.

    By sharpest contrast, it IS an empirical observation that FSCI, in all cases where we directly know the causal story, is the product of agency.

    I am thus well-warranted to hold that in all cases, absent a convincing reason to see otherwise, FSCI is the product of agency, given the statistical-thermodynamics issues attached to finding intricate functional configurations in config spaces of well beyond 10^300 cells. Given the complexity of life at the cellular level, inference to design is an empirically anchored alternative, and thus a superior and credibly scientific explanation of OOL.

    The same basic point obtains for things that DO find themselves in the fossil record, i.e. body-plan level biodiversity.

    8] the question of the origin of life has virtually no bearing on the origin of phenotypic variation or mechanisms of descent with modification, both of which are the core of evolutionary biology

    That is a matter of how the debate has been framed. Once we look at the underlying crucial issue, the origin of functionally specified complex information, there is very direct relevance, as the two are instances of the same problem.

    Indeed, GP is tellingly apt in 183:

    I understand that your position about the OOL question is the only possible if one wants not to address the question of design. That’s formally acceptable, but totally wrong in the wider context of trying to understand if our scientific models of reality are supported by reality itself, which after all is the main purpose of science. So, your position is tenable if you restrict your aim to what you call “evolutionary biology”, but that means only that you are artificially separating evolutionary biology from science itself . . . .

    where is the relevance of all that to darwinism?

    It’s very simple. Even if the majority of the scientific community insists to remain obstinately blind to that fact, it is a fact that there exists a model which can offer a perfectly rational and scientifically sound scenario to start explaining both OOL and evolution (and, probably, many other things). That model is called ID, it has been proposed and carried on by perfectly serious, intelligent and reliable scientists like Dembski, Behe and others, and is there for anyone unprejudiced to consider . . . .

    OOL “is” absolutely relevant to darwinism, unless one tries to artificially restrict the scientific thought to categories which do not communicate. It is relevant, because we have one theory, and only one, which can in principle explain both, and no single other theory which can explain each of them, even separately.

    Okay, let’s discuss onward.

    GEM of TKI

  185. Allen et al.
    In keeping with the initial thread, I would like to learn more about the hard data of what modern biology/biochemistry/genomics/proteomics etc. have been finding that would be the reasons for gathering the Altenberg 16 together to formulate the next “modern synthesis” equivalent.

    Similarly Allen stated:

    Rather, they absolutely require the presence of membranes for their function, and so until such membranes are constructed (either spontaneously in the OOL, or artificially in the laboratory), the “creation” of life that relies on such assemblies for their energy is quite literally impossible.

    What other features of self reproducing cells do you see as essential?

    Then we can examine how successful various theories are in explaining the data on origin of self reproducing life and subsequent development of observed biochemical complexity, and where they need to be altered or extended as the Altenberg 16 are apparently considering.

  186. Allen

    Once again, virtually none of the theories of evolutionary biology depend in any way on the resolution to the question of the origin of life.

    Alrighty then. So you don’t have a problem with the first life being the created forms of Adam and Eve and every other living thing in the Garden of Eden, with evolution proceeding from there, mostly as devolution from originally perfect forms.

    If you do have a problem with that, then obviously you have some commitment to some other OOL story, and thus your statement that biology has no vested interest in OOL is false. The entire neo-Darwinian story is built around simpler forms becoming increasingly complex over time. It is absurd to draw a line of demarcation at the point where the basic machinery of life that enables free-living cells to exist arose, and to say the story has no commitments before that point. This is handily demonstrated by the fact that any hypothesis wherein the first cells contained all the complexity that exists today, with evolution being the story of how that complexity unfolded in a prescribed sequence, is roundly rejected because it doesn’t fit the simple-to-complex model that underpins neo-Darwinism.

  187. kairosfocus asked:

    “…where does the biofunctional information in DNA and associated systems and structures come from at origin?”

    The answer is simple: nobody knows. There is no direct or indirect empirical evidence either way, and as I have argued above, there seems to be little or no prospect of such evidence becoming available. The best we will ever have is (perhaps) some laboratory models that suggest how it might have happened. To me, this isn’t empirical science, according to the standards that I have learned and to which most scientists adhere.

    For the very same reason, I consider any ID argument based on analogy (e.g. “it looks designed, ergo it must be designed”) to be entirely without logical foundation. Science works via inductive reasoning, not argument by analogy, and therefore until someone publishes empirical results that clearly support a prediction flowing from an ID hypothesis, ID is not science, but speculation.

    And, to anticipate the usual objections, check out any recent issue of any of the myriad journals on evolutionary biology for examples of precisely the kind of empirical research to which I am referring. None of the articles published on ID in refereed journals contains original empirical research. Not one. Ergo, they are not science, but speculation, no different in logical force from the various hypotheses for the spontaneous origin of life from non-living material.

  188. kairosfocus wrote:

    “I was speaking in the main to phylum and sub-phylum level differentiation in the context of the fossil record’s Cambrian life revolution.”

    Once again we are discussing subjects about which there is not (and almost certainly will never be) empirical evidence: that is, the genetic regulation of the body plans exhibited in the Burgess shale and other Cambrian fossils. Homeotic genes do not fossilize, ergo we will never have direct empirical evidence of what genetic regulatory processes may have led to the appearance of the various body plans that appear in the fossil record. The best we can do is to argue by analogy to those processes we can investigate today using empirical methods.

    Evolutionary developmental biologists have discovered an immense amount about how homeotic gene regulatory mechanisms produce both the body plans we observe, and changes in those body plans as the result of the 47 mechanisms of phenotypic variation that I have listed at my blog. ID “scientists” have produced no empirically verifiable alternative explanations. Until they do, their speculation is not science, as it is not based on observable empirical evidence, but rather metaphysical speculation.

    Science is entirely about the formulation of testable hypotheses, followed by the formulation of testable predictions on the basis of those hypotheses, followed by experimental tests of those predictions, followed by statistical analysis of the results of such tests, and the publication of such results and the discussion of their relevance to the original hypothesis. Cite for me one example of this method being applied to an ID hypothesis. Until you can do this, I assert that what you are talking about is not science.

  189. DLH wrote:

    “In keeping with the initial thread, I would like to learn more about the hard data of what modern biology/biochemistry/genomics/proteomics etc. have been finding that would be the reasons for gathering the Altenberg 16 together to formulate the next “modern synthesis” equivalent.”

    I have already suggested reading Jablonka and Lamb’s new book Evolution in Four Dimensions, to which I would add Elliott Sober and David Sloan Wilson’s book Unto Others: The Evolution and Psychology of Unselfish Behavior and Lynn Margulis’s book Acquiring Genomes (coauthored with her son, Dorion Sagan). I am myself writing a new evolutionary biology textbook on the subject (now tentatively entitled Evolution: The Continuing Revolution), but that won’t be out for at least two years. Until then, you will have to follow up on what’s happening in the field, which means perusing the pages of such journals as Evolution and Quarterly Review of Biology, available at any college or university library (or online, but usually for an exorbitant fee).

  190. DLH asked:

    “What other features of self reproducing cells do you see as essential?”

    As I tell my students (usually during the first week of lectures of introductory biology), a living cell needs a bare minimum of three structural/functional features:

    1) A selectively permeable plasma membrane, enclosing a quantity of cytosol that includes all of the materials necessary to assemble and operate the cell;

    2) at least one DNA molecule, containing the genetic information by means of which the various subassemblies of the cell are assembled and operated; and

    3) several ribosomes, by means of which the genetic information carried in the DNA can be translated into those various subassemblies.

    Dr MacNeill:

    First, I appreciate your willingness to acknowledge that there is a major empirical data gap relative to the usual evolutionary materialist origins scenarios.

    Second, in re your: I consider any ID argument based on analogy (e.g. “it looks designed, ergo it must be designed”) to be entirely without logical foundation. Science works via inductive reasoning, not argument by analogy

    I have just a moment.

    I note to you that I, like many others here at UD, work on a routine basis with digital data strings of enormous complexity, isolated in configuration spaces. When I observe that, e.g., DNA is such a digital string of isolated functional information, I am observing an empirical fact. [That one may code digitally using monomers is not in principle different from using alphanumeric symbols, or magnetic states, or currents or voltages, etc. Indeed, in my always-linked, for purposes of illustration, I suggest a digital system based on the pips on the faces of strings of dice.]

    This is a fact long since noted, for instance by Francis Crick in his March 19, 1953 letter to his son, Michael:

    “Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another).”

    When I therefore apply basic analyses of phase/configuration space to DNA, I am not reasoning by mere analogy [which, BTW, is a form of inductive reasoning -- a category of reasoning that, per Lord Russell's Inductive Turkey (who made a bad mistake about being fed every morning at 9 am, one certain Christmas Eve), is always in principle defeatable, the issue being how many material points of comparison obtain]; I am addressing a known, observed characteristic of digital data.

    And I am highly confident that a config space corresponding to 300 – 500,000, much less 3 – 4 bn, cells will render islands of functionality incredibly isolated relative to the power of random-walk-based state space searches, however reinforced by hill-climbing algorithms.
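
    The scale of such configuration spaces is easy to make concrete. A minimal sketch (the function name and the 500-base and 100-dice examples are illustrative choices made here, not figures from the comment; the 500,000 figure is the one cited above). Working in log10 avoids computing the astronomically large counts directly:

```python
import math

# log10 of the number of distinct configurations of a digital string:
# alphabet_size ** length, e.g. 4 states per DNA base, 6 pips per die.
def config_space_log10(alphabet_size: int, length: int) -> float:
    return length * math.log10(alphabet_size)

print(config_space_log10(4, 500))      # ~301: a mere 500-base string already passes 10^300
print(config_space_log10(4, 500_000))  # ~301030: the 500,000-base case is vastly larger
print(config_space_log10(6, 100))      # ~78: even 100 dice give roughly 10^78 configurations
```

    The point the computation illustrates is only the raw size of the space; how sparsely functional configurations are scattered within it is a separate empirical question.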

    I know that intelligent agents routinely go to such islands of functionality based on understanding of functional requisites; but that is a wholly different order of causation from chance + necessity only.

    That is why the OOL issue is so important, and it is why the origin of major body plans is so important, as they are in material part characterised by increments of functionally specified, complex digital information that on the gamut of our observable cosmos are comfortably well beyond the reach of chance + necessity only.

    GEM of TKI

  192. PS: This discussion and this one will help us clarify the strengths and limitations of reasoning by analogy.

  193. DLH:

    “What other features of self reproducing cells do you see as essential?”

    Here is my input:

    For life to be present, a lot of extraordinarily unlikely features have to be present at the same time, and perfectly associated, in any living structure. I would like to stress that, in my opinion, we should bring the emphasis back to the concept of “life”, however difficult it may be, and not only to the concept of “self-reproduction”. The absolute dominance of self-reproduction over life is one of the sad consequences of darwinist ideology. Computer programs can self-reproduce, but they are not alive. Life is the real deal. Self-reproduction is important because it maintains life. Even the concept of evolution is less important. Life is important in itself, independently of its potential to evolve. Even if life had remained limited to bacteria and archaea, it would still be astonishing, and would deserve to be understood and admired.

    But let’s get back to the “essentials”:

    1) The membrane. I agree, that’s fundamental. But it is important to consider that the membrane is not important in itself, as a physical structure. The membrane, in living beings, is essentially a very complex, dynamic and functional separation. Its purpose is to separate the outside from the inside, so that the inside can be different from the outside. It is an active organ, which builds an incredible difference between the microcosm of the cytoplasm and the macrocosm of the external environment, usually a fluid. One of the main activities of the cell membrane, in most living cells, is to extrude Na, exchanging it with K (the sodium pump). That creates a stunning diversity between intracellular fluid and extracellular fluid, using a lot of the cell’s energy for that purpose. So, the inner life of the cytoplasm can go on in a completely “artificial” environment, intelligently created at great expense of energy (see below).

    2) The negative entropy. However one thinks about the second law and biology (one of the hottest topics here at UD, it seems), it cannot be denied that living cells are concentrated islands of extreme negative entropy. Indeed, they are so improbable that it is really a miracle that they exist. Without addressing here the fundamental problem of information in relation to the second law, recently discussed here at UD, it is important to remember that anyway such a high state of negative entropy is achieved only through a huge expenditure of energy, which takes us to the next point.

    3) The energy production. All living cells have to produce and utilize an incredible and continuous quantity of energy to ensure the maintenance of the negative entropy, of which the difference between inside and outside is a very good example. Although I am no expert in comparative biology, I believe that most living beings derive such energy by one of two basic methods:

    a) Photosynthesis (energy from the sun)
    b) Degradation of organic molecules from other living beings (chemical energy)

    It is important to notice that the flow of energy has to be constant: living systems cannot tolerate even momentary interruptions of that flow, which often has to be maintained through storage systems (glycogen, fats). Moreover, the almost ubiquitous ATP synthase, well known here at UD, provides an elegant way to transform and transfer energy through the various systems.

    4) Systems far from equilibrium. Living beings are the most extreme systems far from equilibrium. Those systems are extremely difficult to model from a mathematical point of view. That’s why, if anybody affirms that he can explain and model deterministically what happens in a living being, he is simply lying. That’s why any assumption that living beings strictly obey known physical laws, and only those laws, is indeed an assumption, and cannot be verified experimentally. In other words, we don’t really know how living systems work from a physical perspective. Almost everything is still to be discovered. Biophysics is still in its cradle, and a better understanding and modeling of systems far from equilibrium is the first step to go beyond the gross approximation of biochemistry.

    5) Procedural information. More attention should be given to the problem of how cells manage the static information stored in their genomes. In other words, attention has been given up to now mainly to the gross effector information (protein sequences), and not to the procedures necessary to activate, measure, control, verify and inhibit that information in ordered sequences and models. But those procedures have to be there, because no life is possible without them. Where are they? How are they implemented?

  194. Allen_MacNeill:

    “Science is entirely about the formulation of testable hypotheses, followed by the formulation of testable predictions on the basis of those hypotheses, followed by experimental tests of those predictions, followed by statistical analysis of the results of such tests, and the publication of such results and the discussion of their relevance to the original hypothesis. Cite for me one example of this method being applied to an ID hypothesis. Until you can do this, I assert that what you are talking about is not science.”

    Again, I have to disagree about your epistemological views. Very briefly:

    The first, and main, activity of science is to create models which can explain the known facts. Obviously, another important activity is to gather facts so that models can be proposed for them. Finally, a third important activity is to gather new facts which may or may not support existing models.

    In other words, experimental science gathers facts, either generically (any new fact is useful), or specifically with the intent to test a model.

    On the contrary, theoretical science is busy working with models: creating them, adapting them, refuting them. Theoretical science utilizes facts, but utilizes them creatively. All the pertinent facts were known when Einstein created his theory of relativity, but nobody had yet interpreted them the way he did.

    The predictions, so much and so often extolled as if they were the only mark of science, are indeed a corollary, important and useful, but not always available. The theory of relativity shook the scientific world “before” the first experiment could support it. Why? Because it explained known facts better than the classical theories. Quantum mechanics was created because scientists had to explain black-body radiation, which was a fact well known in advance, and not a prediction.

    Predictions are very important when experimental confirmation is available. Einstein and Bohr debated for years over questions that would only become testable experiments decades later.

    What I mean is:

    Darwinian theory and ID are two theoretical paradigms which try to explain known facts: the emergence of life and its “evolution”. Both make predictions, but many of those predictions cannot be confirmed directly by experiments, because they refer to facts which happened in the distant past. Anyway, as our knowledge of the present world improves, it is likely that new facts may confirm or disconfirm both theories. Indeed, as we have often affirmed here, much of the new knowledge in biology definitely supports the ID scenario, starting from all the new acquisitions about the genome (non-coding DNA, etc.) and passing through each new layer of complexity which is discovered, and which cannot be explained by Darwinian theory.

    About experimentation, I will repeat here what I have often said on this blog: there is no reason that experiments have to be carried out by ID biologists (they are so few, and they have no resources). Experiments can well be carried out by evolutionary biologists, and still support ID, even if the researchers who performed them think differently. Experiments, as I have already said, give us facts, and facts belong to everybody. Scientific works give us both facts and interpretations. We can take the first, and refute the second.

    Darwinism and ID are general theories, general scenarios. They are, indeed, alternative scenarios. Therefore, any new fact is, inevitably, in favour of one or the other.

    The great superiority of ID as a general scenario becomes ever more obvious with each new biological discovery. But, if you are not available to really discuss the ID arguments for their intrinsic value, we can have no real confrontation on the things that really matter.

  195. gpuccio at 195
    Excellent observations on how science has operated in the past in fitting better models to existing facts.

    gpuccio at 194
    Thanks for the input on critical features.

    Recommend using “Complex Specified Information” rather than “negative entropy”. (Entropy increases from zero.)

    “That’s why if anybody affirms that he can explain and model deterministically what happens in a living being, he is simply lying.”

    Without evidence of conscious moral failure, let’s be charitable and suggest they are “uninformed” or “deluded” rather than “lying” or “wicked”.

    {DLH PS See kairosfocus at 204 explaining the origin of “negative entropy” and a proper explanation of the “negative.” }

  196. 1) A selectively permeable plasma membrane, enclosing a quantity of cytosol that includes all of the materials necessary to assemble and operate the cell;

    2) at least one DNA molecule, containing the genetic information by means of which the various subassemblies of the cell are assembled and operated; and

    3) several ribosomes, by means of which the genetic information carried in the DNA can be translated into those various subassemblies.–Allen MacNeill

    Where do you think that genetic information comes from?

    How do you think it is encoded onto the DNA?

    From my ID PoV I see DNA as a disk- that is the coding side of the DNA is encoded with the information much like a computer system’s hard-drive is encoded. That data is used as needed and directed both internally and externally.

    And right before reproducing the information on the coding side is parallel loaded to the template side.

    Once complete that information is transferred to the newly created coding side.

    But how does the information get there in the first place? I think that is a universal mystery.

    One more question- Given what you posted about those requirements what is your position on non-telic abiogenesis?

  197. Science works via inductive reasoning, not argument by analogy, and therefore until someone publishes empirical results that clearly support a prediction flowing from an ID hypothesis, ID is not science, but speculation.

    Does the non-telic position even offer a hypothesis?

    Can we see it?

    Has someone published empirical results that clearly demonstrate that non-telic processes can cobble together all the parts required to meet minimal self-reproduction?

    Has someone published empirical results that clearly demonstrate that non-telic processes can account for the bacterial flagellum?

    Has someone published empirical results that clearly demonstrate that non-telic processes can account for the physiological and anatomical differences observed between chimps and humans?

    I would say if the standards are EQUALLY applied something has to give.

  198. Dr. MacNeill,

    you said

    “Science is entirely about the formulation of testable hypotheses, followed by the formulation of testable predictions on the basis of those hypotheses, followed by experimental tests of those predictions, followed by statistical analysis of the results of such tests, and the publication of such results and the discussion of their relevance to the original hypothesis. Cite for me one example of this method being applied to an ID hypothesis. Until you can do this, I assert that what you are talking about is not science.”

    I maintain that thousands of ID studies are done every year. They are just not identified as such. For example, much of the work done on malaria is ID research. Lenski’s research at Michigan State on bacteria is ID research. All the work that is mapping the genomes of various species is ID research.

    Of course this may sound absurd but just because a study does not have ID as its stated objectives does not mean that it isn’t ID research.

    Now, what is the basis of my claims? It is the work of Behe in his book “The Edge of Evolution.” In it he claims that naturalistic methods with large numbers of reproductive events, billions x billions of events, will not produce novelty in the genome. That is a hypothesis that flows from the ID assumption that naturalistic means cannot produce novelty. He defines novelty, very conservatively, in a couple of different ways.

    So far the research for uni-celled organisms has supported this proposition. Studies of prokaryotes, single celled eukaryotes and viruses (not cellular but highly reproductive) have all supported this hypothesis. Each study of a genome of a bird, fish, mammal, insect also has the potential to falsify or support this hypothesis.
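
    The “billions x billions” framing above is, at bottom, an exercise in multiplying improbabilities against a count of trials. Here is a purely illustrative sketch of that arithmetic; the rates below are placeholder assumptions of mine, not Behe’s published figures:

```python
# Illustrative only: placeholder rates, not Behe's published figures.
single_mut_rate = 1e-10            # assumed chance of one specific point mutation per replication
double_mut_rate = single_mut_rate ** 2   # two specific mutations needed together: ~1e-20

replication_events = 1e20          # roughly "billions x billions" of reproductive events

expected_occurrences = double_mut_rate * replication_events
print(expected_occurrences)        # on these assumptions, about 1.0 -- right at the "edge"
```

    The point of the sketch is only this: whether such a coordinated event is expected to occur even once depends entirely on how the assumed per-event probability compares with the total number of trials, which is why sequencing more genomes bears directly on the hypothesis.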

    Thus, each study which is sequencing the various species of an order, family or genera has the potential to support or falsify this hypothesis. These studies also have the potential of supporting or falsifying the viability of your 47 engines of variation.

    So ID is on the line at this very moment in thousands of research studies all over the world. If genome after genome comes back and does not indicate any additions to the gene pool or any demonstrative change in complexity, then one of ID’s basic assumptions is supported.

    When all the canine species, cichlid species, feline species, etc. have been compared and analyzed for differences, then one can make a conclusion as to whether this ID hypothesis is valid or not. This will never be done by researchers with an ID objective because it would never be approved. But ID will have access to the results to see if all this research validates Behe’s claims.

  199. DLH:

    “Without evidence of conscious moral failure, lets be charitable and suggest they are “uninformed” or “deluded” rather than “lying” or “wicked””

    OK, you’re right, I apologize. I got carried away. Maybe it’s the bad influence Dawkins has on my personality!

    Anyway, “wicked” has some fascination, hasn’t it? :-)

    If one is trying to define life, then one is in the good company of those who have failed. In Noam Lahav’s book titled “Biogenesis” he lists 48 different definitions by experts in the field, and no two are consistent with each other. Robert Hazen, who is active in OOL research, has assessed them all and has essentially said there is no good definition of life.

    A biology book may offer a definition of life but then again it may not. Here are a few:

    Purves et al 2004 – an organized genetic unit capable of metabolism, reproduction and evolution.

    Miller and Levine – no definition but characteristics: made of cells, grows, obtains and uses energy, responds to environment, can reproduce

    Campbell and Reece – 6th edition – declines to give a definition.

    John Maynard Smith said life was “any population of entities which has the properties of multiplication, heredity and variation”

    NASA defines it as

    “a self-sustained chemical system capable of undergoing Darwinian evolution”

    Wikipedia defines it as

    “Life is a condition that distinguishes organisms from inorganic objects, i.e. non-life, and dead organisms, being manifested by growth through metabolism, reproduction, and the power of adaptation to environment through changes originating internally.”

    Be the first to define life. What are the necessary conditions to define life?

  201. on “life”
    Life appears to be regulated and that regulation is critically important.
    Regulation in turn requires sensing, feedback, control and amplification.

    Regulation is a central factor in engineering and consequentially a natural expectation from ID theory.

  202. Allen_MacNeill at 189 and 190

    Stephen C. Meyer compiled an excellent review of the data and models in:
    Intelligent Design: The Origin of Biological Information and the Higher Taxonomic Categories
    Proceedings of the Biological Society of Washington, May 18, 2007 (which by the way was peer reviewed by four credentialed reviewers)

    From his specialty of history of science, Meyer addresses the issues of what is / is not science in:
    The Scientific Status of Intelligent Design: The Methodological Equivalence of Naturalistic and Non-Naturalistic Origins Theories;
    Science and Evidence for Design in the Universe (Ignatius Press) November 13, 2005.

    If you wish to comment on definitions of science and whether evolution and/or ID is science, please address Meyer’s arguments.

  203. Okay:

    First, GP, a brilliant job in 194, in response to nos. 188 – 189.

    On a few points of note:

    1] Science is . . .

    Science is indeed in large part about inference to best current explanation, and retroductive, unifying explanation of diverse phenomena is as important and often at least as powerful as prediction.

    Some would indeed argue that prediction is a subset of such empirical explanation, i.e. providing a unifying construct that points to as-yet non-instantiated empirical data. That is, the logic in basic form has the following structure, where T = theory, O = observation of fact, P = prediction of a not-yet-observed fact:

    T –> {O1, O2, . . . On} AND {P1, P2, . . . Pm},

    where the marker between O’s and P’s is set temporally and sometimes financially. [Recall here the unbuilt super-collider that was going to be the lifetime employment programme for a lot of physicists . . . pardon my hints of cynicism.]

    However, there is a further factor, as — as GP hints at — domains in science interact.

    Namely, there are also points where theories have bridges (B) to other domains in science and associated bodies of accepted theory. Thus, we extend the basic model:

    T –> {O1, O2, . . . On} AND {P1, P2, . . . Pm} AND {B1, B2, . . . Bk}

    The classic current case in point would be quantum physics, which unifies a very large cluster of domains across several entire fields of science and associated technologies, brilliantly. Never mind its own gaping inner challenges.

    Now, too, let us observe: when a bridge to another established domain in science opens up, all at once there is the major potential for cross-checks across entire domains.

    Thus, the opening of a bridge is fraught with potential for confirmation and disconfirmation, as all at once whole new domains of fact and associated theories are exposed to mutual cross-examination. If there is mutual coherence and support, then the underlying constructs in both domains gain a greatly enhanced weight of credence. [For instance, think here on the import of key bridging concepts such as atoms, energy, particles such as electrons, the wave concept, and now information.] But, on the other hand, where there is incoherence, we then have to look at the weights of the relevant alternative explanations and come to conclusions on where the changes need to be made.

    That is a major reason why I take the design inference seriously, as the progress of molecular scale biology over the past 60 or so years has revealed elements of a complex, in part digitally based information system at the core of cell based life. Onward, that bridges to an even more established domain of science, thermodynamics. One may deny the bridges but they plainly are there and it boils down to this: the current dominant chance + necessity only paradigm in biology is deeply challenged to account coherently for the information systems and content at the core of cell based life.

    Now, there is an alternative paradigm, design, that can. But it is controversial as it cuts across major worldview level commitments of many leading practitioners in the sciences. So, we now see a major political dust-up taking place, across entire domains of science and also in the education system and wider culture, where key dominant elites have embedded in key elements of the evolutionary materialist paradigm in their worldviews and life/culture agendas.

    Also, while I would not go so far as to say that life inherently and inevitably has such a digital information system at its core, it certainly is relevant to observed bio-physical, cell-based life.

    That brings up my own thoughts on the issue of life . . .

    2] On Life

    I agree with those who point out that we do not know necessary and sufficient conditions to define life, nor can we find an agreed genus and differentia framework that absorbs all accepted cases without serious exception. That leaves us with family resemblance to commonly accepted cases, and notes on oddities that stick out.

    I even seem to recall a Sci Am article from about 15 years ago on how there is some sort of seaweed that does not seem to have cells in the conventional sense. And of course, I am not convinced that we may properly restrict the phenomena or expressions of life to the strictly biophysical. For instance, is mind an expression of life? If so, it has very unusual properties and may point beyond the simply biophysical. Recall too that attempts to account for mind on biophysically based chance + necessity only founder on the shoals of self-referential incoherence.

    Having noted that, GP has raised several key considerations that have interesting bridges: life as observed embeds serious energy-flow constraints and associated information systems and structures that exploit some very clever chemistry, polymer science and physics, etc.

    3] The bridge from life to information and thermodynamics . . .

    The above brings to bear all that we know about information and communication systems, the associated issues of information generation and the implications of noise, and, onward, thermodynamics considerations as they affect information issues. It also explains why so many information science and/or technology practitioners are engaged on the ID side at this blog: they are practically experienced in what is now opening up, a bridge from biology to information and even [statistical] thermodynamics issues.

    That poses a major empirical test of the soundness of biology theories, i.e. coherence across such a bridge to other domains of science, and the classical NDT-based thought on origins (including extensions to OOL) is not faring well at it. So, the bioscience establishment now finds itself seriously challenged to effectively address links from their major biological models to information science and associated onward links to thermodynamics considerations.

    (BTW, “negentropy” is due to Brillouin, who defined a form of information metric by observing that –k ln w, i.e. the negative of the Boltzmann entropy [S = k ln W], has the properties of a measure of information. Since the underlying metric is logarithmic, taking the negative has the effect of taking the mathematical reciprocal of the argument, as opposed to producing a negative value of entropy as such; fractional numbers have negative logarithms. He was building on earlier work on Maxwell’s demon. Thaxton et al. use this in their foundational ID discussion, TMLO chs. 7–9. This can be found in excerpt in my always-linked, appendix 1. I link TMLO ch. 8 here.)
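
    Brillouin’s observation can be written out explicitly (standard statistical-mechanics notation; nothing here is specific to TMLO):

```latex
S = k \ln W                          % Boltzmann entropy; W = number of microstates
I = -k \ln w = k \ln \frac{1}{w}     % Brillouin's information measure, with 0 < w \le 1
```

    Since w is a probability-like fraction, \ln w \le 0, and so I = -k \ln w \ge 0: the minus sign turns the logarithm’s argument into its reciprocal, yielding a positive measure of information rather than a literally negative entropy.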

    Okay again . . .

    GEM of TKI

  204. Off Topic
    gpuccio at 200

    Anyway, “wicked” has some fascination, hasn’t it?

    A deadly fascination – frequently entrapping and destroying those who venture close – especially those without authority over it.
    Be warned.

  205. For further discussion on how the Origin Of Life (OOL) is the “Achilles heel” of neo-Darwinism, see:
    Does neo-Darwinian theory include the origin Of Life?, particularly DLH #88

    Some consider OOL “part of” neo-Darwinian “modern” evolution, others insist that it is separate. . .

  206. Dave Scott at 112, 129, 133, 145 and Allen MacNeill at 102, 114, 143
    bFast at 118, 121, 136
    Here is a response from John Sanford:

    “Regarding Allen’s comments:

    1. The estimates of mutation rates in the literature are AFTER repair enzyme activity (without repair enzymes, we would all be dead!). Repair enzymes can only fix mutations “while the paint is still wet”. Beyond that, repair enzymes cannot discern which nucleotides are mutant and which are not. The reality of high mutation rates (after repair) is not really contestable. We can only see and measure those mutations that did not get repaired. Furthermore, the hypothetical divergence of the chimp/human genomes requires mutation rates (after repair) of at least 50–100 mutations per individual per generation. Without high mutation rates evolutionary theory does not work. Repair enzymes do not impact our results.

    2. My modeling allows for recombination. When we stop recombination – extinction is much faster. We in no way overlooked recombination.

    3. We do not require that the genome is the sole source of biological information – we are simply testing the neo-Darwinian model in terms of how the genome arose and how it can (not) be preserved. We show that neo-Darwinian theory is easily falsifiable.

    4. If Allen feels I have “massaged the numbers”, I am happy to run the program with any numbers he honestly feels reflect biological reality.

    The modeling program we are developing is entirely adjustable, and has no built-in bias. It is purely an accounting system! We can even do runs with zero mutations, or use only beneficial mutations – whatever Allen feels is biologically realistic.

    5. My models have no relation to my religious views on the age of the earth. We are ONLY examining the mechanics of mutation-selection.”
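
    For readers curious what a bare-bones “accounting system” of this kind looks like, here is a minimal toy sketch of mutation accumulation under truncation selection. To be clear, this is my own illustration, not Sanford’s actual program, and every parameter (mutation count, fraction beneficial, selection coefficient, population size) is an adjustable assumption in exactly the sense he describes:

```python
import random

def mean_fitness_after(generations=50, pop_size=50, muts_per_offspring=50,
                       frac_beneficial=0.001, sel_coeff=0.001, seed=1):
    """Toy mutation-accumulation 'accounting': track only multiplicative
    fitness per individual as new mutations arrive each generation, then
    apply truncation selection (keep the fitter half of a double brood)."""
    rng = random.Random(seed)
    pop = [1.0] * pop_size                      # everyone starts at fitness 1.0
    for _ in range(generations):
        brood = []
        for _ in range(2 * pop_size):           # two offspring per population slot
            w = rng.choice(pop)                 # inherit a parent's fitness
            for _ in range(muts_per_offspring): # new mutations this generation
                if rng.random() < frac_beneficial:
                    w *= 1.0 + sel_coeff        # rare slightly beneficial mutation
                else:
                    w *= 1.0 - sel_coeff        # common slightly deleterious one
            brood.append(w)
        brood.sort(reverse=True)
        pop = brood[:pop_size]                  # truncation selection keeps the fittest half
    return sum(pop) / len(pop)

print(mean_fitness_after())  # with mostly deleterious mutations, declines below 1.0
```

    The point is the one Sanford makes: the model itself is neutral bookkeeping, and the outcome (decline, stasis, or gain in mean fitness) follows entirely from the numbers one feeds it.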

  207. [...] Designers are apparently some of the most vigorous bloggers on evo, and Paul Nelson’s column on the Altenberg story for Uncommon Descent generated 206 [...]

  208. “For the very same reason, I consider any ID argument based on analogy (e.g. “it looks designed, ergo it must designed) to be entirely without logical foundation.”

    To consider logic, Dr MacNeill, to be coterminous with empirical science is surely the acme of illogicality. The vast body of empirical science, although ultimately derived from the free-ranging discursions of the great paradigm-changers, is proximately the product of minuscule, incremental steps.

    This is possible since it relates to the basest of all the dimensions of our human existence, namely, the material world – Mr McGoo’s specialism, if you like – so to cast it as the sovereign form of knowledge, which seems to be the norm among secular fundamentalists, is folly.

    The great paradigm-changers of the last century, Einstein, Planck, Bohr and Gödel, were all, at the very least, not pantheists, but panentheists, ipso facto, convinced of Intelligent Design. Gödel was a devout Lutheran.

    This conviction was primordial, fundamental to their thinking; your perspective of empirical science as being ‘the tops’ was dubbed by Einstein ‘naive realism.’
    Physics today has come up against a wall of paradoxes, not counter-intuitive, but counter-rational, and the only way to make progress is to incorporate them – which they duly do, the ‘naive realists’ no less than those with an, at minimum, panentheistic world-view.

    Yet the narrative-upholders of empirical science steadfastly refuse to countenance the fact that physicists are facing mysteries which fly in the face of logic as arbitrarily as any Christian or any other religious mystery. No paradox is less opaque and imponderable than any other. They all defy our reason absolutely.

    If you could travel through time and speak to each one of those great paradigm-changers, individually, what would you say to them, in order to convince them that they had ‘got the boot on the wrong foot’? That they were illogical not to consider empirical science as the ultimate form of knowledge?

    What reason do you have for contending that our universe was not designed, and by an awesome intelligence at that, since you expect the application of our intelligence to fathom its secrets.

    Planck pointed out that there are no eternal and immutable laws of nature. We have no evidence, and can have no evidence, to suggest that what was true in the past concerning the so-called laws of nature governing the physical world would remain so in the future. So, straight away, the so-called Christian Fundamentalists have at least a fifty-fifty chance of being correct.

    But tell me, is it not the case that only an omniscient, omnipotent, personal God could cause light to hit an observer, travelling at a constant speed in the same direction, at its own absolute speed, irrespective of the speed at which that traveller is moving? What other agency could effect such a phenomenon?

  209. That is not just intelligent design; it is personalised, intelligent design. Go figure.

  210. [...] the way, Pigliucci was the organizer of the Altenberg 16, so he is no stranger to [...]

