
The argument from incredulity vs. The argument from gullibility

On another blog, the following quotes from Intelligent Thought: Science versus the Intelligent Design Movement are listed approvingly:

“Evolutionary biology certainly hasn’t explained everything that perplexes biologists, but intelligent design hasn’t yet tried to explain anything at all.” –Daniel C. Dennett, philosopher

“Not only is ID markedly inferior to Darwinism at explaining and understanding nature but in many ways it does not even fulfill the requirements of a scientific theory.” –Jerry A. Coyne, evolutionary biologist

“The geneticist Theodosius Dobzhansky famously declared, “Nothing in biology makes sense except in the light of evolution.” One might add that nothing in biology makes sense in the light of intelligent design.” –Jerry A. Coyne, evolutionary biologist

“The supernatural explanation fails to explain because it ducks the responsibility to explain itself.” —Richard Dawkins, evolutionary biologist

“What counts as a controversy must be delineated with care, as we want students to distinguish between scientific challenges and sociopolitical ones.” —Marc D. Hauser, evolutionary psychologist

“Incredulity doesn’t count as an alternative position or critique.” —Marc D. Hauser, evolutionary psychologist

Leaving aside ID, the subtext of these quotes is, “We’ve got a theory that has vast gaping holes, we don’t have a clue how the theory might fill the holes, but we still believe the theory accounts for what actually happened.” To challenge this is to be guilty of “an argument from incredulity,” in other words, of refusing to believe despite overwhelming evidence. Isn’t it rather that to accept this is to be guilty of “an argument from gullibility,” of believing despite the overwhelming absence of evidence?


68 Responses to The argument from incredulity vs. The argument from gullibility

  1. I think the greatest weakness of ID is in the failure (so far) of ID theory to elucidate and explain the ways in which it can add fruitfully to the overall scientific enterprise. In other words, one could easily say “So what? Let’s all agree that what appears designed is designed. From now on, we will use the word designed. Now what?” Of course, I don’t agree with this attitude, but I could easily see how it could be reached.

    Also, anti-ID people love to say “incredulity is not an argument”. However, incredulity, like curiosity, desire, hope, and reason, is a starting point from which an argument certainly can be built. Likewise, a total lack of incredulity is no basis for believing the outrageous claims of Darwinism.

  2. “I think the greatest weakness of ID is in the failure (so far) of ID theory to elucidate and explain the ways in which it can add fruitfully to the overall scientific enterprise.”

    I may be naive about how science works, but why should this be the test? Should scientists believe in a falsehood because belief in the falsehood is “fruitful to the overall scientific enterprise”? Or is it the fruitfulness that proves/disproves the theory?

  3. The argument from incredulity objection, when pressed, usually turns into an argument from ignorance objection against ID (as is evident in the quotes above). This latter objection has never held up under scrutiny. I’ve addressed it briefly here:

    http://www.designinference.com.....gsofID.pdf (pp. 21-23)

    http://www.designinference.com.....cation.pdf (pp. 27-28)

    I expect I’ll be doing a full-scale treatment of this objection in the next few months.

  4. I love it! I always wondered about the argument from credulity, but the argument from gullibility sounds so much better and is, I think, so much more accurate.

  5. Let me offer yet another example from Intelligent Thought: Science versus the Intelligent Design Movement:

    “So long as the total odds against a planet’s evolving a life-form capable of anthropic reflection does not exceed the number of planets in the universe, we have an adequate and satisfying explanation for our existence.” – Richard Dawkins

    Is this not an argument from gullibility? Can someone offer me some analogies from, perhaps, coin tossing, dice, or poker?

    e.g., Yes, this player held a royal flush 5 hands in a row, but there have been, are, and will be poker hands going on all the time, so that is a perfectly adequate and satisfying explanation for this instance of a royal flush 5 hands in a row.

    I mean, seriously, is all that is required for “an adequate and satisfying explanation for our existence” the possibility that there are a lot of other planets?

  6. “So long as the total odds against a planet’s evolving a life-form capable of anthropic reflection does not exceed the number of planets in the universe, we have an adequate and satisfying explanation for our existence.” – Richard Dawkins

    An argument from gullibility?

  7. Somehow, the accusation that one is indulging one’s personal incredulity is taken at face value, and is supposed to invalidate whatever point one was working on. It’s used as a shaming device. In the faith of Darwinism, it is some sort of cardinal sin.

    I consider my personal incredulity a natural asset, a fallible but important caution signal.

    I was told that most people do not really have a good natural grasp of very large numbers and therefore don’t realize what a billion years really means. I’m not so sure about that, but in any case it cuts both ways. If we do not have a good natural grasp of what a billion years can do, then we may just as easily attribute to vast stretches of time MORE creative power than they contain, as less.

  8. The comments from Dennett, Dawkins, Hauser, and Coyne are all assertion and no substantiation. Those are exactly the sorts of statements “designed” to appeal to “fundamentalists”–in this case Darwinian Fundamentalists–who seek reassurance in the face of threats to their worldview.

    Empty assertions convince only the convinced.

    All anyone has to do to overthrow ID is simply show how specified complexity arises by chance and/or necessity. No rhetoric needed.

    That the volume of the repetitious rhetoric has gone up over the last several years and keeps going up is evidence that we are in the “fight stage” described by Gandhi (in another recent post on this blog): the end of the fight is near, certainly less than 20 years away, perhaps less than 10.

    They can keep repeating these worn-out talking points only so long and only so loud. After that, they will simply have nothing else to say. Darwinian evolution can then take its place alongside Ptolemaic astronomy in the annals of Wikipedia under the heading “History’s Great Failed Ideas”.

  9. Isn’t it rather that to accept this is to be guilty of “an argument from gullibility,” of believing despite the overwhelming absence of evidence?
    Yes

  10. Darn. I was reading comments in the admin view (I don’t see the article there), I read Bill and Tina’s comments about incredulity, and I was just about to make a comment saying to keep in mind the opposite of incredulity is gullibility. Then I see the title and whadaya know… Bill beat me to it. :-)

  11. ID is the only theory under which certain research could be validated.

    For example, it is observed that introns (‘junk’ to RM+NS) have Functional Sequence Complexity (FSC) of the same order as human language. Studies have revealed that unexpressed introns are informationally more dense than exon expressions.

    Anyone who has designed any sort of code knows the importance of embedding documentation in the source. When the code is compiled the documentation does not appear in the resulting binaries.

    Similarly, introns are spliced out and not expressed in the final protein.

    Source code documentation contains information about the algorithms such as:
    - Meta-data (ontological descriptions)
    - Pseudo code (methodological descriptions)
    - Copyright information (intellectual property notifications)

    If we believe that genetic code is designed by an intellect, then shouldn’t we be looking for the copyright information? This intellect would probably have considerable legal abilities, not to mention foresight.

    Exam question: What impact would this have on genetic patents?
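
    (The compile-time analogy is easy to demonstrate; here is a minimal Python sketch, with a hypothetical function standing in for a gene. The comment survives in the source but leaves no trace in the compiled bytecode.)

        import dis

        def transcribe(dna: str) -> str:
            # This comment is "intronic": it exists in the source file but
            # leaves no trace in the compiled bytecode printed below.
            return dna.replace("T", "U")

        dis.dis(transcribe)  # instructions only; the comment has vanished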

  12. Re #5. Dawkins’ argument is reasonable. If there are only a few planets in the universe that have life, then it is only on those planets that anything will be asking the question “what is the probability of a planet supporting life?”. An analogy – suppose I programmed a computer to randomly generate poker hands and to stop and attract my attention (perhaps by a sound) whenever it dealt a royal flush. The buzzer goes and I wander over to the screen. I have an excellent explanation of why I see a royal flush and should not be at all surprised, even though the chances of any given hand being a royal flush are tiny.

    Dawkins’ argument works whether life arose on many planets or on just one. The argument explains everything, thus it explains nothing. With the computer card dealer, what if every hand were a royal flush? Would you think

    a) the odds of this are very small but not impossible
    b) the computer isn’t generating hands at random

    -ds

  13. Mung (point 5): You’re confusing two different things. One is the probability that a Royal Flush is ever observed, and the other is the probability that the Royal Flush is observed in that particular hand. Dawkins’ point is that intelligent life has been observed somewhere (i.e. here), but he isn’t saying that there’s any prediction about where it will happen. The person who gets a Royal Flush will sit there and think “Wow! What is the probability of that happening!”. Of course, the probability of it happening at all is very high, the probability that it’s them is very low, but it has to happen to somebody.

    The same argument goes for intelligent life. If intelligent life arose at random, and there are enough planets where it could happen, then it will happen. That it happened on this planet, rather than any other, would be unlikely, but if intelligent life is going to appear, it has to appear somewhere.

    Your confusion is a common one: it’s called the Prosecutor’s fallacy.

    Bob

    Life arising at random is more like the card player in your example getting 100 Royal Flushes in a row. One is a fluke. 100 in a row means, to any objective person, the hands aren’t random. Your confusion is a common one. It stems from a psychological defense mechanism called denial. I hold out hope for you to come out of denial though since I almost got you to admit that, if it rains hard, bits of your arms won’t wash off and grow into new Bobs. :-)

  14. “Evolutionary biology certainly hasn’t explained everything that perplexes biologists, but intelligent design hasn’t yet tried to explain anything at all.”

    Not so. Rather than ‘ad hominem’ attacks, critics invoke ‘ad logicum’ attacks, branding ID explanations as arguments from incredulity, which they are not.

    “Not only is ID markedly inferior to Darwinism at explaining and understanding nature but in many ways it does not even fulfill the requirements of a scientific theory.”

    Darwinists (and lay people) treat the plethora of biological data as evidence of evolution (based on ns/rm). ‘Understanding nature’ is not the same as establishing a mechanism for macroevolution.

    “The geneticist Theodosius Dobzhansky famously declared, “Nothing in biology makes sense except in the light of evolution.” One might add that nothing in biology makes sense in the light of intelligent design.”

    Dobzhansky’s proclamation is a good example of an argument from personal incredulity. The second part, which presumes the impossibility of an entity apart from our direct knowledge, is, in effect, a statement of a negative, and is therefore conjectural, untestable, and without logical force.

    “The supernatural explanation fails to explain because it ducks the responsibility to explain itself.”

    Richard Dawkins thinks that the ‘Design Inference’ requires the asserter to then proceed to ‘explain’ the designer. Failing that, the inference is invalid. Oh, and let’s not forget, “who then made God?”

    “What counts as a controversy must be delineated with care, as we want students to distinguish between scientific challenges and sociopolitical ones.”

    ID is a theory based on complexity, synergy, order, purposeful rather than survival-driven causation, aesthetics, and more. The ‘social and political’ challenges are man-made, and rather than resting on logic traceable by science, they rest on personal edicts.

  15. Correction: changed ‘inference’ to ‘challenge’ in the second sentence.

    Richard Dawkins thinks that the ‘Design Inference’ requires the asserter to then proceed to ‘explain’ the designer. Failing that, the challenge is invalid. Oh, and let’s not forget, “who then made God?”

  16. Is Richard Dawkins still bitter that his mother laughed when as a 5 year old he asked her “Who made God?”

    Richard fails to realise that not knowing where God came from, when God is not ruled by the Cosmos, is not the same as not knowing where natural life, which is ruled by the Cosmos, came from.

  17. tinabrewer:

    Actually, the way in which ID is scientifically fruitful is that it has you asking questions which are _more pertinent_ to the phenomena. While knowing the creation mechanism might be interesting, when something is designed the research focus shifts to different (and more applicable to the phenomena) questions, such as (but not limited to):

    http://www.discovery.org/scrip.....038;id=259

    Now, a lot of scientific research already occurs under this set of questions. However, they are only warranted from a design inference, not from a Darwinian assumption. So, to the degree that these questions are being used in biology, the ID position is already being used without recognition.

    I presented an overview of what I thought of as the ID research program here:

    http://crevobits.blogspot.com/.....earch.html

  18. “Life arising at random is more like the card player in your example getting 100 Royal Flushes in a row. One is a fluke. 100 in a row means, to any objective person, the hands aren’t random.”

    Interesting – the chances of getting a Royal Flush are roughly 1 in 3 million (http://www.indepthinfo.com/pro.....lush.shtml) i.e. 1 in 10^7. So the chances of getting 100 Royal Flushes in a row are 1 in 10^9 i.e. about 1 in a billion. The Hubble space telescope estimates there are hundreds of billions of galaxies in the universe, each of which contains billions of stars (http://imagine.gsfc.nasa.gov/d.....1127a.html). Assuming at least one planet per star on average, then if the probability of life arising by chance on a randomly selected planet in the period since the universe began is about the same as 100 Royal Flushes in a row – then it has almost certainly arisen many, many times. :-)

  19. DS – I just realised my response to your comment on Royal Flushes was wrong. Publish it if you want. The chances of 100 Royal Flushes are of course 10^700 not 10^9.

  20. johnnyb: I will read the link when I get a chance today. I appreciate it. As I said, I don’t agree that ID is scientifically unfruitful; I was only meaning to say that as far as public knowledge about the ID movement goes, the weakest link is definitely the sense of “so where does this help us scientifically?” To listen to some of the bigwigs who publicly oppose ID, all interest in nature and its mechanisms will shut down and a new intellectual dark age will set in if anyone in school gets a whiff of the idea that we are not an accident…

  21. How do you get 10^700? I get 5.3 times 10^582
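
    A quick way to check the arithmetic in Python (using both per-hand figures that appear in this thread):

        import math

        for odds in (649_740, 3_000_000):        # exact odds, and the rougher "3 million" figure
            exponent = 100 * math.log10(odds)    # (1/odds)^100 = 1 in 10^exponent
            print(f"1 in {odds:,} per hand -> streak odds ~ 1 in 10^{exponent:.0f}")
        # 649,740   -> ~ 1 in 10^581
        # 3,000,000 -> ~ 1 in 10^648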

  22. “The same argument goes for intelligent life. If intelligent life arose at random, and there are enough planets where it could happen, then it will happen. That it happened on this planet, rather than any other, would be unlikely, but if intelligent life is going to appear, it has to appear somewhere.

    Your confusion is a common one: it’s called the Prosecutor’s fallacy.

    Bob.

    True to a point; on the other hand, as a defense of abiogenesis it relies on the specifics of the maths – and complexity multiplies up a lot faster than suitable planets add up. This creationist (http://origins.swau.edu/papers.....fault.html) argues that it would take on the order of 10 to the 186th power years for the spontaneous generation of life on earth to become a better than even chance. Assuming those figures, you’re going to need more suitable planets than there are atoms in the universe before a ‘lucky break’ scenario becomes reasonable. To refute him you’re either going to need a mechanism to bridge the probability gap or be able to show that his odds are wrong (estimates for the odds against the spontaneous generation of a cell from a ‘prebiotic soup’ have varied but have all been in the hundred-plus orders of magnitude).

    Also, you’d have to concede that simply accepting fluke against enormous odds would have to be the biggest ‘science stopper’ of all. Surely once we’re allowed to say of things, ‘oh well, far out chances have to come up _somewhere_’, the quest for understanding has ended?

  23. Life arising at random is more like the card player in your example getting 100 Royal Flushes in a row. One is a fluke. 100 in a row means, to any objective person, the hands aren’t random

    No, you haven’t understood my point. It would be like someone getting 100 Royal Flushes in a row. Yes, very unlikely, but if there are enough people, it will happen.

    I wanted to correct the logical fallacy, without getting into what the numbers actually are: it wasn’t my main point, and it’s obvious that I will disagree with most readers here. Any discussion of the actual numbers would have quickly degenerated into insults and mud-slinging.

    Bob

    I understood your point. You don’t understand the difference between very large numbers and infinity. The universe is finite in size and age. You’re right that if there are enough people, it will happen. The point is that there aren’t enough people because even though the number might be very large, it is finite and it isn’t big enough. -ds

  24. Colin DuCrane, “If we believe that genetic code is designed by an intellect, then shouldn’t we be looking for the copyright information?”

    I believe that some copyright information has already been detected. Denton presents the near-perfect phylogenetic tree that is rendered in the cytochrome C gene. A response to this anomaly was the molecular clock hypothesis. Yet recent studies of the molecular clock hypothesis show that, though the hypothesis offers some validity in dating, it cannot come close to explaining why the cytochrome C in an insect differs from that of a tree by the same amount as a man’s does. According to molecular clock theory, the cytochrome C in the insect should be drifting more rapidly than in man.

    The cytochrome C seems to have other surprises. “Despite these corrections, the rattlesnake cytochrome c sequence still more closely resembles human cytochrome c than it does that of any other protein we know.” (Biochem J. 1991 Mar 15;274(Pt 3):825-31) I find this particular anomaly especially interesting in light of the role that the snake plays in the Genesis account.

    I believe that if we look for such signatures of the inventor, we will find them.

  25. tinabrewer: I’m constantly reading articles about how scientists have “discovered” some biological working that helps them in some sort of design – for example, the bonding strength of some organism to a leaf or tree branch. What typically happens is that a scientist is studying the organism and then says “hey, that’s interesting, it seems to work just like a …[insert human made machine here].. only better”. These are great discoveries, but possibly too few and far between.

    What if, however, rather than stumbling on design, scientists went in with a design bias and said at the outset:

    - since the human circulatory system is designed, what can we learn from it to build more resilient water distribution systems, or cooling systems for factories.

    - since the electrical circuitry in the brain is designed, what can we learn from it to build more resilient electrical distribution systems to recover during catastrophic failure (hurricane, terrorism, other).

    - since the bacterial flagellum is designed, what can we learn from it to build more efficient motors that are self powering, and change direction in an instant.

    The reality is, if the BF runs at a really high RPM and can then change direction in about 1 or 2 revolutions, shouldn’t we be asking the question: what can we learn from this to build better motors ourselves?

    It’s like all those funny science fiction movies where the military finds a crashed spaceship and tries to figure out how the aliens built it – and then discovers velcro LOL! Well, in a somewhat similar way, what if we changed our perspective and said that the biological systems are designed, and made for exploration and reverse engineering? We might make a lot more progress on some really useful and cool inventions.

  26. Tinabrewer, no doubt you are right that there is much engineering to be learned from biological examples, but it should be kept in mind that biological designs are often far from optimal. For example, photosynthesis has an efficiency of roughly 1% while recent prototypes of solar panels reach efficiencies exceeding 30%. Makes one wonder why oh why is the designer so wasteful?

    What is the source of the efficiency numbers for photosynthesis and solar cells? -ds

  27. A few comments on the material covered so far.

    Let us assume that doubters are guilty of an argument from incredulity. How does one explain this incredulity in a Darwinian world? Surely it must offer some reproductive advantage, as it seems to be a very widespread phenomenon.

    People seem to have missed one of my points in questioning the Dawkins argument. Since when is “it could have just happened by chance” an adequate and satisfying explanation? Since when does “by chance” explain anything at all, much less offer an adequate and satisfying explanation?

    At what point is “chance” no longer an adequate and satisfying explanation?

    Is Dawkins unwittingly validating the EF?

    If Dawkins feels chance is an adequate and satisfying explanation for our existence, why does he need chance plus natural selection, and why are creationists wrong to use chance arguments against evolution? Why can chance be an adequate and satisfying explanation for our existence, while anti-chance arguments are disqualified?

    Dawkins does not practice what he preaches, and that more than anything should cause us to question the validity of his argument and the rationality of his position.

    For any feature of the biological world, we should be able to state that it exists, and given that it exists, and that there are billions and billions of planets, we have an adequate and satisfying explanation for that feature. Who needs evolutionary biology, evolutionary biologists, and therefore, who needs Richard Dawkins?

  28. My sources for photosynthesis efficiency: personal communication (Freeman Dyson, yes that one) and this lecture note here:

    patzek.berkeley.edu/E11/Photosynthesis.pdf

    Which says it is based on:

    Energy, Plants and Man by David Walker, Oxy Graphics, England, Second Edition, 1993,
    Distributed in North America by University Science Books, 20 Edgehill Road, Mill Valley.

    Wikipedia gave me the 30% on prototype solar panels.

    Of course it might still be argued that the 1% of natural photosynthesis represents a local optimum, which would not be entirely unexpected from an evolutionary NS+RM viewpoint.

    I found a number of references saying photosynthesis efficiency is almost 35% in optimal conditions. The maximum achieved for solar cells is about 20%. One thing to keep in mind about photosynthesis is that its efficiency grows as light intensity diminishes. This is critical for plants that grow under dense forest canopies. Solar cells do not exhibit that characteristic. -ds

  29. ds, I guess the 35% your biochemistry book refers to is the theoretical maximum yield of the chemical reaction: 2 H2O + CO2 + 8 photons -> CH2O + O2 + H2O. If CH2O ~ 112 kcal/mol and a red-light photon ~ 42 kcal/mol, then 112/(8*42) = 0.33. But there are practical factors such as photorespiration that reduce the real-world efficiency well below that maximum (see link posted before).
    Apparently, maximum efficiency occurs at 20% of full sunlight. That might be good for plants under dense canopies, but it doesn’t seem that great for all plants/algae.
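
    Spelled out with the same figures as above, the bookkeeping is just:

        E_CH2O   = 112   # kcal/mol stored per CH2O fixed
        E_PHOTON = 42    # kcal/mol carried by a red (~680 nm) photon
        N_QUANTA = 8     # photons required per CO2 fixed

        print(E_CH2O / (N_QUANTA * E_PHOTON))   # ~0.33, the theoretical ceiling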

    The efficiency of the primary chemical reaction is close to 100%. Only about 43% of the energy in sunlight is in usable wavelengths, which brings the chemical efficiency down to 35% (stored as ATP). The 1% number you refer to is calories in food crops. Even so, sugar cane is 8% efficient, which is far more than 1%. Furthermore, your comparison to solar cells does not include losses in transmission and storage, but you are including those for photosynthesis. Immediately available energy from either (the best) photovoltaics or photosynthesis appears to be quite comparable. -ds

  30. Also, when considering optimal designs, consider other things the plants are doing. Therefore, increasing photosynthesis may degrade other processes further, which would not be ideal.

  31. On the issue of optimal design, I have often heard it said that all design is a matter of trade-offs which take into account the basic goal of the machine and the constraints of materials and environment. It seems to me so one-sided and artificial to isolate a particular function (say, photosynthesis) and use our own particular desires in replicating that system as some sort of gold standard by which we feel entitled to judge that the design is “sub-optimal”.

  32. Raevmo: “Tinabrewer, no doubt you are right that there is much engineering to be learned from biological examples, but it should be kept in mind that biological designs are often far from optimal. For example, photosynthesis has an efficiency of roughly 1% while recent prototypes of solar panels reach efficiencies exceeding 30%. Makes one wonder why oh why is the designer so wasteful?”

    So many of these “dysteleological” arguments, such as the one above, suffer from the same flaw: “optimization” is considered solely in terms of a single variable. The same sort of tunnel vision (no pun intended) can be seen in the complaint that the eye is “wired backwards”.

    A plant that is only one percent efficient photosynthetically seems to survive and reproduce just fine, thank you. Maybe a higher efficiency would produce unsustainable levels of toxins, or require an inordinate amount of input nutrients to sustain the higher level. Plants must exist under a vast range of conditions (such as the level of sunlight, as DS has noted) and compete against each other, insects, and animals. It is not the leaf’s sole purpose to do photosynthesis (though, admittedly, it is a, and perhaps the, major one).

    The idea that we could have done it better is laughable. The eye would likely be blind, and the plant dead in a week, given the “improvements” that have been suggested.

    The proper question is not “is the design optimal”, but “is it designed”.

  33. Re #31. We can never prove that there isn’t some bizarre advantage to an apparently poor design in an organism – but some features really do seem bizarrely inefficient. I like the example that Jerry Coyne gives in the book that started this thread. The nerve that connects the brain to the larynx loops round the aorta. Furthermore, it does this in the giraffe – a journey of fifteen feet to cover the one foot from the giraffe’s brain to its larynx.

    some features really do seem bizarrely inefficient

    That sounds like an argument from incredulity to me. -ds

  34. …it should be kept in mind that biological designs are often far from optimal. For example, photosynthesis has an efficiency of roughly 1% while recent prototypes of solar panels reach efficiencies exceeding 30%. Makes one wonder why oh why is the designer so wasteful?

    Why is a 1% efficiency in photosynthesis wasteful? What, precisely, is being wasted, and how?

  35. Mung: “Why is a 1% efficiency in photosynthesis wasteful? What, precisely, is being wasted, and how?”

    All processes have less than 100% efficiency. This observation is formalized as the Second Law of Thermodynamics.

    Plants absorb light in the 400 to 700 nm range, or about 45% of the available light. Quantum considerations require the absorption of eight to ten photons to fix each CO2 molecule, an efficiency of about 25%, for an 11% theoretical efficiency. However, other factors, such as reflection, respiration, and less-than-optimal lighting conditions, result in a practical efficiency of about 3% to 6%. Much of this is utilized by the plant itself. If you measure fixed carbon, efficiency is only about 1% (wheat) to 3% (sugarcane).

    The rest of the energy is lost, mostly as residual heat.
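
    Chaining those factors reproduces the quoted numbers; a rough sketch (the final 0.4 factor is only an illustrative stand-in for the listed losses, not a measured value):

        band        = 0.45          # fraction of sunlight in the 400-700 nm band
        quantum     = 0.25          # eight to ten photons per CO2 fixed
        theoretical = band * quantum
        print(f"theoretical: {theoretical:.1%}")   # ~11%

        losses    = 0.4             # stand-in for reflection, respiration, lighting
        practical = theoretical * losses
        print(f"practical:   {practical:.1%}")     # ~4.5%, inside the 3-6% range cited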

    I read that sugar cane is 8% efficient at fixing carbon. This isn’t comparable to solar cell efficiency as that is the raw electrical output. The electricity must be stored in order to be comparable and there will be losses incurred there. If, for instance, using lead-acid battery, one must also calculate the energy required to manufacture the battery as the photosynthetic competition has to manufacture its own storage device. -ds

  36. “Where’s the Beef?” – The Little People.

    I sense a collective desire on this blog to move beyond doctrinal debates and show some results of efforts in ID theory.

    I would certainly donate some time to a seti@home type of distributed computing project based on measuring “conservation of information” in genome data.

    seti@home http://setiathome.ssl.berkeley.edu/

    I found some genome@home type projects, but they seem to all be RM+NS predictors.

    genome@home http://www.stanford.edu/group/pandegroup/genome/
    folding@home http://folding.stanford.edu/
    rosetta@home http://boinc.bakerlab.org/rosetta/

    The results of such a project could be very revealing indeed.

  37. Sorry about the split infinitive in the last post.
    I just want to boldly go where no one has gone before…

  38. Just to point out, the plant builds itself. Last time I checked, if you purchase a solar panel, it won’t make new ones just by sitting outside in the sun.

  39. Re #3 above. I have just finished reading the 2005 paper on specification. A couple of minor points and one deeper concern:

    On page 11 (and elsewhere): “because (R) was constructed by flipping a coin, it is very likely that this is the shortest description of (R)” – i.e. for most bit strings the shortest description is simply to copy the string. This isn’t true. Virtually all bit strings can be compressed using an algorithm such as Huffman compression (http://www.prepressure.com/tec.....uffman.htm) – that is how computer compression algorithms work.

    On page 22: credit and ATM card numbers actually have more redundancy than simply specifying the system number – so the number of possible numbers is a lot less than 10^14, e.g. the last digit is a check digit.

    The deeper concern. In section 2, Fisherian hypothesis testing, the rejection area is *not* necessarily defined as an area where the pdf is below a certain level or above a certain level. This is easily seen by considering the case of one-tailed testing. The deeper point here is that the null hypothesis is defined in relation to an alternative hypothesis which might give a better explanation of the results in the rejection area. That is how we decide between one-tailed and two-tailed tests. The alternative hypothesis is just ignored in section 2. This problem turns up again in addendum 2. The addendum claims that Bayesian analysis is parasitic on the Fisherian approach, but actually it is the other way round.
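
    To make the one-tailed point concrete, a small sketch (Python with scipy; standard normal null): the density at z = -3 is far lower than at z = +1.7, yet only the latter falls in the rejection region, because the region is fixed by the alternative hypothesis, not by where the pdf is low.

        from scipy.stats import norm

        z_crit = norm.ppf(0.95)    # one-tailed test at alpha = 0.05: reject when z > 1.645
        print(z_crit)

        print(norm.pdf(-3.0))      # ~0.0044: very low density, yet NOT in the rejection region
        print(norm.pdf(1.7))       # ~0.0940: higher density, yet IS in the rejection region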

    Cheers

  40. Whoops – sorry – another error. Re my post #39 above. The string R was randomly generated, therefore it is not compressible by any algorithm. The other two comments stand.

  41. Re #38 – And how! More importantly though, a design inference does not require perfection. Archaeologists, particularly, frequently recognize artifacts as artifacts where the design could clearly be improved upon.

  42. Mark —

    Between your post and your correction I wasn’t sure what exactly you were trying to say, so I thought I’d correct this in case you hadn’t already:

    “Virtually all bit strings can be compressed using an algorithm such as Huffman compression”

    This is false. There are many bit strings which cannot be compressed. See FAQ entry #9 here:

    http://www.faqs.org/faqs/compression-faq/part1/

  43. Johnnyb

    You are right. It is many years since I learned about compression algorithms and I made a stupid mistake. It is interesting that nowadays we represent more and more of the world in the form of bits (photographs, sounds, video) and it is almost always compressible without loss of information. The truly random bit string – where the chances of the next bit being 1 or 0 are fixed and not a function of the preceding bits – is rare. I am not sure if it relates to the paper though.
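
    The lossless side is easy to demonstrate with zlib (any DEFLATE implementation behaves the same way): structured text collapses, random bytes do not.

        import os
        import zlib

        random_bytes = os.urandom(10_000)                        # truly random bytes
        english_text = b"the quick brown fox jumps over the lazy dog " * 250

        for label, data in (("random", random_bytes), ("text", english_text)):
            packed = zlib.compress(data, 9)
            print(f"{label}: {len(data)} -> {len(packed)} bytes")
        # random: 10,000 -> ~10,011 (slightly larger); text: 11,250 -> a few hundred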

    Cheers

    I spent some time writing compression algorithms. The most common compression algorithms for sound and graphics are “lossy”. Information is lost in jpeg, mpeg, and mp3, for example. What may or may not be lost is a perceivable amount of quality. The telephone companies, which have a BIG monetary interest in every fraction of a percent more voice they can shove down existing pipelines, have a standard called “toll quality” they use in determining how much (and what type) of information loss is acceptable. Real-time voice compression on personal computers (many moons ago) is what I focused on, for two-way conversations over a packet-switched X.25 network with 9600 baud modems on each end. -ds

  44. As Stu Harris commented on a previous UD thread: “What’s wrong with an argument from personal incredulity anyway? If I find someone’s proposed explanation for something to be incredulous, what is necessarily wrong with that? It can’t always be due to my lack of imagination, it’s just as possible that it’s due to a bad explanation. It’s up to the one making the proposition to go beyond my rational incredulity, my skepticism, and convince me of their argument, and change my inference to the best explanation. In the case of the proponents of Darwinism, it’s up to them to show the truth of their explanation for evolution and not just make appeals to imagination.”

    I added that one commentator made the following observation: Imagine that a mathematician came up with a new theorem but had not proven it. A colleague challenges the theorem, saying that it doesn’t make sense to him. The first mathematician replies, “Just because you are personally incredulous about my theorem doesn’t make it false!” Would we expect this argumentation to convince the mathematics community of the validity of the theorem, and to base a new branch of mathematics upon it?

    Faith in Darwinian mechanisms to explain all of life really does demonstrate gullibility when one considers all of the obvious, gaping, evidential and logical holes in the theory.

  45. ds: “I read that sugar cane is 8% efficient at fixing carbon.”

    Eight percent of the absorbed light. Most of the light is reflected. Plants are green, after all.

    From the point-of-view of the plant, the 3% to 6% efficiency of what is available to the plant for its own metabolism probably best addresses Mung’s query about the nature of waste.

    The color of an object is the light that it doesn’t absorb. Green plants absorb all visible light frequencies EXCEPT green. Black objects absorb it all. White objects reflect it all. This is very basic physics that you should have learned in the sixth grade. -ds

  46. Dave S writes:

    Dawkins’ argument works whether life arose on many planets or on just one. The argument explains everything, thus it explains nothing. With the computer card dealer, what if every hand were a royal flush? Would you think

    a) the odds of this are very small but not impossible
    b) the computer isn’t generating hands at random

    No one does a better job of showing how vapid and silly this line of reasoning is than Alvin Plantinga of Notre Dame. In his review of Dennett’s tome Darwin’s Dangerous Idea, Plantinga writes:

    And given infinitely many universes, Dennett thinks, all the possible distributions of values over the cosmological constants would have been tried out; [7] as it happens, we find ourselves in one of those universes where the constants are such as to allow for the development of intelligent life (where else?).

    Well, perhaps all this is logically possible (and then again perhaps not). As a response to a probabilistic argument, however, it’s pretty anemic. How would this kind of reply play in Tombstone, or Dodge City? “Waal, shore, Tex, I know it’s a leetle mite suspicious that every time I deal I git four aces and a wild card, but have you considered the following? Possibly there is an infinite succession of universes, so that for any possible distribution of possible poker hands, there is a universe in which that possibility is realized; we just happen to find ourselves in one where someone like me always deals himself only aces and wild cards without ever cheating. So put up that shootin’ arn and set down ‘n shet yore yap, ya dumb galoot.” Dennett’s reply shows at most (‘at most’, because that story about infinitely many universes is doubtfully coherent) what was never in question: that the premises of this argument from apparent design do not entail its conclusion. But of course that was conceded from the beginning: it is presented as a probabilistic argument, not one that is deductively valid. Furthermore, since an argument can be good even if it is not deductively valid, you can’t refute it just by pointing out that it isn’t deductively valid. You might as well reject the argument for evolution by pointing out that the evidence for evolution doesn’t entail that it ever took place, but only makes that fact likely. You might as well reject the evidence for the earth’s being round by pointing out that there are possible worlds in which we have all the evidence we do have for the earth’s being round, but in fact the earth is flat. Whatever the worth of this argument from design, Dennett really fails to address it.

  47. My response to the “infinite number of tries” argument is this:

    Did you know that if you take the entire text of the King James Bible and ASCIIise it, you get a huge number? Did you know that that exact number is found, unabridged, somewhere in the digits of pi (assuming, as is widely believed but unproven, that pi is normal)?

  48. Zachriel: “Most of the light is reflected. Plants are green, after all.”

    ds: “The color of an object is the light that it doesn’t absorb.”

    The colors it doesn’t absorb are reflected. What part of “Plants are green.” did you not understand? Most of the available sunlight is not absorbed.

    ds: “Black objects absorb it all.”

    There is no perfect absorption, but reasonably correct. Green plants cannot access 100% of the available solar energy as they reflect about 55% of the available light energy.

    Chlorophyll absorption spectrum
    http://www.mbari.org/staff/ryjo/cosmos/Cabs.html
    (Draw a line across the top of the graph at 100%. The area between the absorption line and the 100% line represents lost photons. The graph could be skewed to account for the increased energy of the blue light, but it still gives you the general idea.)

    [sigh] This isn’t introductory physics. Telling me “plants are green” does not support the statement that most of the light is not absorbed. Only a small portion (green light) is actually reflected. In fact most of the visible spectrum is absorbed and utilized. The next part of your lesson on the physics of light is that there is considerable energy in the non-visible wavelengths from infrared to ultraviolet. We’ll discount lower and higher frequencies than those because they aren’t commonly called “light”, but there is also energy in microwave and lower frequencies as well as soft x-ray and higher frequencies of electromagnetic radiation. It’s not that the plant doesn’t absorb at infrared and ultraviolet but that chlorophyll doesn’t utilize it. If there are still parts of this you don’t understand go somewhere else for the answers and come back when you know more. -ds

  49. Re Dave’s comment on #43. It is true that the most common algorithms are lossy. Nevertheless, most real-life bit strings can be compressed to some extent using a non-lossy algorithm, e.g. PKZIP. I am not quite sure if there are any deeper implications to this. It does suggest that true randomness is rather rare in reality – but so what?

    Compressibility and information content are directly related: an incompressible stream is carrying as much information as physically possible. Compressibility of DNA was one of my first questions when I started investigating the ID claims. The implications are important, as high compressibility means the data channel is wasteful of important resources. More DNA generally means a bigger cell, longer cell division time, and more energy required to divide. Random mutation IMO would be likely to result in high compressibility because it wouldn’t tend to find algorithmic solutions to data redundancies, while a competent designer would recognize the redundancies and eliminate them. There may be mixtures of both in any given genome. You should probably read the following paper for yourself before I say more. -ds

    A Compression Algorithm for DNA Sequences and Its Applications in Genome Comparison
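
    As a baseline for that comparison: a four-letter alphabet carries at most 2 bits per base, so DNA stored as text is at least fourfold redundant before any biological structure is even considered. A quick sketch (a random sequence, so this probes only the alphabet redundancy, not real genomic structure):

        import random
        import zlib

        dna = "".join(random.choice("ACGT") for _ in range(100_000))
        packed = zlib.compress(dna.encode(), 9)
        print(f"{8 * len(packed) / len(dna):.2f} bits/base")  # near, but above, the 2.00 floor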

  50. Current video compression standards in use, including MPEGx and VC1, are not only lossy, but highly lossy, depending on parametric considerations and source video content such as motion, scene transition rates, and noise content (any noise in the source causes compression efficiency to drop). But Mark Frank may be right in the sense that most people think of “information” as a subjective mass of “something”. In the subjective sense, these algorithms were designed to remove information from the source video content correlating to features that most people would not notice anyway. A simple example is that the human visual system is much more sensitive to fine detail in black and white than in color, so literally 75% of the color information in the video sequence is simply removed (thrown out by the MPEG2 algorithms) before any compression happens. Most casual observers would not notice this. Then the compression algorithms go to work and can ultimately reduce the amount of information contained in the transmitted video by a factor of 7 or more (more still if lower-quality video output is acceptable). To gain this level of compression, the algorithms account for what features of video the human being is most sensitive to, and then they essentially remove or reduce the least important ones first (color information, for example). In the end, the final video output you see on your nifty plasma TV contains an order of magnitude (or more) more noise than the original, but it is located spectrally in frequency regions we are least likely to notice, or spatially in areas of the video that might be hidden or unnoticeable.

    I personally find the outputs of most video compression and decompression algorithms quite objectionable. Compression artifacts such as cosine transform block boundaries and high-frequency ringing at sudden black/white transitions are two of the most notable. But worse than those are the motion artifacts that come about as a result of the algorithm actually dropping entire frames of video at the transmitting end and then having to try to reproduce them at the receiving end by interpolation.
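
    For the record, the 75% figure is just the 4:2:0 chroma subsampling arithmetic; a quick check (the frame size below is arbitrary; the ratio is the same for any):

        w, h = 1920, 1080                        # any frame size gives the same ratio
        full_chroma = 2 * w * h                  # Cb and Cr planes at full resolution
        kept_chroma = 2 * (w // 2) * (h // 2)    # 4:2:0: half resolution in each axis

        print(1 - kept_chroma / full_chroma)     # 0.75 -> 75% of the color discarded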

    One important aspect of video compression of this type that may relate to the discussion at hand is that the resulting data set, post compression, is highly random and sensitive to transmission errors. The code has been designed to be as insensitive as possible, but conceivably the loss of just a few bits out of a set of millions could result in the loss of seconds of video. Anyone who’s tried to watch a scratched up DVD is well familiar with the results.

    But to try to flip back now to the discussion at hand. Saying that DNA contains information is like saying there is a book in the Library of Congress. I’m not an expert on DNA in any sense of the imagination – and I will certainly read DS’s references above. But I do understand that DNA contains hard-coded information. That the information it contains is highly compressed seems obvious given the sensitivity of its decoders to errors in the code. That the code must be copied, and the resulting errors corrected before being transcribed, supports this idea. Those of you who are more familiar with the code than I am can do a much better job summing up the coding in DNA, but I will say it is extraordinary – and it makes MPEG video compression look like child’s play in comparison.

    One commenter on this blog noted that we should be looking for hidden copyrights or messages from the designer. I feel this is a worthy area of discussion. What other coding features would we expect to see if a designer was involved? If the designer was just tinkering and had no end-game in mind, then it might be hard to imagine. But if there was an ultimate plan, then one would expect to see time markers (either elapsed time or codes that keep track of how many generations have come before), genes that could only be activated by certain environmental triggers, or self-modifying sequences that are triggered by such things as the environment or age (age in the sense of how old this particular DNA sequence is, not the age of the organism it’s in). Any other ideas?

    Did anyone notice the article “Unfinished Symphony” in Nature, 11 May, 2006 about the epigenic code? Is this yet another code structure embedded in/on DNA that would lead one to a design inference?

    Actually, depending on how the video was recorded, the color information is vastly reduced right from the word go. I was a video equipment tech 30 years ago. In NTSC video (standard U.S. broadcast TV – this is from memory so it might be off a little) it began with B/W only, no color. The b/w signal is carried on a 14 MHz amplitude-modulated subcarrier. Sound is carried on a 1.5 MHz frequency-modulated subcarrier. When color was introduced the broadcast had to keep working on b/w TVs, so to carry the color information they added a phase-modulated 3.57 MHz subcarrier, which b/w sets ignored since they had no phase-modulation detectors. 3.57 MHz is only about 25% of 14 MHz, so that’s where you get your reduced resolution for color in broadcast video, and they could get away with only 25% of the bandwidth required for the luminance (color is chrominance). In school they taught us that color is splashed on with a wide brush while b/w is drawn with a fine brush, because the eye is far more sensitive to brightness than to color discrimination. -ds

  51. Oops. I meant “epigenetic”, not “epigenic”.

  52. Zachriel: “Plants absorb light in the 400 to 700 nm range, or about 45% of the available light.”

    ds: “In fact most of the visible spectrum is absorbed and utilized.”

    Chlorophyll can only utilize certain colors of the visible spectrum (though other pigments can absorb other visible colors and pass the energy back to chlorophyll). But as I mentioned at the beginning of the exchange, visible light only constitutes about ~45% of available solar incident radiation.

    ds: “It’s not that the plant doesn’t absorb at infrared and ultraviolet but that chlorophyll doesn’t utilize it.”

    That’s correct. This excess radiation is typically reradiated as waste heat. Which I believe was Mung’s original query.

    You’re getting more and more wrong. This is the last exchange I’m allowing on this. Plants do not absorb 500nm radiation (green, visible light), so 400 to 700nm is wrong. I tried to tell you this in the first response. Evidently you just don’t get it. The 45% figure is correct, but this is 45% of ALL solar radiation reaching the ground, not just visible light. Of visible light, plants absorb it all except green. I also tried to tell you there’s more than just visible light to consider (ultraviolet and infrared are the major ones that reach the ground). I could spoonfeed this stuff to you if you’d stop making faces and spitting it out. -ds

  53. Probability distributions over cosmological constants, and “deep” arguments derived from them, are all cr*p. The number and values of the cosmological constants are a function of the very specific mathematical model of the universe we favor today, a model that will no doubt be rejected in the future. Maybe there will be a model without any dimensionless constants that can generate all the constants of the current model, and then it will be meaningless to talk about probability distributions over constants and how “likely” the current universe is.

    The whole multiverse idea is pseudoscientific nonsense because there’s no way to test it. There’s only one universe that can be observed, measured, and analyzed. Certain physical constants could have taken on a number of different values when the universe was picoseconds old. Minute variations in some of those would have made it impossible for life as we know it to exist. I take the fine tuning argument as an uninteresting given – the universe was evidently designed and only pseudoscientific infinite multiverse theories can begin to dispute it. -ds

  54. Given the current model of the universe it’s probably true that small variations would have made life (as we know it) impossible, but there’s no way of knowing right now what the possible variations are, so it seems meaningless to talk about “fine tuning” and possible “intelligent choice” of constants if we don’t know what the “tuning ranges” are of the so-called constants. We don’t even really know if the constants are really constant over time, that’s just an assumption. The current model of the universe appears to be deeply flawed given that cosmologists have to postulate on an almost daily basis different amounts of unobserved “dark matter” to make the observations fit the model.

    Is it meaningless to talk about a cake recipe if we don’t know the range of choices in the ingredients? -ds

  55. This is the last exchange I’m allowing on this.

    Hopefully that won’t exclude the following.

    This excess radiation is typically reradiated as waste heat. Which I believe was Mung’s original query.

    My query had to do with how efficiency was being defined for this particular discussion. I find unconvincing the argument that a process is inefficient merely because it makes use of only a small percentage of an available resource. To me, efficiency has to do with what the process does with the small percentage that it actually does something with.

    Let me provide an analogy. My refrigerator is full of food. It is all available to me at this moment. Just because I don’t empty my refrigerator at every meal it doesn’t mean I am being inefficient with the food that I eat.

  56. “I added that one commentator made the following observation: Imagine that a mathematician came up with a new theorem but had not proven it. A colleague challenges the theorem, saying that it doesn’t make sense to him. The first mathematician replies, “Just because you are personally incredulous about my theorem doesn’t make it false!” Would we expect this argumentation to convince the mathematics community of the validity of the theorem, and to base a new branch of mathematics upon it?” –Comment by GilDodgen — May 29, 2006 @ 8:42 am

    This is a poor analogy. A theorem is not a theorem without the proof. What you are referring to is properly called a conjecture.

    Since there’s no proof of evolution doesn’t it then follow that it is conjecture? -ds

  57. “The geneticist Theodosius Dobzhansky famously declared, “Nothing in biology makes sense except in the light of evolution.” One might add that nothing in biology makes sense in the light of intelligent design.” –Jerry A. Coyne, evolutionary biologist
    [emphasis mine]

    Wow! Did I read that right?! Nothing?! Nature is simply littered with the appearance of purpose, and NOTHING makes sense in the light of ID?!!! Unbelievable! And to think this comes from a professor of biology from the University of Chicago! We should really be encouraging these people to continue to speak out against ID; they’re digging the grave for their own “theory”!!! I love it!!!

    (Just got in from out of town. Sorry if this has already been skewered in the above comments, but I’m in a hurry and don’t have time to read through all of them right now. I just HAD to comment on this one right now, though!)

  58. To understand (somewhat) Coyne’s statement that “nothing in biology makes sense in the light of ID” it might help to know that he studies the speciation process – in particular, the accumulation of reproductive barriers between diverging populations. I read his and Orr’s text on the subject recently, and I would recommend it to anyone who wants to know more about the observational and experimental data evolutionary biologists struggle to account for. Remember that even if a new nonDarwinian paradigm ultimately emerges, it must not only show why Darwinism is *incorrect*, but, in order to be successful, it must also *account for*, via alternative means, all the empirical data previously dealt with via Darwinism. Speciation/reproductive barrier formation definitely makes the “to do” list. It is exceedingly difficult – though perhaps not impossible – to translate these phenomena into ID-language. One would (possibly) have to accept front-loading of the universe/life such that these reproductive barriers were engineered from the outset so that speciation would occur. Although I guess it would depend on how much you think your designer dictated in their design vs. how much was left to chance during the unfolding of the design.

    The way I see it, if ID ultimately seizes the reins of biological academia, there will be a long list of things on the docket for adequate explanation. For example: speciation and reproductive barrier formation, genetic disease, cancer, vestigial organs/limbs/bones, balancing selection (e.g. sickle-cell allele/malaria), and so on. These and many other concepts flow rather naturally from Darwinian theory, but would appear – at least to me – awkward to handle from a design perspective. So while Darwinism is repeatedly demonstrated to have numerous gaps – some more or less gaping than others, some more or less admitted than others – I think it’s important to remember that for a scientific/philosophic movement to ultimately be successful, it must not only show how the opposing theory is *wrong*, it must ultimately reveal *how* it came to be mistaken, as well as *account* for all the data previously “explained” by the opposing theory. If I remember my intellectual history correctly, it was St. Thomas Aquinas who championed this general method of argumentation. As wise and powerful today as it was then. Once the one blind man understands he has been examining an elephant, he should be able to articulate clearly to his blind friend just why it was that he, the friend, was under the false impression that they were adjacent to a large, snake-like creature…

    Dethroning Darwinism is just the very first step. What comes next? What’s on the agenda? If the answer entails metaphysical speculation about the possible attributes of the designer, as opposed to positive and fruitful research paradigms, then I predict the reign of ID will be very short-lived. The march of human progress has been marked by an ever-increasing intolerance of metaphysics.

  59. dougmoran: “One important aspect of video compression of this type that may relate to the discussion at hand is that the resulting data set, post compression, is highly random and sensitive to transmission errors. The code has been designed to be as insensitive as possible, but conceivably the loss of just a few bits out of a set of millions could result in the loss of seconds of video. Anyone who’s tried to watch a scratched up DVD is well familiar with the results”

    Compression algorithms like H.263 are highly optimized for video-over-telephone-line applications, where the error rate can be higher than normal. An H.263 bit-stream is highly resistant to casual bit errors during communication: the loss of a few bits causes only a few noticeable glitches in the video. The bitstream is designed in a very smart manner; when a few bits are lost in a block, the next successive block is detected and displayed. One frame contains many 16×16 sub-blocks, so what you may notice is only a glitch in one distorted 16×16 block somewhere in the frame.

    You can randomly delete, add, and duplicate portions of code from an H.263 bitstream and the result is still decodable and viewable. This closely resembles the way the data is arranged in DNA. The code in DNA should also be a bit-stream rather than data with a word-based arrangement. We can also speculate that it is compressed and encoded in a manner resembling the way video is encoded in a bit-stream.
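
    To make the resynchronization idea concrete, here is a toy sketch in Python (purely illustrative: the sync pattern, block format, and checksum are invented for the example and are not the actual H.263 syntax). The point is that a decoder which scans for sync markers loses one block to a burst of bit errors, not the whole stream:

        MARKER = b"\xff\xb1"  # invented sync pattern for this example

        def encode(blocks):
            out = bytearray()
            for payload in blocks:
                checksum = sum(payload) % 256
                out += MARKER + bytes([len(payload)]) + payload + bytes([checksum])
            return bytes(out)

        def decode(stream):
            # Yield (block_index, payload); corrupted blocks come back as None.
            i, n = 0, 0
            while True:
                i = stream.find(MARKER, i)
                if i < 0:
                    return
                j = i + len(MARKER)
                payload, ok = b"", False
                if j < len(stream):
                    length = stream[j]
                    payload = bytes(stream[j + 1:j + 1 + length])
                    ok = (len(payload) == length
                          and j + 1 + length < len(stream)
                          and stream[j + 1 + length] == sum(payload) % 256)
                yield n, (payload if ok else None)
                n += 1
                i = j  # resume at the next marker: one bad block, not a lost stream

        damaged = bytearray(encode([b"block-A", b"block-B", b"block-C"]))
        damaged[5] ^= 0x40  # flip a single bit inside the first block's payload
        for idx, payload in decode(damaged):
            print(idx, payload if payload is not None else "<glitch, block skipped>")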

  60. Great Ape

    One would (possibly) have to accept front-loading of the universe/life such that these reproductive barriers were engineered from the outset so that speciation would occur.

    Front loading is the only thing that makes sense of it all.

    Although I guess it would depend on how much you think your designer dictated in their design vs. how much was left to chance during the unfolding of the design.

    Perhaps there is a progressively limited range of options and the environment provides triggers. Start out by assuming that life on earth began with a phylogenetic stem cell having the potential within it to become all the species that ever lived. The external environment provides triggers or checkpoints for the next stage of diversification/specialization. Assume also that these are one-way transitions – no going backward. Thus phylogeny mirrors ontogeny, only on different time scales. Random mutation and natural selection are still at work, but natural selection serves almost exclusively in the role of maintaining the status quo. Random mutations that aren’t immediately seriously deleterious accumulate in a species’ genome until it goes extinct. Again, this mirrors the process of individual organisms aging and dying, only on different timescales.
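
    A toy sketch of the one-way, trigger-gated scheme just described (illustrative only; the stage and trigger names are invented, and the real thing would obviously be vastly richer):

        # Toy model of one-way, environment-triggered diversification.
        # Stage and trigger names are invented for illustration.
        STAGES = ["stem", "multicellular", "vertebrate", "mammal"]
        TRIGGERS = {"oxygen_rise": 1, "land_colonized": 2, "climate_shift": 3}

        class Lineage:
            def __init__(self):
                self.stage = 0  # index into STAGES; only ever increases

            def cue(self, trigger):
                target = TRIGGERS.get(trigger)
                # Checkpoint: advance only to the next stage,
                # never backward, never skipping ahead.
                if target == self.stage + 1:
                    self.stage = target
                return STAGES[self.stage]

        line = Lineage()
        print(line.cue("oxygen_rise"))     # stem -> multicellular
        print(line.cue("climate_shift"))   # ignored: can't skip a checkpoint
        print(line.cue("land_colonized"))  # multicellular -> vertebrate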

    The way I see it, if ID ultimately seizes the reins of biological academia, there will be a long list of things on the docket for adequate explanation. For example: speciation and reproductive barrier formation, genetic disease, cancer, vestigial organs/limbs/bones, balancing selection (e.g. sickle-cell allele/malaria), and so on.

    All elegantly explained by the phylogenetic stem cell described above. There are three questions raised by this.

    1) Where did the initial phylogenetic stem cell come from?

    2) How were unexpressed phylogenetic stem cell potentials preserved until expression?

    3) Wouldn’t the genome have to be impractically large?

    On the first question, let’s say that what’s good for the modern synthesis is good for the front loading theory, and say the origin of the first living cell is outside the scope of the theory for now. We’ll just leave open the option that it might have been designed and that, in any case, it probably didn’t originate on the earth in the scant time between the earth’s formation and the first appearance of life.

    On the second question, if you ask any programmer or hardware designer concerned with fidelity in data copying, he will tell you that you can trade off the speed and/or resources required to copy data for whatever degree of fidelity it takes to meet your requirements. There are many error-detection algorithms that may be employed, and none of them even approaches the complexity of the processes we already observe in cellular machinery and programming. Error correction proper requires overly large amounts of resources, but that expense could be skipped by simply aborting the copy if an error is detected.
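
    As a minimal sketch of that detect-and-abort strategy (illustrative Python; the function name and the use of SHA-256 are arbitrary choices for the example, not a claim about cellular mechanisms):

        import hashlib
        from typing import Optional

        def checked_copy(data: bytes) -> Optional[bytes]:
            # Copy data; verify with a digest; abort (return None) on mismatch.
            # Detection costs one hash pass. Correction would need redundant
            # encoding, which is the more expensive path being avoided here.
            expected = hashlib.sha256(data).digest()
            copy = bytes(data)  # stand-in for a real, possibly error-prone copy step
            if hashlib.sha256(copy).digest() != expected:
                return None  # abort rather than attempt repair
            return copy

        template = b"ACGTACGTTAGC"
        result = checked_copy(template)
        print("copy kept" if result is not None else "copy aborted")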

    On the third question, it might not be as gigantic as you think when organized into reusable component libraries. Proteins would be library functions in this case. Discounting trivial differences that don’t drastically compromise function, there really aren’t all that many different proteins used by living things; the key is which are used, when, and how. Different body plans as well can be modularized and stored in less space than one might think. Now, keeping in mind that we’ve only measured the genome size of a tiny fraction of extant organisms, we’ve already found a single free-living cell (Amoeba dubia) that contains 200 times as much DNA as a human cell. Is that big enough? Maybe. But even if it isn’t, we don’t know the largest practical size – that’s just the upper bound in the small number of organisms we’ve tested, and it’s freaking huge – at least big enough to specify 200 phyla as complicated as mammals, and that with no special attempt made to reduce the storage requirements by consolidating common elements into code/data libraries.
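
    For scale, a back-of-envelope calculation (using the commonly cited figure of roughly 3.2 billion base pairs for the human genome, two bits per base, and the ~200x factor just mentioned):

        # Back-of-envelope genome storage arithmetic (commonly cited figures;
        # the 200x factor is the Amoeba dubia estimate mentioned above).
        HUMAN_BP = 3.2e9     # ~3.2 billion base pairs in the human genome
        DUBIA_FACTOR = 200   # reported at roughly 200x the human genome
        BITS_PER_BASE = 2    # four symbols (A, C, G, T) need 2 bits each

        human_mb = HUMAN_BP * BITS_PER_BASE / 8 / 1e6
        dubia_gb = human_mb * DUBIA_FACTOR / 1000
        print(f"human genome: ~{human_mb:.0f} MB raw")   # ~800 MB
        print(f"A. dubia:     ~{dubia_gb:.0f} GB raw")   # ~160 GB

    On those figures a human genome is about 800 MB of raw capacity and Amoeba dubia about 160 GB, which gives a sense of the available headroom.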

    These and many other concepts flow rather naturally from darwinian theory, but would appear–at least to me–awkward to handle from a design perspective.

    As I have outlined, they flow even more naturally from a designed, front loaded phylogenetic stem cell.

    So while darwinism is repeatedly demonstrated to have numerous gaps–some more or less gaping than others, some more or less admitted than others–I think it’s important to remember that for a scientific/philosophic movement to ultimately be successful, it must not only show how the opposing theory is *wrong*, it must ultimately reveal *how* it came to be mistaken, as well as *account* for all the data previously “explained” by the opposing theory.

    How it came to be wrong is simple enough. It’s a Victorian theory that sounded good in Victorian times, when the universe was generally acknowledged to be infinitely old and cells were thought to be simple blobs of protoplasm that could arise spontaneously. Since then, elephant-sized problems with the theory have remained, along with an array of ad hoc explanations of how it could possibly all work; instead of filling one book on the preservation of favored races and another on Mendelian inheritance, it now sprawls across an amount of information and a number of scientific specialties so large it boggles the mind.

  61. ID and natural selection aren’t mutually exclusive. In fact, natural selection is such a straightforward principle that some have called it a tautology. The rejection of darwinism typical of IDists doesn’t entail the position that natural selection accounts for nothing at all, only that it doesn’t account for the generation of complex biological information. I can quite imagine that a thoroughly ‘non-darwinian’ design-based paradigm would still call on natural selection to explain quite a lot, and that, whatever happens, Darwin will remain a recognized heavyweight of science’s history.

    For now the claims are simply that a) design inferences can be empirically sound, b) such inferences can be soundly made about certain biological features, and c) this is as legitimately ‘scientific’ (whatever that might mean) as any other empirically-based inference. These are, essentially, the claims that IDists are making and that ID critics are disputing.

    That aside, I think it’s a mistake to expect that a mature ID-based biology paradigm would operate anything like the way the darwinian paradigm does. Darwin envisaged a process of evolution that continued through the present; ID is open to the possibility of such a process but does not mandate it. For Darwin, the history of life is essentially a question of biology, for ID it’s (IMO) more a question of history. For Darwin therefore all the processes of life’s origin should be accessible to us, for ID this simply may not be the case; the reality of historical enquiry is that we can say only as much as the evidence permits us to say.

  62. Great_ape,

    A well-written and thought-out comment. I have not read the rest of the comments on this post, since it has been a busy weekend, but I thought it might be best to answer your post without knowing what came before. My basic observation is that ID proponents and Darwinists talk past each other. For example,

    Evolution is a four-tier theory.

    The first tier is the origin of life, or how a cell and its DNA, RNA and proteins arose. Quite a sticky issue, with no sensible answer from science. Lots of speculation and wishful thinking, but nothing that makes sense. A high percentage of ID concerns are in this tier.

    The second tier is how a one-cell organism formed multi-cell organisms, and this includes how such complex organs as the eye arose as these multi-cell organisms arose. How did brains, limbs, digestive systems, neurological systems arise? These are immensely complicated questions but get little discussion beyond “it all happened over time.” We have all seen the “it must have evolved” comment. This is also an important area for ID but not for Darwinists. Irreducible complexity operates in this tier.

    The third tier is the one that gets the most debate in the popular press: how did one species arise from another species when there are substantial functional differences between them? How did birds get wings to fly, how did land creatures develop air-breathing systems, how did man get opposable thumbs or such a big brain, and why do children take so long to develop? There is lots of speculation but no hard evidence. An occasional fossil is brought up to show the progression, ignoring the fact that there had to be thousands if not millions of other steps in these progressions, of which only a handful have been found. Here ID and the Darwinists are sometimes on common ground.

    The fourth tier is what Darwin observed on his trip on the Beagle, and it is where most of the examples in your comment sit: micro-evolution, which can be explained by basic genetics, occasional mutations, environmental pressures and, yes, natural selection. Few disagree with this fourth tier, including those who call themselves Intelligent Design proponents, yet this is where all the evidence is that is used to persuade everyone that Darwinism is a valid theory. The evidence in this tier is used to justify the first three tiers because the materialist needs all four tiers to justify his philosophy of life, but the relevance of tier 4 evidence to the other tiers is scant at best.

    So to sum up, my experience is that ID concentrates on tiers 1 and 2, a little on tier 3, and is not concerned at all with tier 4. The only thing in your comments that was not tier 4 or micro-evolution was the vestigial organs/limbs/bones. It seems the main defense of Darwinism these days is not the evidence of the theory itself but the shortcomings of the designer or the lack of perfection of the design.

    Thank you again for your comment. I learn every time I read what you write and wish there would be more like you on this blog to challenge ID proponents like myself.

    Good response, Jerry. You and Great Ape are both exemplary! -ds

  63. There are many logical and empirical objections I could think of against the “phylogenetic stem cell theory”, but a very compelling one to me is that natural selection would quickly destroy the hopeful monster. “simply aborting the copy if an error is detected” represents a huge waste of resources. An anti-abortion mutation (a pro-life mutation if you like) that would disable the very genes that cause abortions would have an enormous advantage. It would produce many more copies and therefore rapidly increase in frequency until the abortion monster was gone forever.

    Aborting the copy would be done during meiosis. Not really that much waste of resources. There shouldn’t be any more hopeful monsters than there are when human stem cells diversify. Think of the parallels with ontogeny. There are ways of ensuring that the abortion mechanism can’t be disabled. Plus it wouldn’t be an advantage, as random mutation & natural selection serve only to maintain the status quo and eventually cause extinctions; they are of no use in creating fitter organisms. I don’t think you’re getting into the spirit of opening up to engineered solutions, and/or you don’t have enough of the appropriate systems-design knowledge to know what solutions exist. Plus you’re still assuming that RM+NS is generally capable of creative evolution. -ds

  64. Jerry

    Solid post all the way around. I would only take issue with a fairly minor point.

    You said: “The only thing in your comments that was not tier 4 or micro-evolution was the vestigial organs/limbs/bones. It seems the main defense of Darwinism these days is not the evidence of the theory itself but the shortcomings of the designer or the lack of perfection of the design.”

    I would just point out that vestigial body parts are used as evidence in favor of Darwinian mechanisms because we would expect to see them if Darwin was right. Generally, scientists don’t make any of this an argument against design, because if there is a designer there is no way to know its intentions; hence any speculation about the quality or method of the design is meaningless. I have read such anti-design arguments used by evolutionists in passing moments of humor, but not in serious discussion.

  65. “Aborting the copy would be done during meiosis. Not really that much waste of resources.”
    Meiosis? But early forms of life wouldn’t have any meiosis, would they? So it would have to be during mitosis, and there would be considerable loss.

    Well, let’s suppose you’re right. But this kind of abortion doesn’t happen in contemporary organisms as far as I know. So when did it stop and why?

    But early forms of life wouldn’t have any meiosis, would they?

    Based upon what? The front loading theory begins with a complex genome. Certainly meiosis would then be possible right from the word go. Organisms that duplicate solely via mitosis usually reproduce in very large numbers, so even a high percentage of abortions shouldn’t be much of a problem. The survivors will just eat the abortions. Waste not, want not. Also, if all organisms are hobbled equally by the abortion mechanism it doesn’t matter, as the playing field is level.

    But this kind of abortion doesn’t happen in contemporary organisms as far as I know. So when did it stop and why?

    Does diversification continue forever in ontogeny or does it stop when the program is complete? Large scale evolution may no longer be in progress. Maybe we’re the end of the line. The terminal product of evolution. Personally, I believe we’re at a paradigm changeover in evolution. Preprogrammed organic evolution is ended. Technological evolution driven by rational man (which proceeds orders of magnitude faster) is beginning.

  66. I appreciate the positive and constructive responses to my post. Perhaps I’m just naive in my youth, but I think people can disagree on a number of issues and yet still manage to have a fruitful dialogue that everyone benefits from.

    ds, something akin to your phylogenetic stem cell idea would almost certainly need to be true for a front-loading scenario to hold. How much information such a phylogenetic stem cell needs, however, would likely be contingent on the extent of the designer’s abilities. If, for instance, the designer also designed the physical universe itself, then much of the information could conceivably be stored implicitly in the fine-tuning of numerous physical constants, etc. In software terminology, you could say the OS was hard-coded in the underlying firmware. This would take some of the information burden off genomes. Just a thought.

    The amoeba genome truly mystifies me at this point. If it turns out to hold information-rich sequence (as opposed to, say, a massive transposable-element explosion), it will make a lot of us stop and scratch our heads. Right now the general feeling seems to be “probably a bunch of repetitive crap in there taking up space…someone ought to check that out…” Unfortunately, the amoeba–and the fire-bellied newt, for that matter–aren’t high on the sequencing list. (At least since I last checked.) Even the humble crayfish has a genome twice the size of our own (true for P. clarkii, at least). Of course, it seems to have been around for about 250 million years in more or less its present form, so maybe they have a trick or two that we don’t know about.

    This sort of old lineage phenomenon, however, does appear to contradict one aspect of your (ds) phylogenetic stem cell theory. If I understood correctly, you held that natural selection can only serve to preserve information/the status quo, and that lineages eventually accumulate deleterious mutations that ultimately lead to their extinction. There are some lineages, however, that seem to have persisted quite a long time. Signatures of common descent (shared viral and transposon insertions, for example) suggest that some mammalian clades extend back well over 100 million years. And then there are those crayfish, which seem to have been crawling around for over 200 million years. So I don’t necessarily see evidence for an unavoidable entropic death by mutation. Although maybe we’re thinking on different timescales… Also, are you arguing that if a lineage yields an offshoot lineage that expresses some latent aspect of the phylogenetic stem cell, the entropic clock is reset for the new clade?
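
    (An aside on that shared-insertion logic, for readers unfamiliar with it: a toy presence/absence table shows how such markers delineate nested clades. The species-to-marker data below are entirely made up for illustration.)

        # Toy presence/absence table of insertion markers (data made up).
        insertions = {
            "human":   {"ERV-1", "ERV-2", "LINE-7"},
            "chimp":   {"ERV-1", "ERV-2", "LINE-7"},
            "mouse":   {"ERV-1", "LINE-7"},
            "opossum": {"ERV-1"},
        }

        # Under the assumption that identical insertion sites are too
        # improbable to arise independently, each marker delineates a clade:
        # every species carrying it shares the ancestor in which the
        # insertion occurred.
        all_markers = set().union(*insertions.values())
        for marker in sorted(all_markers):
            clade = sorted(s for s, ins in insertions.items() if marker in ins)
            print(f"{marker}: clade of {clade}")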

    As for why darwinists believe as they do, I think the answer will depend a lot upon just how you define Darwinism or “darwinian fundamentalism” as a position, which in turn is related to Jerry’s thoughtful post. I basically agree with the tiers as Jerry laid them out, and I also agree that we often talk past each other because we’re unclear at which level we are agreeing/disagreeing. (So much depends on what we *think* each other means by “evolutionist,” “ID-proponent,” “darwinist,” “darwinian fundamentalist,” yet these are all used rather loosely.) As Jerry indicated, many of the ID folks are comfortable with tier 4, some with 3, very few with 2, and obviously none with 1. Unfortunately, yours is a big tent, so while an average ID proponent and an evolutionist could have a reasonable conversation concerning tiers 3 & 4, an ID-YEC might interject that common descent is unsubstantiated hogwash and that “evilution,” as a whole, has no redeeming scientific qualities whatsoever. So I think the big tents, on both sides, add to the general confusion as to who holds what position exactly. (I know I would have ejected Dawkins and Dennett from my tent long ago if given the opportunity.) Ultimately the question is: is the ID movement against the overextension of darwinism to Jerry’s tiers 1 & 2, or, as some in your tent would appear to have it, against evolution itself? Those are two very different things, yet I hear both messages at various times.

    Personally, it is becoming increasingly clear to me–and yes, it should have been obvious all along–that the extension of Darwinism to tiers 1 & 2 is far more a *philosophical* position than a scientific one. It is a philosophical position I am sympathetic to, but a philosophical position nevertheless. I think that if more biologists realized and/or admitted that RM+NS for tiers 1 & 2 is a philosophical stance, we’d have a lot more common ground to work from.

  67. great_ape

    If, for instance, the designer also designed the physical universe itself, then much of the information could conceivably be stored implicitly in the fine-tuning of numerous physical constants

    True. We really don’t know whether the universe is entirely deterministic or not. It appears to be deterministic at scales greater than the quantum but indeterminate below it. We don’t have a complete theory of everything, and some physicists believe quantum uncertainty is an illusion caused by hidden variables. If the universe is entirely deterministic, there’s no such thing as random, and everything that happened couldn’t have happened any other way – it was all set up to play out this way at the instant of the big bang, and maybe before that.

    That said, there still may be a loophole in a deterministic universe – free will in sentient living things. This is going off the science beat, but imagine for a moment that you’re an omniscient entity in a deterministic universe. Wouldn’t it suck to know everything that’s going to happen? To never be surprised? That would be awfully boring, IMO. Perhaps that entity would invent a way to introduce non-determinism into the universe? Just a thought. Possibly a thought that was determined 14 billion or more years ago! Or possibly not!

    The amoeba genome truly mystifies me at this point.

    If nothing else it demonstrates that organisms can survive and prosper in a competitive world while carrying around a gigantic genome that their competitors aren’t burdened with. How long they can survive is a valid question; maybe Amoeba dubia is on the fast track toward extinction because of it. On the other hand, very large genomes seem to be over-represented among so-called living fossils. That tends to support the front loading hypothesis, as one might expect that, if evolution isn’t finished, some extant organisms still carry the potential for further diversity. Or maybe they’re backups in case something catastrophic happens and evolution has to start over again.

    Right now the general feeling seems to be “probably a bunch of repetitive crap in there taking up space…someone ought to check that out…”

    Probably. But it does need to be checked out. The other thing to keep in mind is that only 10% of the estimated 10 million species in the world have been named; very few of the named species have had their DNA weighed; far fewer than that have had their genomes sequenced; the ones that have been sequenced are weighted toward smaller genomes; and even in the smallest genomes we barely have the first clue about the working details. So the “checking it out” is no small task.

    Of course, it seems to have been around for about 250 million years in more or less its present form, so maybe they have a trick or two that we don’t know about.

    That’s nothing. Some pine trees and water lilies have genomes scores of times larger than our own.

    This sort of old lineage phenomenon, however, does appear to contradict one aspect of your (ds) phylogenetic stem cell theory.

    Not really. I said it eventually leads to extinction; some lineages persist longer than others, and I didn’t place any bound on the longest possible persistence. The fact remains that some 99% of all species that ever lived are extinct. There are likely many contributing factors, but I don’t think they can be confidently limited to the external environment. Senescence leading to extinction appears to be a cause, possibly the leading cause when something else doesn’t wipe a species out first.

    Also, are you arguing that if a lineage yields an offshoot lineage that expresses some latent aspect of the phylogenetic stem cell, the entropic clock is reset for the new clade?

    I didn’t mention it but it seems like there would be a mechanism in place to preserve unexpressed potentials as long as the course of evolution had not terminated and/or for disaster recovery.

  68. …photosynthesis has an efficiency of roughly 1%…

    More like 31% for red light and 20% for blue light.
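
    For what it’s worth, figures in that range fall out of a standard back-of-envelope calculation comparing the energy stored per CO2 fixed with the energy of the photons absorbed (the ~9-photons-per-CO2 and ~2870 kJ/mol-glucose inputs below are textbook ballpark values, chosen here for illustration):

        # Back-of-envelope photosynthetic efficiency per absorbed photon.
        # Assumptions: ~9 photons per CO2 fixed; ~2870 kJ stored per mole
        # of glucose (6 CO2), i.e. ~478 kJ per mole CO2.
        H, C, N_A = 6.626e-34, 2.998e8, 6.022e23

        def photon_kj_per_mol(wavelength_nm):
            return H * C * N_A / (wavelength_nm * 1e-9) / 1000

        STORED_PER_CO2 = 2870 / 6   # kJ per mole CO2 fixed
        PHOTONS_PER_CO2 = 9

        for color, nm in (("red", 680), ("blue", 450)):
            spent = PHOTONS_PER_CO2 * photon_kj_per_mol(nm)
            print(f"{color} ({nm} nm): {STORED_PER_CO2 / spent:.0%}")
        # red (680 nm): 30%;  blue (450 nm): 20%

    Those are per-absorbed-photon numbers; the oft-quoted ~1% usually refers to a whole plant’s conversion of total incident sunlight, so the two figures aren’t necessarily in conflict.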
