Uncommon Descent Serving The Intelligent Design Community

The argument from incredulity vs. The argument from gullibility


On another blog, the following quotes from Intelligent Thought: Science versus the Intelligent Design Movement are listed approvingly:

“Evolutionary biology certainly hasn’t explained everything that perplexes biologists, but intelligent design hasn’t yet tried to explain anything at all.” –Daniel C. Dennett, Philosopher

“Not only is ID markedly inferior to Darwinism at explaining and understanding nature but in many ways it does not even fulfill the requirements of a scientific theory.” –Jerry A. Coyne, evolutionary biologist

“The geneticist Theodosius Dobzhansky famously declared, “Nothing in biology makes sense except in the light of evolution.” One might add that nothing in biology makes sense in the light of intelligent design.” –Jerry A. Coyne, evolutionary biologist

“The supernatural explanation fails to explain because it ducks the responsibility to explain itself.” —Richard Dawkins, evolutionary biologist

“What counts as a controversy must be delineated with care, as we want students to distinguish between scientific challenges and sociopolitical ones.” —Marc D. Hauser, evolutionary psychologist

“Incredulity doesn’t count as an alternative position or critique.” —Marc D. Hauser, evolutionary psychologist

Leaving aside ID, the subtext of these quotes is, “We’ve got a theory that has vast gaping holes, we don’t have a clue how the theory might fill the holes, but we still believe the theory accounts for what actually happened.” To challenge this is to be guilty of “an argument from incredulity,” in other words, of refusing to believe despite overwhelming evidence. Isn’t it rather that to accept this is to be guilty of “an argument from gullibility,” of believing despite the overwhelming absence of evidence?

Comments
Jerry @62, Well thought out summary. Mung @68, So apparently the majority of plant evolution was from a period when the sun was red. -Q (mischievous smile)Querius
May 5, 2022 at 10:47 AM PDT
...photosynthesis has an efficiency of roughly 1%...
More like 31% for red light and 20% for blue light.Mung
June 4, 2006 at 10:36 AM PDT
great_ape

If, for instance, the designer also designed the physical universe itself, then much of the information could conceivably be stored implicitly in the fine-tuning of numerous physical constants

True. We really don't know if the universe is entirely deterministic or not. It appears to be deterministic at scales greater than quantum but indeterminate below it. We don't have a complete theory of everything, and many physicists believe quantum uncertainty is an illusion caused by hidden variables. Thus if the universe is entirely deterministic there's no such thing as random, and everything that happened couldn't have happened any other way - it was all set up to play out this way at the instant of the big bang and maybe before that. That said, there still may be a loophole in a deterministic universe - free will in sentient living things. This is going off the science beat, but imagine for a moment that you're an omniscient entity in a deterministic universe. Wouldn't it suck to know everything that's going to happen? To never be surprised? That would be awfully boring IMO. Perhaps that entity would invent a way to introduce non-determinism into the universe? Just a thought. Possibly a thought that was determined 14 billion or more years ago! Or possibly not!

The amoeba genome truly mystifies me at this point.

If nothing else it demonstrates ipso facto that organisms can survive and prosper in a competitive world while carrying around a gigantic genome that competitors aren't burdened with. How long they can survive is a valid question. Maybe Amoeba dubia is on the fast track toward extinction because of it. On the other hand, very large genomes seem to be overrepresented in so-called living fossils. The living-fossils-with-huge-genomes factoid tends to support the front loading hypothesis, as one might expect that if evolution isn't finished, some extant organisms still carry the potential for further diversity. Or maybe they're backups in case something catastrophic happens and evolution has to start over again.

Right now the general feeling seems to be "probably a bunch of repetitive crap in there taking up space…someone ought to check that out…"

Probably. But it does need to be checked out. The other thing to keep in mind is that only 10% of the estimated 10 million species in the world have been named, very few of the named species have had their DNA weighed, far fewer than that have had their genomes sequenced, the ones that have been sequenced are weighted toward smaller genomes, and even in the smallest genomes we barely have the first clue about the working details. So the "checking it out" is no small task.

Of course, it seems to have been around about 250myrs in more or less its present form so maybe they have a trick or two that we don't know about.

That's nothing. Some pine trees and water lilies have genomes scores of times larger than our own.

This sort of old lineage phenomenon, however, does appear to contradict one aspect of your (ds) phylogenetic stem cell theory.

Not really. I said eventually leads to extinction. Some persist longer than others. I didn't place any bound on what's the longest possible persistence. The fact remains that some 99% of all species that ever lived are extinct. There are likely many factors that contribute, but I don't think the factors can be confidently limited to external environment. Senescence leading to extinction appears to be a cause, possibly the leading cause if something else doesn't wipe them out first.

Also, are you arguing that if a lineage yields an offshoot lineage that expresses some latent aspect of the phylogenetic stem cell, the entropic clock is reset for the new clade?

I didn't mention it, but it seems like there would be a mechanism in place to preserve unexpressed potentials as long as the course of evolution had not terminated and/or for disaster recovery.DaveScot
May 31, 2006 at 03:45 AM PDT
I appreciate the positive and constructive responses to my post. Perhaps I'm just naive in my youth, but I think people can disagree on a number of issues and yet still manage to have a fruitful dialogue that everyone benefits from.

ds, something akin to your phylogenetic stem cell idea would almost certainly need to be true for a front-loading scenario to hold. How much information such a phylogenetic stem cell needs, however, would likely be contingent on the extent of the designer's abilities. If, for instance, the designer also designed the physical universe itself, then much of the information could conceivably be stored implicitly in the fine-tuning of numerous physical constants, etc. In software terminology, you could say the OS was hard-coded in the underlying firmware. This would take some of the information burden off genomes. Just a thought.

The amoeba genome truly mystifies me at this point. If it turns out to hold information-rich sequence (as opposed to a massive transposable element explosion, for example) it will make a lot of us stop and scratch our heads. Right now the general feeling seems to be "probably a bunch of repetitive crap in there taking up space...someone ought to check that out..." Unfortunately, the amoeba--and the fire-bellied newt, for that matter--aren't high on the sequencing list. (At least since last I checked.) Even the humble crayfish has a genome twice the size of our own (true for P. clarkii, at least). Of course, it seems to have been around about 250myrs in more or less its present form so maybe they have a trick or two that we don't know about.

This sort of old lineage phenomenon, however, does appear to contradict one aspect of your (ds) phylogenetic stem cell theory. If I understood correctly, you held that natural selection can only serve to preserve information/status quo and that, eventually, lineages accumulate deleterious mutations that ultimately lead to their extinctions. There are some lineages, however, that seem to have persisted quite a long time. Signatures of common descent (e.g. shared viral and transposon insertions, for example) suggest that some mammalian clades extend back well over 100myrs. And then there are those crayfish, which seem to have been crawling around for over 200myrs. So I don't necessarily see evidence for an unavoidable entropic death by mutation. Although maybe we're thinking in different timescales... Also, are you arguing that if a lineage yields an offshoot lineage that expresses some latent aspect of the phylogenetic stem cell, the entropic clock is reset for the new clade?

As for why darwinists believe as they do, I think the answer will depend a lot upon just how you define Darwinism or "darwinian fundamentalism" as a position, which in turn is related to Jerry's thoughtful post. I basically agree with the tiers as Jerry laid them out, and I also agree we often talk past each other because we're unclear at which level we are agreeing/disagreeing. (So much depends on what we *think* each other means by "evolutionist," "ID-proponent," "darwinist," "darwinian fundamentalist," yet these are all used rather loosely.) As Jerry indicated, many of the ID folks are comfortable with tier 4, some with 3, very few with 2, and obviously none with tier 1.
Unfortunately, you're a big tent, so while an average ID proponent and an evolutionist could have a reasonable conversation concerning tiers 3 & 4, an ID-YEC might interject that common descent is unsubstantiated hogwash and "evilution," as a whole, has no redeeming scientific qualities whatsoever. So I think the big tents, on both sides, add to the general confusion as to who holds what position exactly. (I know I would have ejected Dawkins and Dennett from my tent long ago if given the opportunity.)

Ultimately the question is: is the ID movement anti-the-overextension-of-darwinism to Jerry's tiers 1&2 or, as some in your tent would appear to have it, anti-evolution itself? Those are two very different things, yet I hear both messages at various times. Personally, it is becoming increasingly clear to me--and yes, it should have been obvious all along--that the extension of Darwinism to tiers 1 & 2 is far more a *philosophical* position than a scientific one. It is a philosophical position I am sympathetic to, but a philosophical position nevertheless. I think if more biologists realized and/or admitted RM+NS for 1&2 is a philosophical stance, we'd have a lot more common ground to work from.great_ape
May 30, 2006 at 10:28 PM PDT

"Aborting the copy would be done during meiosis. Not really that much waste of resources."
Meiosis? But early forms of life wouldn't have any meiosis, would they? So it would have to be during mitosis, and there would be considerable loss.

Well, let's suppose you're right. But this kind of abortion doesn't happen in contemporary organisms as far as I know. So when did it stop and why?

But early forms of life wouldn't have any meiosis, would they?

Based upon what? The front loading theory begins with a complex genome. Certainly meiosis would then be possible right from the word go. Organisms that duplicate solely via mitosis usually reproduce in very large numbers, so even a high percentage of abortions shouldn't be much of a problem. The survivors will just eat the abortions. Waste not, want not. Also, if all organisms are hobbled equally by the abortion mechanism it doesn't matter, as the playing field is level.

But this kind of abortion doesn't happen in contemporary organisms as far as I know. So when did it stop and why?

Does diversification continue forever in ontogeny or does it stop when the program is complete? Large scale evolution may no longer be in progress. Maybe we're the end of the line. The terminal product of evolution. Personally, I believe we're at a paradigm changeover in evolution. Preprogrammed organic evolution is ended. Technological evolution driven by rational man (which proceeds orders of magnitude faster) is beginning. Raevmo
May 30, 2006 at 03:00 PM PDT
Jerry Solid post all the way around. I would only take issue with a fairly minor point. You said: "The only thing in your comments that was not tier 4 or micro-evolution was the vestigial organs/limbs/bones. It seems the main defense of Darwinism these days is not the evidence of the theory itself but the shortcomings of the designer or the lack of perfection of the design." I would just point out that vestigial body parts are used as evidence in favor of Darwinian mechanisms because we would expect to see them if Darwin was right. Generally, scientists don't make it an argument against design because if there is a designer there is no way to know its intentions, hence any speculation about quality or method of design is meaningless. I have read such anti-design arguments used by evolutionists in passing moments of humor but not in serious discussion.ftrp11
May 30, 2006 at 12:40 PM PDT

There are many logical and empirical objections I could think of against the "phylogenetic stem cell theory", but a very compelling one to me is that natural selection would quickly destroy the hopeful monster. "simply aborting the copy if an error is detected" represents a huge waste of resources. An anti-abortion mutation (a pro-life mutation if you like) that would disable the very genes that cause abortions would have an enormous advantage. It would produce many more copies and therefore rapidly increase in frequency until the abortion monster was gone forever.
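The speed with which such an advantageous mutation would spread can be made concrete with a textbook one-locus selection model. The Python sketch below is purely illustrative, with made-up parameter values; it is not anyone's model from this thread.

```python
def sweep(p0=1e-4, s=0.05, generations=400):
    """Deterministic haploid selection: an allele with relative fitness 1+s
    changes frequency each generation as p' = p(1+s) / (1 + p*s)."""
    p, trajectory = p0, []
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
        trajectory.append(p)
    return trajectory

freqs = sweep()
for gen in (100, 200, 300, 400):
    print(f"generation {gen}: frequency {freqs[gen - 1]:.3f}")
```

With even a modest 5% fitness advantage, the allele climbs from one in ten thousand to near fixation within a few hundred generations, which is the sense in which a mutation disabling a costly abortion mechanism would "rapidly increase in frequency."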

Aborting the copy would be done during meiosis. Not really that much waste of resources. There shouldn't be any more hopeful monsters than there are when human stem cells diversify. Think of the parallels with ontogeny. There are ways of ensuring that the abortion mechanism can't be disabled. Plus it wouldn't be an advantage, as random mutation & natural selection only serve to maintain the status quo and eventually cause extinctions, and are of no use in creating fitter organisms. I don't think you're getting in the spirit of opening up to engineered solutions and/or you don't have enough of the appropriate systems design knowledge to know what solutions exist. Plus you're still thinking that RM+NS is generally capable of creative evolution. -ds

Raevmo
May 30, 2006 at 07:54 AM PDT

Great_ape,

A well written and thought out comment. I have not read the rest of the comments on this post since it has been a busy weekend, but I thought it might be best to answer your post without knowing what came before. My basic observation is that ID proponents and Darwinists talk past each other. For example,

Evolution is a 4 tier theory.

The first tier is the origin of life, or how a cell and DNA, RNA, and proteins arose. Quite a sticky issue with no sensible answer from science. Lots of speculation and wishful thinking but nothing that makes sense. A high percentage of ID concerns are in this tier.

The second tier is how a one-cell organism formed multi-cell organisms, and this includes how such complex organs as the eye arose as these multi-cell organisms arose. How did brains, limbs, digestive systems, and neurological systems arise? These are immensely complicated but get little discussion except that it all happened over time. We have all seen the "it must have evolved" comment. This is also an important area for ID but not for Darwinists. Irreducible complexity operates in this tier.

The third tier is the one that gets the most debate in the popular press, and that is how one species arose from another species when there are substantial functional differences between them. How did birds get wings to fly, how did land creatures develop oxygen-breathing systems, how did man get opposable thumbs or such a big brain, and why such a long time for children to develop? There is lots of speculation but no hard evidence. An occasional fossil is brought up to show the progression, ignoring the fact that there had to be several thousand if not millions of other steps in these progressions, of which only a handful have been found. Here ID proponents and Darwinists are sometimes on common ground.

The fourth tier is what Darwin observed on his trip on the Beagle and what most of your examples are in your comment, namely micro-evolution, which can be explained by basic genetics, occasional mutations, environmental pressures and, yes, natural selection. Few disagree on this fourth tier, including those who call themselves Intelligent Design proponents, yet this is where all the evidence is that is used to persuade everyone that Darwinism is a valid theory. The evidence in this tier is used to justify the first three tiers because the materialist needs all four tiers to justify their philosophy of life, but the relevance of the evidence in tier 4 for the other tiers is scant at best.

So to sum up, my experience is that ID concentrates on tiers 1 and 2, a little on tier 3, and is not concerned at all with tier 4. The only thing in your comments that was not tier 4 or micro-evolution was the vestigial organs/limbs/bones. It seems the main defense of Darwinism these days is not the evidence for the theory itself but the shortcomings of the designer or the lack of perfection of the design.

Thank you again for your comment. I learn every time I read what you write and wish there would be more like you on this blog to challenge ID proponents like myself.

Good response, Jerry. You and Great Ape are both exemplary! -ds jerry
May 30, 2006 at 06:56 AM PDT
ID and natural selection aren't mutually exclusive. In fact, natural selection is such a straightforward principle that some have called it a tautology. The rejection of darwinism typical of IDists doesn't entail holding the position that natural selection accounts for nothing at all, simply that it doesn't account for the generation of complex biological information. I can quite imagine that a thoroughly 'non-darwinian' design-based paradigm will still call on natural selection to explain quite a lot, and that whatever happens Darwin will remain a recognized heavyweight of science's history.

For now the claims are simply that a) design inferences can be empirically sound, b) such inferences can be soundly made about certain biological features, and c) that this is as legitimately 'scientific' (whatever that might mean) as any other empirically-based inference. These are, essentially, the claims that IDists are making and that ID critics are disputing.

That aside, I think it's a mistake to expect that a mature ID-based biology paradigm would operate anything like the way the darwinian paradigm does. Darwin envisaged a process of evolution that continued through the present; ID is open to the possibility of such a process but does not mandate it. For Darwin, the history of life is essentially a question of biology; for ID it's (IMO) more a question of history. For Darwin, therefore, all the processes of life's origin should be accessible to us; for ID this simply may not be the case. The reality of historical enquiry is that we can say only as much as the evidence permits us to say.BenK
May 30, 2006 at 05:53 AM PDT

Great Ape

One would (possibly) have to accept front-loading of the universe/life such that these reproductive barriers were engineered from the outset so that speciation would occur.

Front loading is the only thing that makes sense of it all.

Although I guess it would depend on how much you think your designer dictated in their design vs. how much was left to chance during the unfolding of the design.

Perhaps there is a progressively limited range of options and the environment provides triggers. Start out by assuming that life on earth began with a phylogenetic stem cell with the potential within it to become all the species that ever lived. The external environment provides triggers or checkpoints for the next stage diversification/specialization. Assume also that these are one-way transitions - no going backward. Thus phylogeny mirrors ontogeny only on different time scales. Random mutation and natural selection are still at work but natural selection serves almost exclusively in the role of maintaining the status quo. Random mutations that aren't immediately seriously deleterious accumulate in a species genome until it goes extinct. Again, this process mirrors the process of individual organisms aging and dying only on different timescales.

The way I see it, if ID ultimately seizes the reins of biological academia, there will be a long list of things on the docket for adequate explanation. For example: speciation and reproductive barrier formation, genetic disease, cancer, vestigial organs/limbs/bones, balancing selection (e.g. sickle-cell allele/malaria), and so on.

All elegantly explained by the phylogenetic stem cell described above. There are three questions raised by this.

1) Where did the initial phylogenetic stem cell come from?

2) How were unexpressed phylogenetic stem cell potentials preserved until expression?

3) Wouldn't the genome have to be impractically large?

On the first question let's say what's good for the modern synthesis is good for the front loading theory and say the origin of the first living cell is outside the scope of the theory for now. We'll just leave the option open that it might have been designed and in any case probably didn't originate on the earth in the scant time between the earth's formation and the first appearance of life.

On the second question, if you ask any programmer or hardware designer who is concerned with fidelity in data copying, he will tell you that you can trade off the speed and/or resources required to copy data for whatever degree of fidelity it takes to meet your requirements. There are many error detection algorithms which may be employed, and none of them even approach the complexity of processes we already observe in cellular machinery and programming. Much larger amounts of resources are required for error correction, but this can be skipped by simply aborting the copy if an error is detected.
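To make the detect-and-abort idea concrete in the plainest engineering terms, here is a minimal Python sketch. It is only an analogy, not anything from actual cellular machinery: a CRC32 checksum provides cheap error detection, and a corrupted copy is simply discarded rather than corrected.

```python
import random
import zlib

def copy_with_abort(data, error_rate=0.001):
    """Copy bytes through a noisy channel; detect corruption with a CRC32
    checksum and abort (return None) rather than attempt correction."""
    original_crc = zlib.crc32(data)
    copied = bytearray(data)
    for i in range(len(copied)):
        if random.random() < error_rate:
            copied[i] ^= 1 << random.randrange(8)  # flip one random bit
    if zlib.crc32(copied) != original_crc:
        return None  # detection is cheap; correction would need extra redundancy
    return bytes(copied)

template = bytes(random.choice(b"ACGT") for _ in range(10_000))
result = copy_with_abort(template)
print("copy aborted" if result is None else "copy accepted")
```

The design choice illustrated is exactly the trade-off described above: detection costs only a small checksum, while correction would require carrying extra redundant data alongside every copy.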

On the third question, it might not be as gigantic as you think when organized into reusable component libraries. Proteins would be library functions in this case. Discounting trivial differences that don't drastically compromise function, there really aren't all that many different proteins used by living things. The key is which are used, when, and how. Different body plans as well can be modularized and stored in less storage space than one might think. Now, keeping in mind we've only measured the genome size of a tiny fraction of extant organisms, we've already found a single free living cell (Amoeba dubia) that contains 200 times as much DNA as a human cell. Is that big enough? Maybe. But even if it isn't, we don't know what the largest practical size is - that's just the upper bound in the small number of organisms we've tested and it's freaking huge - at least big enough to specify 200 phyla as complicated as mammals, and that's with no special attempt made to reduce the storage requirements by consolidating common elements into code/data libraries.

These and many other concepts flow rather naturally from darwinian theory, but would appear–at least to me–awkward to handle from a design perspective.

As I have outlined, it flows even more naturally from a designed front loaded phylogenetic stem cell.

So while darwinism is repeatedly demonstrated to have numerous gaps–some more or less gaping than others, some more or less admitted than others–I think it’s important to remember that for a scientific/philosophic movement to ultimately be successful, it must not only show how the opposing theory is *wrong*, it must ultimately reveal *how* it came to be mistaken, as well as *account* for all the data previously “explained” by the opposing theory.

How it came to be wrong is simple enough. It's a Victorian theory that sounded good in Victorian times, when the universe was generally acknowledged to be infinitely old and cells were thought to be simple blobs of protoplasm that could arise spontaneously. Since then there are still elephant-sized problems with the theory and an array of ad hoc explanations for how it could possibly all work; instead of filling one book on the preservation of favored races and another on Mendelian inheritance, it's now an amount of information and scientific specialties so large it boggles the mind.

DaveScot
May 30, 2006 at 05:46 AM PDT
dougmoran: "One important aspect of video compression of this type that may relate to the discussion at hand is that the resulting data set, post compression, is highly random and sensitive to transmission errors. The code has been designed to be as insensitive as possible, but conceivably the loss of just a few bits out of a set of millions could result in the loss of seconds of video. Anyone who’s tried to watch a scratched up DVD is well familiar with the results" Compression algorithms like H.263 are highly optimized for video over telephone line applications where the error rate can be higher than normal. A H.263 bit-stream is highly error-proof against casaul bit errors during commmunication. A few loss of bits causes few noticable glitchs in the video. Bitstream is designed in a very smart mannner; when a few bits are lost in a block the next successive block is detected and displayed. One frame contains many 16x16 sub-blocks, so what you may notice is only a glitch in a distorted 16x16 block somewhere in the frame. You can randomly delete, add and duplicate portions of code from a H.263 bitstream and the result is still decodable and viewable. It highly resembles the way the data is arranged in the DNA. The code in DNA should also be a bit-stream rather than a data with a word-based arrangement. We can also speculate that it is compressed and encoded resemblening the way video is encoded in a bit-stream.Farshad
May 30, 2006 at 05:34 AM PDT
To understand (somewhat) Coyne's statement that "nothing in biology makes sense in the light of ID" it might help to know that he studies the speciation process, in particular the accumulation of reproductive barriers between diverging populations. I read his and Orr's text on the subject recently, and I would recommend it to anyone who wants to know more about the observational and experimental data evolutionary biologists struggle to account for.

Remember that even if a new nonDarwinian paradigm ultimately emerges, it must not only show why Darwinism is *incorrect*, but, in order to be successful, it must also *account for*, via alternative means, all the empirical data previously dealt with via Darwinism. Speciation/reproductive barrier formation definitely makes the "to do" list. It is exceedingly difficult--though perhaps not impossible--to translate these phenomena to ID-language. One would (possibly) have to accept front-loading of the universe/life such that these reproductive barriers were engineered from the outset so that speciation would occur. Although I guess it would depend on how much you think your designer dictated in their design vs. how much was left to chance during the unfolding of the design.

The way I see it, if ID ultimately seizes the reins of biological academia, there will be a long list of things on the docket for adequate explanation. For example: speciation and reproductive barrier formation, genetic disease, cancer, vestigial organs/limbs/bones, balancing selection (e.g. sickle-cell allele/malaria), and so on. These and many other concepts flow rather naturally from darwinian theory, but would appear--at least to me--awkward to handle from a design perspective. So while darwinism is repeatedly demonstrated to have numerous gaps--some more or less gaping than others, some more or less admitted than others--I think it's important to remember that for a scientific/philosophic movement to ultimately be successful, it must not only show how the opposing theory is *wrong*, it must ultimately reveal *how* it came to be mistaken, as well as *account* for all the data previously "explained" by the opposing theory. If I remember my intellectual history correctly, it was St. Aquinas who championed this general method of argumentation. As wise and powerful today as it was then. Once the one blind man understands he has been examining an elephant, he should be able to articulate clearly to his blind friend just why it was that he, the friend, was under the false impression that they were adjacent to a large, snake-like creature...

Dethroning Darwinism is just the very first step. What comes next? What's on the agenda? If the answer entails metaphysical speculation about the possible attributes of the designer, as opposed to positive and fruitful research paradigms, then I predict the reign of ID will be very short-lived. The march of human progress has been marked by an ever-increasing intolerance of metaphysics.great_ape
May 29, 2006 at 10:01 PM PDT

“The geneticist Theodosius Dobzhansky famously declared, “Nothing in biology makes sense except in the light of evolution.” One might add that nothing in biology makes sense in the light of intelligent design.” –Jerry A. Coyne, evolutionary biologist [emphasis mine]

Wow! Did I read that right?! Nothing?! Nature is simply littered with the appearance of purpose, and NOTHING makes sense in the light of ID?!!! Unbelievable! And to think this comes from a professor of biology from the University of Chicago! We should really be encouraging these people to continue to speak out against ID; they're digging the grave for their own "theory"!!! I love it!!!

(Just got in from out of town. Sorry if this has already been skewered in the above comments, but I'm in a hurry and don't have time to read through all of them right now. I just HAD to comment on this one right now, though!)

crandaddy
May 29, 2006 at 05:58 PM PDT

--"I added that one commentator made the following observation: Imagine that a mathematician came up with a new theorem but had not proven it. A colleague challenges the theorem, saying that it doesn’t make sense to him. The first mathematician replies, “Just because you are personally incredulous about my theorem doesn’t make it false!” Would we expect this argumentation to convince the mathematics community of the validity of the theorem, and to base a new branch of mathematics upon it?"--Comment by GilDodgen — May 29, 2006 @ 8:42 am

This is a poor analogy. A theorem is not a theorem without the proof. What you are referring to is properly called a conjecture.

Since there's no proof of evolution doesn't it then follow that it is conjecture? -ds dennis grey
May 29, 2006 at 05:57 PM PDT
This is the last exchange I’m allowing on this.
Hopefully that won't exclude the following.
This excess radiation is typically reradiated as waste heat. Which I believe was Mung’s original query.
My query had to do with the question of how efficiency was being defined for this particular discussion. I find unconvincing the argument that, because some process makes use of only a small percentage of an available resource, that process is inefficient. To me, efficiency has to do with what the process does with the small percentage that it actually does something with. Let me provide an analogy. My refrigerator is full of food. It is all available to me at this moment. Just because I don't empty my refrigerator at every meal, it doesn't mean I am being inefficient with the food that I eat.Mung
May 29, 2006 at 05:27 PM PDT

Given the current model of the universe it's probably true that small variations would have made life (as we know it) impossible, but there's no way of knowing right now what the possible variations are, so it seems meaningless to talk about "fine tuning" and possible "intelligent choice" of constants if we don't know what the "tuning ranges" are of the so-called constants. We don't even really know if the constants are really constant over time, that's just an assumption. The current model of the universe appears to be deeply flawed given that cosmologists have to postulate on an almost daily basis different amounts of unobserved "dark matter" to make the observations fit the model.

Is it meaningless to talk about a cake recipe if we don't know the range of choices in the ingredients? -ds Raevmo
May 29, 2006 at 04:45 PM PDT

Probability distributions over cosmological constants and "deep" arguments derived from them are all cr*p. The number and values of cosmological constants are a function of the very specific mathematical model of the universe we favor today, a model that will no doubt be rejected in the future. Maybe there will be a model without any dimensionless constants that can generate all the constants of the current model, and it will be meaningless to talk about any probability distributions over constants and how "likely" the current universe is.

The whole multiverse idea is pseudoscientific nonsense because there's no way to test it. There's only one universe that can be observed, measured, and analyzed. Certain physical constants could have taken on a number of different values when the universe was picoseconds old. Minute variations in some of those would have made it impossible for life as we know it to exist. I take the fine tuning argument as an uninteresting given - the universe was evidently designed and only pseudoscientific infinite multiverse theories can begin to dispute it. -ds

Raevmo
May 29, 2006 at 03:44 PM PDT

Zachriel: "Plants absorb light in the 400 to 700 nm range, or about 45% of the available light."

ds: "In fact most of the visible spectrum is absorbed and utilized.

Chlorophyll can only utilize certain colors of the visible spectrum (though other pigments can absorb other visible colors and pass the energy back to chlorophyll). But as I mentioned at the beginning of the exchange, visible light only constitutes about ~45% of available solar incident radiation.

ds: "It’s not that the plant doesn’t absorb at infrared and ultraviolet but that cholorphyll doesn’t untilize it."

That's correct. This excess radiation is typically reradiated as waste heat. Which I believe was Mung's original query.

You're getting more and more wrong. This is the last exchange I'm allowing on this. Plants do not absorb 500nm radiation (green, visible light) so 400 to 700nm is wrong. It tried to tell you this in the first response. Evidently you just don't get it. The 45% figure is correct but this is 45% of ALL solar radiation reaching the ground not just visible light. Of visible light plants absorb it all except green. I also tried to tell you there's more than just visible light to consider (ultraviolet and infrared are the major ones that reach the ground). I could spoonfeed this stuff to you if you'd stop making faces and spitting it out. -ds Zachriel
May 29, 2006 at 01:48 PM PDT
Oops. I meant "epigenetic", not "epigenic".dougmoran
May 29, 2006 at 12:57 PM PDT

Current video compression standards in use, including MPEGx and VC1, are not only lossy, but highly lossy depending on parametric considerations and source video content such as motion, scene transition rates, and noise content (any noise in the source causes compression efficiency to drop). But Mark Frank may be right in the sense that most people think of "information" as a subjective mass of "something". In the subjective sense, these algorithms were designed to remove information from the source video content correlating to features that most people would not notice anyway. A simple example is that the human visual system is much more sensitive to fine detail in black and white than it is in color, so literally 75% of the color information in the video sequence is simply removed (thrown out by the MPEG2 algorithms) before any compression happens. Most casual observers would not notice this. Then the compression algorithms go to work and ultimately can reduce the amount of information contained in the transmitted video by up to a factor of 7 (or more if lower quality video output is acceptable). To gain this level of compression, the algorithms account for what features of video the human being is most sensitive to, and then they essentially remove or reduce the least important ones first (color information, for example). In the end, the final video output you see on your nifty plasma TV contains an order of magnitude (or more) more noise than the original, but it is located spectrally in frequency regions we are least likely to notice, or spatially in areas of the video that might be hidden or unnoticeable. I personally find the outputs of most video compression and decompression algorithms quite objectionable. Compression artifacts such as cosine transform block boundaries and high frequency ringing at sudden black/white transitions are two of the most notable. But worse than those are the motion artifacts that come about as a result of the algorithm actually dropping entire frames of video at the transmitting end and then having to try to reproduce them at the receiving end by interpolation.
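The "75% of the color information is simply removed" step described above is chroma subsampling (4:2:0): luma stays at full resolution while each chroma plane keeps one sample per 2x2 block. A small Python/NumPy sketch of just that step, assuming the frame is already in a luma/chroma representation; real encoders do the RGB-to-YCbCr conversion and filtering properly.

```python
import numpy as np

def subsample_420(ycbcr):
    """Keep luma (channel 0) at full resolution; average each 2x2 block of
    the two chroma channels, keeping 25% of the original color samples."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    def down(c):
        return c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))
    return y, down(cb), down(cr)

frame = np.random.rand(480, 640, 3)  # stand-in for a 480x640 YCbCr frame
y, cb, cr = subsample_420(frame)
full_chroma = frame[..., 1].size + frame[..., 2].size
kept_chroma = cb.size + cr.size
print(f"chroma samples kept: {kept_chroma / full_chroma:.0%}")  # 25%, i.e. 75% discarded
```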

One important aspect of video compression of this type that may relate to the discussion at hand is that the resulting data set, post compression, is highly random and sensitive to transmission errors. The code has been designed to be as insensitive as possible, but conceivably the loss of just a few bits out of a set of millions could result in the loss of seconds of video. Anyone who's tried to watch a scratched up DVD is well familiar with the results.

But to try to flip back now to the discussion at hand. Saying that DNA contains information is like saying there is a book in the Library of Congress. I'm not an expert on DNA by any stretch of the imagination - and I will certainly read DS's references above. But I do understand that DNA contains hard-coded information. That the information it contains is highly compressed seems obvious given the sensitivity of its decoders to errors in the code. That the code must be copied and have resulting errors corrected before being transcribed supports this idea. Those of you who are more familiar with the code than I am can do a much better job summing up the coding in DNA than I, but I will say it is extraordinary - and it makes MPEG video compression look like child's play in comparison.

One commenter in this blog noted that we should be looking for hidden copyrights or messages from the designer. I feel this is a worthy area of discussion. What other coding features would we expect to see if a designer was involved? If the designer was just tinkering and had no end-game in mind, then it might be hard to imagine. But if there was an ultimate plan, then one would expect to see time markers (either elapsed time or codes that keep track of how many generations have come before), genes that could only be activated by certain environmental triggers, or self-modifying sequences that are triggered by such things as the environment or age (age in the sense of how old this particular DNA sequence is, not the age of the organism it's in). Any other ideas?

Did anyone notice the article "Unfinished Symphony" in Nature, 11 May, 2006 about the epigenic code? Is this yet another code structure embedded in/on DNA that would lead one to a design inference?

Actually, depending on how the video was recorded, the color information is vastly reduced right from the word go. I was a video equipment tech 30 years ago. In NTSC video (standard U.S. broadcast TV - this is from memory so it might be off a little) it began with B/W only, no color. The b/w signal is carried on a 14 mhz amplitude modulated subcarrier. Sound is carried on a 1.5mhz frequency modulated subcarrier. When color was introduced the broadcast had to continue working on b/w tv's, so to carry the color information they added a phase modulated 3.57 mhz subcarrier which b/w sets ignored since they had no phase modulation detectors. 3.57mhz is only about 25% of 14 mhz, so there's where you get your reduced resolution for color in broadcast video, and they could get away with only 25% of the bandwidth required for the luminance (color is chrominance). In school they taught us that color is splashed on with a wide brush while b/w is drawn with a fine brush, because the eye is far more sensitive to brightness than to color discrimination. -ds dougmoran
May 29, 2006 at 12:51 PM PDT

Re Dave's comment on #43. It is true that the most common algorithms are lossy. Nevertheless most real life bit strings can be compressed to some extent using a non-lossy algorithm, e.g. PKZIP. I am not quite sure if there are any deeper implications to this. It does suggest that true randomness is rather rare in reality - but so what?

Compressibility and information content are synonymous. An incompressible stream is carrying as much information as physically possible. Compressibility of DNA was one of my first questions when I started investigating the ID claims. The implications are important, as high compressibility means the data channel is wasteful of important resources. More DNA generally means a bigger cell, longer cell division time, and more energy required to divide. Random mutation IMO would be likely to result in high compressibility because it wouldn't tend to find algorithmic solutions to data redundancies, while a competent designer would recognize the redundancies and eliminate them. There may be mixtures of both in any given genome. You should probably read the following paper for yourself before I say more. -ds A Compression Algorithm for DNA Sequences and Its Applications in Genome Comparison Mark Frank
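As a crude way to see the redundancy point above, a general-purpose compressor can be run over DNA-like strings; the specialized algorithms in the linked paper do much better, but even zlib shows the gap between a repeat-heavy sequence and a statistically random one. A minimal Python sketch with made-up sequences, not real genome data:

```python
import random
import zlib

def compression_ratio(seq):
    """Compressed size over raw size; lower means more detectable redundancy."""
    raw = seq.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(1)
random_seq = "".join(random.choice("ACGT") for _ in range(100_000))
repeat_seq = ("ACGTACGGTTAC" * (100_000 // 12 + 1))[:100_000]  # crude stand-in for repetitive DNA

# Even a "random" ACGT string compresses to roughly 2 bits per base simply
# because the alphabet has only four letters; repeats shrink far further.
print(f"random-like sequence: {compression_ratio(random_seq):.2f}")
print(f"repeat-heavy sequence: {compression_ratio(repeat_seq):.2f}")
```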
May 29, 2006 at 11:19 AM PDT

Zachriel: "Most of the light is reflected. Plants are green, after all."

ds: "The color of an object is the light that it doesn’t absorb."

The colors it doesn't absorb are reflected. What part of "Plants are green." did you not understand? Most of the available sunlight is not absorbed.

ds: "Black objects absorb it all."

There is no perfect absorption, but reasonably correct. Green plants cannot access 100% of the available solar energy as they reflect about 55% of the available light energy.

Chlorophyll absorption spectrum
http://www.mbari.org/staff/ryjo/cosmos/Cabs.html
(Draw a line across the top of the graph at 100%. The area between the absorption line and the 100% line represents lost photons. The graph could be skewed to account for the increased energy of the blue light, but it still gives you the general idea.)

[sigh] This isn't introductory physics. Telling me "plants are green" does not support the statement that most of the light is not absorbed. Only a small portion (green light) is actually reflected. In fact most of the visible spectrum is absorbed and utilized. The next part of your lesson on the physics of light is that there is considerable energy in the non-visible wavelengths from infrared to ultraviolet. We'll discount lower and higher frequencies than those because they aren't commonly called "light", but there is also energy in microwave and lower frequencies as well as soft x-ray and higher frequencies of electromagnetic radiation. It's not that the plant doesn't absorb at infrared and ultraviolet but that chlorophyll doesn't utilize it. If there are still parts of this you don't understand, go somewhere else for the answers and come back when you know more. -ds Zachriel
May 29, 2006 at 10:36 AM PDT
My response to the "infinite number of tries" argument is this: Did you know that if you take the entire text of the King James Bible, and ASCIIise it, you get a huge number? Did you know that that exact number is found, unabridged, in the sequence that is pi?bFast
May 29, 2006 at 10:24 AM PDT
Dave S writes:
Dawkins' argument works if life arose on many planets or arose on just one planet. The argument explains everything, thus it explains nothing. On the computer card dealer: what if every hand was a royal flush? Would you think (a) the odds of this are very small but not impossible, or (b) the computer isn't generating hands at random?
No one does a better job of showing how vapid and silly this line of reasoning is than Alvin Plantinga of Notre Dame. In his review of Dennett's tome Darwin's Dangerous Idea, Plantinga writes:
And given infinitely many universes, Dennett thinks, all the possible distributions of values over the cosmological constants would have been tried out; [ 7 ] as it happens, we find ourselves in one of those universes where the constants are such as to allow for the development of intelligent life (where else?). Well, perhaps all this is logically possible (and then again perhaps not). As a response to a probabilistic argument, however, it's pretty anemic. How would this kind of reply play in Tombstone, or Dodge City? "Waal, shore, Tex, I know it's a leetle mite suspicious that every time I deal I git four aces and a wild card, but have you considered the following? Possibly there is an infinite succession of universes, so that for any possible distribution of possible poker hands, there is a universe in which that possibility is realized; we just happen to find ourselves in one where someone like me always deals himself only aces and wild cards without ever cheating. So put up that shootin' arn and set down 'n shet yore yap, ya dumb galoot." Dennett's reply shows at most ('at most', because that story about infinitely many universes is doubtfully coherent) what was never in question: that the premises of this argument from apparent design do not entail its conclusion. But of course that was conceded from the beginning: it is presented as a probabilistic argument, not one that is deductively valid. Furthermore, since an argument can be good even if it is not deductively valid, you can't refute it just by pointing out that it isn't deductively valid. You might as well reject the argument for evolution by pointing out that the evidence for evolution doesn't entail that it ever took place, but only makes that fact likely. You might as well reject the evidence for the earth's being round by pointing out that there are possible worlds in which we have all the evidence we do have for the earth's being round, but in fact the earth is flat. Whatever the worth of this argument from design, Dennett really fails to address it.
DonaldM
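For what it's worth, the odds behind the card-dealing analogy quoted above are easy to put in numbers. A quick Python calculation for the royal flush case (standard 5-card poker, no other assumptions):

```python
from math import comb

p_royal = 4 / comb(52, 5)  # four possible royal flushes among all 5-card hands
print(f"one royal flush: {p_royal:.2e}")        # roughly 1.5 in a million
print(f"ten in a row:    {p_royal ** 10:.2e}")  # astronomically small
```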
May 29, 2006 at 09:07 AM PDT

ds: "I read that sugar cane is 8% efficient at fixing carbon."

Eight percent of the absorbed light. Most of the light is reflected. Plants are green, after all.

From the point-of-view of the plant, the 3% to 6% efficiency of what is available to the plant for its own metabolism probably best addresses Mung's query about the nature of waste.

The color of an object is the light that it doesn't absorb. Green plants absorb all visible light frequencies EXCEPT green. Black objects absorb it all. White objects reflect it all. This is very basic physics that you should have learned in the sixth grade. -ds Zachriel
May 29, 2006 at 07:28 AM PDT
As Stu Harris commented on a previous UD thread: "What’s wrong with an argument from personal incredulity anyway? If I find someone’s proposed explanation for something to be incredulous, what is necessarily wrong with that? It can’t always be due to my lack of imagination, it’s just as possible that it's due to a bad explanation. It’s up to the one making the proposition to go beyond my rational incredulity, my skepticism, and convince me of their argument, and change my inference to the best explanation. In the case of the proponents of Darwinism, it’s up to them to show the truth of their explanation for evolution and not just make appeals to imagination." I added that one commentator made the following observation: Imagine that a mathematician came up with a new theorem but had not proven it. A colleague challenges the theorem, saying that it doesn’t make sense to him. The first mathematician replies, “Just because you are personally incredulous about my theorem doesn’t make it false!” Would we expect this argumentation to convince the mathematics community of the validity of the theorem, and to base a new branch of mathematics upon it? Faith in Darwinian mechanisms to explain all of life really does demonstrate gullibility when one considers all of the obvious, gaping, evidential and logical holes in the theory.GilDodgen
May 29, 2006 at 06:42 AM PDT

Johnnyb

You are right. It is many years since I learned about compression algorithms and I made a stupid mistake. It is interesting that nowadays we represent more and more of the world in the form of bits (photographs, sounds, video) and it is almost always compressible without loss of information. The truly random bit string - where the chances of the next bit being 1 or 0 are fixed and not a function of the preceding bits - is rare. I am not sure if it relates to the paper though.

Cheers

I spent some time writing compression algorithms. The most common compression algorithms for sound and graphics are "lossy". Information is lost in jpeg, mpeg, and mp3 for example. What may or may not be lost is a perceivable amount of quality. The telephone companies, which have a BIG monetary interest in every fraction of a percent more voice they can shove down existing pipelines, have a standard called "toll quality" they use in determining how much (and what type) of information loss is acceptable. Real-time voice compression on personal computers (many moons ago) is what I focused on for two-way conversations over a packet-switched X.25 network with 9600 baud modems on each end. -ds Mark Frank
May 29, 2006 at 06:08 AM PDT
Mark -- Between your post and your correction I wasn't sure what exactly you were trying to say, so I thought I'd correct this in case you hadn't already: "Virtually all bit strings can be compressed using an algorithm such as Huffman compression" This is false. There are many bit strings which cannot be compressed. See FAQ entry #9 here: http://www.faqs.org/faqs/compression-faq/part1/johnnyb
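johnnyb's point is at bottom a counting argument: there are more n-bit strings than there are shorter strings to map them to, so no lossless compressor can shrink them all. A small Python sketch of the count, plus an empirical check that already-random bytes do not compress; the numbers here are only for illustration.

```python
import os
import zlib

# Pigeonhole count: 2**n strings of length n, but only 2**n - 1 strings shorter than n.
n = 20
print(f"{2**n} strings of length {n} vs {2**n - 1} shorter strings available as outputs")

random_blob = os.urandom(100_000)                                # incompressible on average
redundant = b"the quick brown fox jumps over the lazy dog " * 2000
print("random bytes:   ", len(zlib.compress(random_blob, 9)) / len(random_blob))  # about 1.0
print("redundant text: ", len(zlib.compress(redundant, 9)) / len(redundant))      # far below 1.0
```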
May 29, 2006 at 05:06 AM PDT
Re #38 - And how! More importantly though, a design inference does not require perfection. Archaeologists, particularly, frequently recognize artifacts as artifacts where the design could clearly be improved upon.BenK
May 29, 2006 at 05:01 AM PDT
Whoops - sorry - another error. Re my post #39 above. The string R was randomly generated, therefore it is not compressible by any algorithm. The other two comments stand.Mark Frank
May 29, 2006 at 04:28 AM PDT