Uncommon Descent Serving The Intelligent Design Community

Michael Egnor Responds to Michael Lemonick at Time Online


In a piece at Time Online, More Spin from the Anti-Evolutionists, senior writer Michael Lemonick attacks ID, the Discovery Institute, the signatories of the Dissent From Darwin list, and Michael Egnor in particular.

Dr. Michael Egnor (a professor of neurosurgery and pediatrics at State University of New York, Stony Brook, and an award-winning brain surgeon named one of New York’s best doctors by New York Magazine) is quoted: “Darwinism is a trivial idea that has been elevated to the status of the scientific theory that governs modern biology.” You can imagine the ire this comment would provoke from a Time science journalist.

The comments section is very illuminating as Dr. Egnor replies to and challenges Lemonick.

Egnor comments:

Can random heritable variation and natural selection generate a code, a language, with letters (nucleotide bases), words (codons), punctuation (stop codons), and syntax? There is even new evidence that DNA can encode parallel information, readable in different reading frames.

I ask this question as a scientific question, not a theological or philosophical question. The only codes or languages we observe in the natural world, aside from biology, are codes generated by minds. In 150 years, Darwinists have failed to provide even rudimentary evidence that significant new information, such as a code or language, can emerge without intelligent agency.

I am asking a simple question: show me the evidence (journal, date, page) that new information, measured in bits or any appropriate units, can emerge from random variation and natural selection, without intelligent agency.

Egnor repeats this request for evidence several times in his comments. Incredibly, Lemonick not only never provides an answer, he retorts: “[One possibility is that] your question isn’t a legitimate one in the first place, and thus doesn’t even interest actual scientists.”

Lemonick goes on to comment: “Invoking a mysterious ‘intelligent designer’ is tantamount to saying ‘it’s magic.'”

Egnor replies:

Your assertion that ID is “magic,” however, is ironic. You are asserting that life, in its astonishing complexity, arose spontaneously from the mud, by chance. Even the UFO nuts would balk at that.

It gets worse. Your assertion that the question, “How much biological information can natural selection actually generate?” might not be of interest to Darwinists staggers me. The question is the heart of Darwinism’s central claim: the claim that, to paraphrase Richard Dawkins, “biology is the study of complex things that appear to be designed, but aren’t.” It’s the hinge on which the argument about Darwinism turns. And you tell me that the reason that Darwinists have no answer is that they don’t care about the question (!).

More comments from Egnor:

There are two reasons that people you trust might not find arguments like mine very persuasive:

They’re right about the science, and they understand that I’m wrong.
or
They’re wrong about the science, and they’re evading questions that would reveal that they’re wrong.

My “argument” is just a question: How much new information can Darwinian mechanisms generate? It’s a quantitative question, and it needs more than an <i>ad hominem</i> answer. If I ask a physicist, “How much energy can fission of uranium generate?” he can tell me the answer, without much difficulty, in ergs per mass of uranium per unit time. He can provide references in scientific journals (journal, issue, page) detailing the experiments that generated the number. Valid scientific theories are transparent, in this sense.

So if “people you trust” are right about the science, they should have no difficulty answering my question, with checkable references and reproducible experiments, which would get to the heart of Darwinists’ claims: that the appearance of design in living things is illusory.

[…]

One of the things that has flipped me to the ID side, besides the science, is the incivility of the Darwinists. Their collective behavior is a scandal to science. Look at what happened to Richard Sternberg at the Smithsonian, or at the sneering denunciations of ID folks who ask fairly obvious questions that Darwinists can’t answer.

The most distressing thing about Darwinists’ behavior has been their almost unanimous support for censorship of criticism of Darwinism in public schools. It’s sobering to reflect on this: this very discussion we’re having now, were it to be presented to school children in a Dover, Pennsylvania public school, would violate a federal court order and thus be a federal crime.

There’s lots more interesting stuff in the comments section referenced above. I encourage you to check it out. I was pleasantly surprised at the number of commenters who stood up for ID and challenged Darwinian theory along with Dr. Egnor.

[HT: Evolution News & Views]

Comments
F/N: Six years later, the more things change, the more they remain the same! KF

kairosfocus
January 28, 2013, 10:44 AM PDT
"Let’s see whether we continue here or elsewhere" Agreed.

great_ape
March 4, 2007, 12:11 PM PDT
Hi Great_Ape: I see your time-out call. Maybe in the meantime, the blog masters will deem it worth the while to post that new thread . . . Now, too, you comment on some points I will take up a bit:

1] You can believe in the fine-tuning concept of the cosmos yet remain so skeptical about darwinian evolution in principle.

First, while I am simply discussing here within the ambit of the generally accepted timescales and cosmology of the past [I will pick up later], I am pointing to the gap in the empirical evidence as respecting the macro-level claims of NDT. I am aware that at micro-evolutionary levels, NDT-style evolution can and does happen. But Galapagos island finch beaks that get a little longer in a drought, then change back, or “species” that are now hybridising and interbreeding, or the like, do not cross the key threshold McDonald noted and Meyer picked up. Further to this, from the Cambrian life revolution forward, there is a characteristic pattern of sudden appearance and stasis, then abrupt disappearance (on a temporal interpretation . . .) in the now “almost unmanageably rich” fossil record. All of this is linked to the information generation problem I and others have highlighted, including the mosquito case above. Until I see clear evidence of ability to cross that core body-plan innovation threshold, I will remain skeptical on the common descent through RM + NS etc thesis. By sharpest contrast, the picture of the origin and development of the physical cosmos is, in my observation, not presented dogmatically, but is both provisional and well-anchored on supportive empirical data. (The contrast in tone between, say, my favourite general survey Astronomy text and the tone I have seen in many a discussion on these matters I have found telling!) The Hertzsprung-Russell diagram, the stellar distance scale [the subject of my very first ever scientific presentation and public speech . . .], the observed red shift and Hubble Constant are all anchored pretty directly to observation.

2] Is it not possible that the cosmos was arranged in such a way that evolution would happen as it has? That is, it would happen in a way that was *indistinguishable* from neodarwinism.

I observe the underlying inference/assertion, that NDT is the actual observed mechanism of origin or is indistinguishable from it. Methinks there is a major evidentiary gap to be bridged before one can assert such, as noted. In short, the current state of evidence is, at macro level, quite plainly “distinguishable” from NDT's predictions and explanations.

3] Sometimes I wonder if you ID guys don’t simply suffer from a lack of imagination

This is of course a classic gambit. But in fact, the design inference is precisely based on an explanatory filter that reckons with chance, necessity and agency, then proceeds to ask how we may credibly and empirically distinguish the three. [Reverse engineering the design is a further stage, and is indeed one being embarked upon. But in the key cases of OOL and macro-evolution, as well as cosmology, the prior issue – given the institutional prevalence of evolutionary materialism as both a guiding philosophy and a paradigm for science – is whether design is a reasonable and indeed better explanation of the observed data.] This has been outlined above, and is in the always linked, but we may briefly summarise:
a] It is generally accepted that chance, natural regularities and agency are all known as the three major root causal forces at work in the world.

b] For instance, if a heavy object falls, that is the NR, gravity. If it is a die, the face that is uppermost is effectively a product of chance. If it is tossed as a part of a game, that is agency.

c] When an empirically observed event is: contingent, complex beyond the credible probabilistic reach of chance in its context, and functionally specified, the resulting FSCI/CSI is a well-known, frequently observed marker of intelligent agency. [Think of coming across 500 pennies, all heads uppermost. Which is the better/likelier explanation, a lucky toss or someone who arranged the pennies?]

d] Indeed, in every case where we directly observe an event's cause in action, if it meets the above filter's criteria, it is produced by agents. That is, we see here an empirical basis for inferring, on a best-explanation basis, to agency, even when we do not see the cause in action directly. And – certain well-known cases notwithstanding – this inference is routinely made in science and general life.
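As an editorial aside, the arithmetic behind the 500-pennies illustration in c] is easy to check. The following sketch is my own (it is not from the comment itself); it computes the chance that a single fair toss of 500 pennies lands all heads:

```python
from math import log10

# Probability that one fair toss of 500 coins comes up all heads: (1/2)^500.
p_all_heads = 0.5 ** 500

# The exponent: log10(2^-500) = -500 * log10(2), roughly -150.5,
# i.e. about 1 chance in 10^150.
exponent = -500 * log10(2)

print(p_all_heads)  # ~3.05e-151
print(exponent)     # ~-150.5
```

This is just the raw chance of one specified outcome in one toss; the commenter's larger argument about probabilistic resources is, of course, a separate matter.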
So, the matter is not failure of imagination. Let's see whether we continue here or elsewhere. GEM of TKI

kairosfocus
March 4, 2007, 02:05 AM PDT
Lemme take a breather then I'll try once again. On a side note, though, I find it interesting, kairosfocus, that you can believe in the fine-tuning concept of the cosmos yet remain so skeptical about darwinian evolution in principle. Is it not possible that the cosmos was arranged in such a way that evolution would happen as it has? That is, it would happen in a way that was *indistinguishable* from neodarwinism. Yet an omniscient/omnipotent designer could still know what would happen and achieve its goals. In short, sometimes I wonder if you ID guys don't simply suffer from a lack of imagination concerning the ultimate capacities of the designer. Just something to ponder, as it is close to my personal view.

great_ape
March 3, 2007, 04:44 PM PDT
GP: I see you also linked the same paper on bacterial resistance and its implications. I would like to pick up one of your points to G_A rather briefly:
All your words, all your concepts, all your arguments, don’t apply in any way to the specific case of mutation of the mosquito, for the reasons already discussed, but apply perfectly to an intelligent designer, Nature . . .
You are here bringing up the Case III in my always linked, cosmological design. [The onward linked philosophical issues faced by evolutionary materialist worldviews are important but of course not scientific questions.] The key link is that a fine-tuned cosmos that with minor shifts becomes non-life-sustaining, often radically so, exhibits a complex balance that is of a type often set by designers. In this context, should we discover that there is in fact a law of nature that forces or makes highly probable the emergence of life on suitable planets etc, then that is in fact suspicious, given the cosmological fine-tuning issues in the linked. (BTW, this is also why the project to find a grand theory of everything is actually a design research project, though of course by and large an inadvertent one. Imagine the discovery of a super-law of physics that implies that the sort of fine-tuned cosmos we observe is forced. What does that suggest? And, the often-met-with alternative, an infinite array of sub-cosmi with randomly distributed laws or parameters, is of course a resort to ad hoc, ex post facto philosophy, not science, and opens the door to the credibility of a design-based worldview alternative.) Similarly, observe that bacterial resistance to, say, Cipro, the subject of Fig 1 in the linked paper, is by detuning the folding of the enzyme in question. Fine-tuning, in short, is linked to sparseness in the local configuration space. [Observe how Wright in the 1932 paper adverted to cosmological instances to help interpret the sparseness issue, then failed to follow through on the implications, i.e. exhaustion of probabilistic resources. I gather in Monod's Chance and Necessity, 1970 or so, the same gap emerges. Methinks this is a systematic gap in the evolutionary materialist research programme, from hydrogen to humans.]

We know that intelligent agents routinely create functionally specific, complex [sometimes, irreducibly complex] information-rich systems, that exhibit finely tuned behaviours, and may have defences for the fine-tuning through error detection and correction, sometimes leading to intelligently programmed graceful degradation. So, we see here a very familiar pattern . . . GEM of TKI

kairosfocus
March 3, 2007, 03:24 AM PDT
Continuing . . .

5] Further to this, the mutation in view is at root an information-loss incident, not an information-creation event. That is, a previously functioning molecule has been damaged, and this in this case frustrates the mechanism by which the insecticide poisoned the cell processes. That is, a sufficiently strong random disruption of the chain of digital information has caused an arbitrary shift in the decoding process, due to the way the code interprets, in this case, protein coding. [And, BTW, there is evidently a fair amount of redundancy in the code itself, including steps that make for graceful degradation by shifting to similar monomers.]

6] Let us note: the change here in view is microevolutionary; it does not rise to the body-plan change level, and the new population is still a population of disease-carrying mosquitoes, just resistant to a formerly successful insecticide -- and by the report, less functional as fliers. [This is quite similar to the case with antibiotic immunity through misshapen enzymes etc and reported HIV immunity through a genetic breakdown in the port used by the virus to gain entry, etc. Cf discussions here (esp fig 1 and table 1), here [observe on the failure to recognise the full implications of the high degree of sparseness indicated in this classic paper, starting with abiogenesis], and p. 11 ff. on CCR5 here (also cf the paper as a whole).]

7] The information-creation side is, in the end, a challenge to empirically justify a claimed process by which, through a cluster of random point mutations, we materially gain novel and coherent biofunctionality in the used code space of the cell, especially at the body-plan level. Notice the McDonald threshold for information-creating macro-evolution: novelty in body plans which creates a novel, coherent somatic system. This has never been actually observed and credibly passes the threshold of exhausting the available probabilistic resources.

8] Further to this, we see, beyond this case of modifying an already existing cell, the same challenge at the point of creation of life through proposed abiogenesis mechanisms. For, observed minimally complex life forms have ~ 300 – 500 K or more monomers in their DNA strands, three orders of magnitude of string elements up in information-carrying capacity from the level specified by Dembski as exhausting the PR of the observed cosmos.

9] The information-carrying capacity of course increases exponentially with chain length: 4^N; where 4^250 ~ 10^150 and 4^300k ~ 9.94*10^180,617. I conclude for good reason that the islands of functionality are effectively impossibly sparse in such a space, relative to random processes that just so happen to chance upon the coherent codes and implementing machinery for the relevant systems we see in the cell. [But, we routinely see intelligent agents creating functional digital configurations that are comparably sparse in, say, text-string space.]

10] So, while indeed the issue of random changes and metrics on information-carrying capacity are just a first step, they are an important step, as they tell us the size and credible sparseness of the configuration spaces we are working with. In that context, the required increment in biofunctionality, coupled to the coding and processing systems and the probabilistic issues as the number of points mounts towards 250+, by direct implication raises the issue of exhausting the probabilistic resources of the situation.

BOTTOM-LINE: A single-point mutation that actually seems to have damaging consequences and works by information loss seems to be nowhere near the relevant threshold for explaining macro-evolutionary change. [It aptly explains one mechanism for micro-evolution, but micro-evolution is uncontroversial across live options and is not the material issue at stake.] In short, interesting, but not yet the level of explanation required. Thanks for a good try. GEM of TKI

kairosfocus
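As an editorial aside, the 4^N arithmetic in point 9] can be checked with a few lines of Python. This sketch is my own illustration and is not part of the original comment:

```python
import math

# A 4-state position (G/C/A/T) carries log2(4) = 2 bits of raw carrying
# capacity, so N positions span 4^N distinct sequences.

def capacity_bits(n_bases):
    """Raw information-carrying capacity of n 4-state positions, in bits."""
    return n_bases * math.log2(4)

def state_space_log10(n_bases):
    """log10 of the number of distinct length-n sequences, i.e. log10(4^n)."""
    return n_bases * math.log10(4)

print(capacity_bits(250))          # 500.0 bits, the 500-bit-type bound
print(state_space_log10(250))      # ~150.5, so 4^250 ~ 10^150
print(state_space_log10(300_000))  # ~180,618, consistent with 9.94*10^180,617
```

Note this measures only carrying capacity (configuration-space size), which is what the comment itself distinguishes from functional information.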
March 3, 2007, 02:55 AM PDT
Hi Great_Ape: Seems the blog is having fun with a Japanese cartoon series and with the Templeton story. Anyway, back on issue. You will note I described information-carrying capacity, as that is the real point of the I = -log2p expression. In this case, a single-point mutation in a 4-state digital system, carries up to 2 bits. My 250-point issue has to do with the associated fact that that gives us a capacity of up to 500 bits, i.e. a configuration state space of ~10^150, linked to the Dembski type bound for a unique specified state. [This extends of course to islands and archipelagos of functionality, too.] Let's take up on your point in # 190 that:
perhaps a wayward gamma ray passed in the vicinity . . . the break was repaired, but repaired incorrectly, as happens from time to time. The repair system is not perfect by any stretch of the imagination. So you have your new amino acid. As it happens, it confers resistance to this insecticide. This all happens in the context of a gene duplication, which are also often the result of repair mishaps . . . . How much new information does it contain that has been generated by evolution? I don’t have the foggiest idea. But it’s more than you think it is, at least if you think that biological information is something above and beyond shannon information . . . A process is born, one involving several components . . . . The material substrate change is modest, but the “interpretation” of the substrate change by nature may or not be more dramatic.
Remarking:

1] Of course, I pick the physics-linked case, as that ties into the implications of radiation damage (part of one of the must-do physics major courses I did way back when; recall how the hairs on the back of my neck stood up at appropriate points, too . . . esp. the Hiroshima “expt”). The usual mechanism is ionisation of the most common molecule in the body, H2O, leading to free radicals, thence disruption of DNA and resulting damage. Cell death etc. if bad enough, cancer etc. as a serious possibility, if not that bad. Really minor damage is repairable.

2] You advert to and imply existing and functional repair mechanisms, which are of course error-detecting and -correcting systems based on redundancies in the stored information. [The simplest case of such a system, and the first example I studied, was the 3M voting code: repeat the code twice so there are three copies, [M, M, M]. Take a vote, bitwise, and majority wins. Obviously, it corrects a single error, but misses a double on the bit point, which would reverse the vote. (For DNA, such a code would require voting across four possible point states: G/C/A/T [and the match in the neighboring linked complementary strand . . .]; of course, the actual mechanisms are subtler than a brute-force voting mechanism.) But observe: error-correction mechanisms can be saturated, leading to possibilities for failure in environments that overly stress the system. Mosquitoes subjected to insecticide assault sounds like a stressed situation.]

3] Underlying issue and implication: we see a code-based information system, with a control language and, by implication, algorithmic error-correction processes. So, let us not strain out gnats while swallowing camels: in all cases of error-correcting information processing systems based on codes where we directly know the causal story, they are the product of intelligent agents. Indeed, this is true of all such cases of functionally specified complex information [FSCI or, in Dembski's term, CSI].

4] That immediately means that, relative to what we know – as opposed to speculate – we have a candidate best explanation for such systems that we need a very good reason and evidence to reject.

Pausing . . . . GEM of TKI

kairosfocus
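As an editorial aside, the 3M voting scheme the comment describes is easy to demonstrate concretely. The sketch below is my own illustration (real DNA repair machinery is, as the comment notes, far subtler than brute-force voting): it triplicates each bit and takes a bitwise majority vote, which corrects any single flipped copy but fails when two copies at the same position flip.

```python
def encode(bits):
    """Triplicate each bit: [1, 0] -> [(1, 1, 1), (0, 0, 0)]."""
    return [(b, b, b) for b in bits]

def decode(triples):
    """Bitwise majority vote: 2-of-3 copies win at each position."""
    return [1 if sum(t) >= 2 else 0 for t in triples]

message = [1, 0, 1, 1]
coded = encode(message)

coded[2] = (1, 0, 1)               # one corrupted copy at position 2
print(decode(coded) == message)    # True: a single error is corrected

coded[2] = (0, 0, 1)               # two corrupted copies at position 2
print(decode(coded) == message)    # False: the vote itself has flipped
```

The second case is the saturation failure mentioned above: once errors outnumber good copies at a position, the redundancy works against you.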
March 3, 2007, 02:54 AM PDT
great_ape: thank you for your answer, which is rich in very interesting insights, and allows a lot of pro-ID discussion! The only thing it cannot do, alas, is help Myers' position. Let's discuss why.

First of all, you are elaborating a lot of points which, I think, Myers and those who think like him are completely unaware of. I think that Myers' answer was just what it seemed: the infamous attempt to pass a single amino acid substitution off as the miraculous acquisition of CSI, hoping that nobody would check the biological nature of that mutation (or, perhaps, unaware himself of it). The choice is always the same: lie or ignorance? Fascinating options, both. But let's leave Myers alone, and discuss what is much more interesting, that is, your points. They are very thoughtful but, again, I cannot agree on the conclusions.

First of all, I would like to complete the "negative" part of my post and point to the only part of your argumentation which is, in my opinion, frankly wrong, in a technical sense. You say: "But we know when a new feature or process has been generated. And evolution conferring a resistance to a pesticide is a positive instance of some kind of increase of *something.*" I don't agree. In this specific case, which is anyway representative of most, if not all, of the "evidence" of darwinists, including antibiotic resistance, no positive instance has been created. We are only witnessing a cellular "disease" which, by mere chance, happens to make a particular "threat" ineffective for the owner of the disease. The only thing which increases in the mutated mosquito is entropy, or if you want Shannon information, or anyway disorder. Indeed, a protein which was in a way designed to have a specific function (ACE1) loses a small quantity of its designed structure and function because of a random event. That's all.

Let's see in more detail the example of antibiotic resistance, which is equivalent but which gives us an interesting further element. For all the details, please read this very good article: Is Bacterial Resistance to Antibiotics an Appropriate Example of Evolutionary Change? http://www.trueorigin.org/bacteria01.asp

The facts are as follows: in bacteria we have two different mechanisms by which antibiotic resistance is conferred:

a) Mutation of some structure of the bacterium which was the target of some antibiotic action. This case is the perfect equivalent of the mosquito case.

b) HGT of some enzyme which can inactivate the antibiotic (e.g. penicillinase). In this case, we have no evidence that penicillinase is generated by random evolution, no more than any other complex enzyme. Penicillinase is a protein with function and CSI, but we find it in the scenario "ready to be used". The bacterium acquiring penicillinase by HGT is really acquiring "something", CSI indeed, but that something is only transferred, not created. So, Dembski's concepts are absolutely confirmed.

But let's go back to the a) scenario. In that case, the acquisition of resistance is not, in any way, a new "function". I'll give a metaphoric example whose only purpose is to clarify the logic here: if in a population of animals some of them have a genetic mendelian disease, due to a single mutation which inactivates a function, let's say a coagulation disease, and some bizarre scientist decides that he is interested only in the diseased animals, and not in the normal ones, and he goes on killing the normal animals and keeping the diseased ones, is that a demonstration that the disease is "a positive instance of some kind of increase of *something*"? Only if you define that "something" as disease, loss of function, loss of meaningful information. But positive? I state it again: in my opinion, the only thing which increases is entropy. And CSI, however you want to define it, is certainly decreased.

Now, to the "positive" part of my post. I think you introduce a concept which is of the greatest importance, and which is the cause of great confusion in darwinist thought. It is a confusion which is inherent in the concept of "selection", and which often expresses itself in the use of so-called "pseudo-teleological language" by darwinists. It can be summed up this way: many darwinists are IDers without knowing it (not Dawkins, beware. He is one of the pure). One of the manifestations of that aspect is the frequent use of inverted commas. Your arguments are a very good example of an honest and creative attempt to express explicitly this bias, and so they are a very good field for discussion.

You say: "It's the context **in your mind** that matters. The interpretation. And here the weird thing is that the interpreter is nature itself." and: "What patterns are in the mind of nature that determines what it sees and how it interprets the splotches?" and: "That is the 'mind,' if you will, of nature. It is the landscape of possibilities it can 'imagine' as being viable." Indeed I "will"! You are perfectly describing the point of view of a designer. You are describing Nature as the designer. All your words, all your concepts, all your arguments, don't apply in any way to the specific case of mutation of the mosquito, for the reasons already discussed, but apply perfectly to an intelligent designer, Nature, who can: a) determine non-random mutations, and/or b) select random mutations which are potentially useful according to a predetermined plan or design. That's what Nature does, if we want to call the designer Nature. I don't know how he/she does it, but he/she does.

The fact is: according to a strict naturalist view, and that's a beautiful paradox, Nature does not even exist. Or at least, it exists not as an entity, but as the sum of what exists and of the laws at work. In other words, you could well substitute the word "reality" for nature, in a naturalistic sense. A naturalistic reality is not only blind, it is by definition not conscious; it is only the outcome of rigid deterministic laws. So, unless you define new laws which I am not aware of, nature has no purpose, it selects nothing, it implements nothing. You call it "landscape", implying again that in some subsets of reality we can find some appearance of meaning. Landscape is a human word. It is the way a conscious being sees a piece of reality and gives meaning to it. It is the way the conscious mind reconstructs apparently meaningless bits of information, sometimes inferring, if the poetic mood is working, a designer or an artist behind them. A landscape exists only in a mind. Nature exists only in a mind. Outside of a mind, only deterministic laws (not many laws, only three or four, depending on how you count them) exist, at least according to present science. So, if you infer special behaviours in nature, which are characteristic of minds, or of a mind possessing Nature, you should show how such behaviours can arise from the strict application of the known laws. That's exactly what darwinism can't do, and what it takes for granted. Because to analyze that assumption means to delve deeply into the nature of information, of thought, of order; it means considering and applying serious sciences like mathematics and statistics, and, why not, physics. Better to stay away from that, and leave it to Dembski and the like of him, calling them fraud, or bad science, or anything else one can devise.

A last note. You say: "There is that gene–the one that's been altered–but it will also have a support system of transcription factors, etc, that regulate its expression. They are all oriented now within the context of a new process, which is to provide insect resistance." That's exactly the point. That's what the concept of "irreducible complexity" is all about. The interesting fact is that your argument does not apply to the mosquito case, for the reasons already discussed: in that case, the new "diseased" gene is completely alone; there is no new "support system of transcription factors, etc, that regulate its expression", because the regulations remain the same as before the mutation; there is nothing "oriented", there is no "process", there is nothing trying to "provide insect resistance". The only "selecting" thing here is the insecticide, whose only fault is not being able to "predict" the gene's "disease" (but a new, well-designed model certainly could...). But your words apply perfectly to the case of a designer: it is perfectly true that any variation in a gene, any true variation which implements a new function, is perfectly meaningless unless it is managed by a coordinated system of regulation, at the transcription level and at many other levels. That's why, for me, any new biological function is irreducibly complex, because the designer has to implement the effector (usually the final protein), but also, and especially, the procedure, that is, the code necessary to correctly use the effector. And the procedure is usually more complex than the effector itself. Maybe much more complex. The procedure implies information processing, boolean checkpoints, measures, quantitative evaluations, decisions, coordination with all the other procedures, interfaces, and so on. Irreducible complexity, in its purest form. Enough for now.

gpuccio
March 3, 2007, 02:28 AM PDT
The mosquito example seems trivial on the surface. A single amino acid change. Not hard to come by; perhaps a wayward gamma ray passed in the vicinity of a germ-line DNA strand? The free-radical chemicals produced along the gamma ray's wake attacked the DNA, resulting in a double-stranded break. Or maybe, far more likely, the free radicals were the byproducts of Mr. Mosquito's own metabolism. In any case, the break was repaired, but repaired incorrectly, as happens from time to time. The repair system is not perfect by any stretch of the imagination. So you have your new amino acid. As it happens, it confers resistance to this insecticide. This all happens in the context of a gene duplication, which are also often the result of repair mishaps, allowing a copy of the original gene to exist intact. (But more on that later maybe.) The germline cell, probably a male's, went on to fertilize an egg. The egg hatches, and we have a pesticide resistant mosquito. How much new information does it contain that has been generated by evolution? I don't have the foggiest idea. But it's more than you think it is, at least if you think that biological information is something above and beyond shannon information (You have to think that, by the way, otherwise Egnor's question/challenge is nonsensical). Because the information differential between mosquitoVersion1 and mosquitoVersion2 is more than just an amino acid change. It now *does* something different. A process is born, one involving several components. There is that gene--the one that's been altered--but it will also have a support system of transcription factors, etc, that regulate its expression. They are all oriented now within the context of a new process, which is to provide insect resistance. 
To accurately assess the information attained by that single amino acid change, we'll have to weigh in the added information value of the specification of "mechanism of insecticide resistance" and however that breaks down into primitive components. We haven't really been told how to do that exactly. Most evolutionary changes are probably similar in kind to this one. The material substrate change is modest, but the "interpretation" of the substrate change by nature may or may not be more dramatic. It's like that optical illusion where two faces are staring at each other. Or maybe it's a single vase, depending on what you saw first. Sometimes you see it one way, sometimes another. You might add a little splotch for an eyespot, though, and suddenly the faces image is impossible not to see. It's the context **in your mind** that matters. The interpretation. And here the weird thing is that the interpreter is nature itself. Mutations in the organism are (I'm hoping to flesh out this analogy in the near future, so bear with me) a sort of Rorschach test for nature itself. What patterns are in the mind of nature that determine what it sees and how it interprets the splotches? The patterns consist of the complex fitness landscape that I am always ranting about here. That is the "mind," if you will, of nature. It is the landscape of possibilities it can "imagine" as being viable. We don't understand the landscape well enough to even begin to calculate something like CSI, even if such a thing could be calculated. That is why Egnor's question can't be answered in the fashion that he demands. But we know when a new feature or process has been generated. And evolution conferring resistance to a pesticide is a positive instance of some kind of increase of *something.* I can't tell you the magnitude of that something, but I'm confident that you can't either.
So, in light of the fact that formal calculations are impossible to make, and in light of the fact that the timescales involved for things potentially more interesting can't be observed (and you know that if he tried to argue by reconstructing historical data, everyone would cry foul because maybe agency was involved back then, or something), I think Myers' answer is among the best possible under these constraints.

great_ape
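Editorial aside: great_ape's "it's more than you think" claim can at least be bounded in bare Shannon carrying-capacity terms, using the 2-bits-per-nucleotide convention that also appears later in this thread. A minimal sketch:

```python
import math

# Bare Shannon carrying capacity of one DNA site: 4 equiprobable bases.
bits_per_site = math.log2(4)

# A single nucleotide substitution alters exactly one site, so in raw
# carrying-capacity terms it can account for at most this many bits.
# Any "extra" information great_ape points to must come from context
# (the fitness landscape), not from the substrate change itself.
print(bits_per_site)  # 2.0
```

This quantifies only carrying capacity, not meaning; the thread's whole dispute is over whether the contextual, functional component can be quantified at all.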
March 2, 2007, 05:39 PM PDT
Hi guys; I was waiting for the future thread, but it does not seem to be forthcoming. I'll go ahead and post my response to gpuccio here this evening after $work if a new thread does not arise by then...

great_ape
March 2, 2007, 12:12 PM PDT
Hi Gpuccio: Let's wait for that onward thread . . . and let's note where it is here whenever it appears. You make an interesting note:
the concept itself of body plan in multicellular beings is almost inconceivable if not in terms of design. One of the interesting aspects of the body plan (indeed, plan!), meaning not only the general form of the body, but also the detailed spatial and functional organization of parts at many levels and sublevels, of segments in the body, of organs in the body parts, of subparts in each organ, and so on…, is that nobody can say where it is written . . . . mutations in the homeobox genes certainly can macroscopically derange the normal order of gross body parts, but that does not mean that homeobox genes are the repository of the body plan. [I]t seems obvious that the realization of a complex macroscopic body plan, realized by coordinating the growth, differentiation and spatial placement of billions of individual cells, cannot be realized unless a tremendous work of regulation, control, error management, continuous transcription fine tuning, and so on, is accomplished under the guide of precise information about the final result to be obtained.
This is indeed a major issue, especially in light of the point made by Meyer through citing McDonald:
McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.6
We are dealing with highly complex, tightly integrated, functional processes that are seriously information-driven, and information-controlled [error detection and correction . . .]. The architecture of the information system we see in outline before us, the languages used to code the software side, the nanotechnology of the genetic and related functional molecules, the fact that this constitutes a self-replicating automaton, etc., all point to vast informational complexity and finely balanced coupling [fine-tuning . . .]. When I work with telecommunication or information-processing devices, systems and networks, and see similar coupling, integration and complexity -- in some cases they are species within the same genus [two-state digital bit strings in RAM or on hard drives, four-state digital GCAT strings in DNA, etc.], save that the life systems are vastly more sophisticated -- I cannot consistently infer to design in the one case and dismiss design in the other just because it is not in line with the views and agendas of certain key individuals and institutions. I refuse to swallow a camel whilst straining out gnats. So, let us see what the onward discussion brings out. And, it is indeed a pleasure to discuss with Great_Ape, as he has been both civil and serious. Kudos to him. (I would indeed like to see his answer . . .) All the best, GEM of TKI

kairosfocus
March 2, 2007, 02:37 AM PDT
kairosfocus: "looking forward to onward discussion" Me too! You have introduced, in your last post, a very important subject, speaking of the problem of body plans. Of course the Cambrian explosion remains, in spite of all the meager attempts of darwinists to bypass it, one of the biggest problems for those who believe in a gradual, step-by-step unguided evolution. Anyway, the concept itself of body plan in multicellular beings is almost inconceivable if not in terms of design. One of the interesting aspects of the body plan (indeed, plan!), meaning not only the general form of the body, but also the detailed spatial and functional organization of parts at many levels and sublevels, of segments in the body, of organs in the body parts, of subparts in each organ, and so on..., is that nobody can say where it is written. The recent trend is to explain the body plan in terms of a few genes, usually the homeobox group or similar, because we know, essentially from experiments in drosophila, that mutations in those genes change the order of body segments. The usual, quick conclusion is that we have found the few genes which control the body plan. And, again, the quick answer is wrong. The tendency to explain complex engineering in terms of single genes and proteins is perhaps the most irritating assumption of all darwinist thought. Proteins are evidently the final effectors of a complex process, and it is obvious that if we modify the final effector, the outcome of the whole process is significantly modified. But that does not mean that the whole process is determined by the final effector. So, mutations in the homeobox genes certainly can macroscopically derange the normal order of gross body parts, but that does not mean that homeobox genes are the repository of the body plan.

It seems obvious that the realization of a complex macroscopic body plan, realized by coordinating the growth, differentiation and spatial placement of billions of individual cells, cannot be realized unless a tremendous work of regulation, control, error management, continuous transcription fine tuning, and so on, is accomplished under the guide of precise information about the final result to be obtained. And that process is very likely controlled not only by proteins, but by RNA itself, and it seems almost certain that non-coding DNA has a key role in that. Most of these questions are at present completely unanswered, and while the enthusiasm for the evo-devo approach may be justified, the general triumphalism about homeobox genes explaining everything is really laughable. great_ape: "I consider gpuccio's post an understandable objection to Myers, but not one that is unanswerable" Looking forward to your answer... It is a pleasure to discuss with you.

gpuccio
March 1, 2007, 01:36 PM PDT
Hi folks: Great to see a little life left in the thread. Gpuccio has given us a great cluster of posts above. I look forward to any continuation . . . I cannot but observe that a single mutation that causes loss of information and is apparently associated with diminished functionality of the mosquito, seems to be two orders of magnitude below the level that begins to count as complex in the relevant sense:
* 1 four-state element: ~2 bits of information-carrying capacity. [That is, I am adverting to Shannon and Hartley etc.]
* 250 such elements: ~500 bits, with a config space ~10^150.
* 1/250 < 0.01, i.e. we are two orders of magnitude down on information-carrying capacity here.
* Further to this, the biofunctional result is premised on in effect damaging a gene and causing information loss, i.e. it is not relevant to the creation of novel, incremental biofunctional information that leads to emergence of new capacities.
* This supports the contention in the Meyer paper that random changes in DNA linked to core body functions are more likely to be destructive than creative.
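The carrying-capacity figures in the list above can be checked directly; a minimal sketch using only the standard library:

```python
import math

# One four-state (GCAT) element carries log2(4) = 2 bits.
bits_per_element = math.log2(4)

# 250 such elements give 500 bits of carrying capacity.
total_bits = 250 * bits_per_element

# Size of the corresponding configuration space, 4^250,
# expressed as a power of ten.
log10_configs = math.log10(4 ** 250)

print(total_bits)     # 500.0
print(log10_configs)  # ~150.5, i.e. a config space of roughly 10^150
```

This is pure counting of configurations (Hartley/Shannon capacity); it says nothing by itself about which configurations are functional.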
It is worth excerpting and highlighting that peer-reviewed paper:
In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types [nb perhaps 50 or more], but also for the origin of new body plans [nb on the reported order of dozens at phylum and sub-phylum levels] . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
PZM, in short, has evidently given us a strawman counter-example; about par for the course in light of my observations on this point over the past few years; perhaps inadvertently -- I think there is a MAJOR communication gap here. (I note too that Mr Egnor aptly complained that a literature count is not an answer to his actual question, which is on the information-generating capacity of RM + NS.) Okay, looking forward to onward discussion. GEM of TKI

kairosfocus
March 1, 2007, 01:29 AM PDT
"...guided by questionable leaders like Dawkins and similar, have constantly refused, in the last ten years, even to admit that there is a challenge, much less a problem." ==gpuccio Hi guys, I thought this thread had gone dead. Would look forward to beginning afresh in another top-level post. I just wanted to point out that we have no leaders in science. We have the occasional mouthpiece. The closest thing to leaders are those influential old-timers in various subdisciplines who can influence how money gets distributed. That's the real power behind what gets pursued. I consider gpuccio's post an understandable objection to Myers, but not one that is unanswerable. I have a reply regarding the apparent simplicity of the mosquito example. But again it delves into definitions of complexity and information, and it's probably best we begin a new thread.

great_ape
February 28, 2007, 05:23 PM PDT
Patrick: Of course, if you want to turn my recent comments into a blog post, I am happy with that. Regarding your question about the fitness cost in the mosquito example, an answer can be found in the abstract of the first article linked in my previous post: "Resistance ace-1 alleles coding for a modified AChE1 were associated with a longer development time and shorter wing length."

gpuccio
February 28, 2007, 03:13 PM PDT
gpuccio, This blog post has gotten buried for a while. Would you mind me turning your recent comments into a blog post?
In the mutated gene (less functional than the original one) by sheer luck (sometimes it happens) the insecticide cannot act.
Sounds a lot like the nylon bug always being touted. In the case of the nylon bug, information was lost and the new enzyme was many times less efficient than its precursor, making the minor advantage null.
1. The bug went from 100% efficiency to 2% efficiency in metabolizing.
2. The bug lost genetic info as a result of a frameshift.
3. The bug has a lower reproductive rate and efficiency.
4. The bug cannot survive amongst the parent species.
5. The bug acquired no functional divergence.
An increase of information requires functional divergence without information loss. Going from metabolic function to metabolic function is not considered functional divergence. Going from, say, a sequence that codes for a metabolic function to a sequence that codes for oxygen transport would be considered "functional divergence." Short on time... but what is the loss in functionality for this mosquito example?

Patrick
February 28, 2007, 09:43 AM PDT
gpuccio, GREAT POSTS!!!!!!

tribune7
February 28, 2007, 06:33 AM PDT
gpuccio, Thanks for the excellent answer. You, GEM and great_ape are great assets for anyone trying to understand these issues.

jerry
February 28, 2007, 05:55 AM PDT
great_ape (#170): "The legitimate question IMO is whether this type of information increase is trivial in comparison to some other necessary type/quantity you think is meaningful." I think you can find some answer to your legitimate question in my previous post about PZ's "answer" to Egnor. In it I analyze the specific example cited by PZ out of the "thousands", and I think it should be obvious that the "new information" PZ and friends are speaking of is not CSI, and not even near it. So, if all the thousands of examples cited by PZ are of that kind, Dembski's affirmation that CSI can never increase by random mechanisms remains unchallenged. I don't understand all the enthusiasm of darwinists about gene duplications. Gene duplication, if it is random (which remains to be demonstrated), is anyway only a mechanism which does not create new information; in the best scenario it could help by allowing the old information to be retained while new "attempts" are made, or by reducing the cost of the loss of information in cases like the mosquito gene discussed in my previous post. But the answer which should be given and nobody gives, the answer which Egnor has repeatedly requested without success, the answer which is not contained in the mosquito example, and I bet in none of the thousands of articles cited by PZ, is the answer to the following question: "How is CSI supposed to be created by random forces, including natural selection?" Gene duplication is no answer. HGT is no answer. Somebody has to show a model of how a sequence of, let's say, 200 amino acids, which has a specific enzymatic function, may have been generated randomly from some condition where that information was not present in any form. If we have a step-by-step model, we can calculate its plausibility, in terms of probability, resources, intermediate function of each step mutation which should be selected by a reproductive advantage, and so on.

No wonder darwinists have never produced such a hypothetical detailed model, not even for one protein, because otherwise the impossibility of their theory would immediately be evident to everybody. One last note about authority. I am not a big fan of authority, especially in science. Authority is good only in the measure that it can be challenged. And we, in the ID movement, are challenging it, indeed! The authority you speak of, besides, is the authority of a majority of scientists who, guided by questionable leaders like Dawkins and similar, have constantly refused, in the last ten years, even to admit that there is a challenge, much less a problem. It is the authority of those who, knowing they are the many, can denigrate the few who have different views, without even trying to understand their reasons. The authority of those who believe in the use of force and not in the confrontation of ideas. That kind of authority speaks for itself. It's a bad authority, the worst we can imagine.

gpuccio
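gpuccio's 200-amino-acid example can be given a rough combinatorial scale. The sketch below is bare arithmetic on the unconstrained sequence space (20 residues per position), not a biological probability estimate, since it ignores how many sequences in that space might be functional:

```python
import math

# An enzyme of ~200 amino acids, 20 possible residues per position:
# the unconstrained sequence space has 20^200 members.
log10_space = math.log10(20 ** 200)

print(log10_space)  # ~260.2, i.e. roughly 10^260 possible sequences
```

Whether this number is the right denominator is exactly what the thread disputes: a step-by-step selective path, if one exists, would not have to search this space blindly.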
February 27, 2007, 11:27 PM PDT
Oh, well, I must admit that, reading better PZ's rants about Egnor, I have discovered that he has indeed given a specific answer to Egnor, and a very brilliant one! In the words of PZ: "In addition to showing that PubMed lists over 2800 papers relevant to his question, I singled out one: an analysis that showed that insecticide resistance in mosquitos was generated by a mutation of an acetylcholinesterase gene, and that they also had a duplication of the gene—this is a classic example of how to generate new information. Duplicate one gene into two, and subsequent mutations in one copy can introduce useful variants, such as resistance to insecticide, while the original function is still left intact in the other copy." OK, now we know! All our debates about CSI are stupid, and not only because "over 2800 papers" tell us that it is that way out of sheer authority, but because PZ Myers has demonstrated it with a single case of irrefutable evidence. You may say that a single case is not much, but I don't agree. A fact is a fact, after all, and I have always stated, even in this blog, that a single fact can well falsify a whole theory. But... Let's look more closely at the fact (after all, it is only one; we can take the time to verify it). The idea is that mosquitos become resistant to insecticides by a mutation in a gene which is also duplicated to keep the original one (excuse me, PZ, for my pseudo-teleological language). Well, I checked. The gene in question is AchE1, one of the genes in the mosquito which code for an acetylcholinesterase, exactly the acetylcholinesterase which is the target of organophosphoric (OP) insecticides. Well, without discussing the problem of gene duplication, whose role is only to reduce the fitness cost of the mutation, let's see what is the mutation which, in PZ Myers' words, is "a classic example of how to generate new information".

I have checked again (thanks to the internet) and here it is: a single base-pair alteration, G119S, within the mosquito's version of the AchE1 gene confers high levels of resistance to these insecticides. A single base-pair alteration? Is that the new information which, in PZ Myers' opinion, discredits all our debates about CSI? Is that the brilliant answer to Dr. Egnor? Yes, it is. PZ Myers' ignorance of the problem we have been discussing here is simply astonishing, matched only by his arrogance. Just to be clear to those who may not be familiar with the problem, we are speaking of a single nucleotide mutation with partial loss of a pre-existing function (the AchE1 gene), which happens to be the target of OP insecticides. In the mutated gene (less functional than the original one) by sheer luck (sometimes it happens) the insecticide cannot act. It is exactly the same model as antibiotic resistance by single mutation with loss of function. See the very good article about antibiotic resistance linked from this blog, if you are interested to know more. In other words, what is the complexity (not the specification!) of this "new information"? I am not a mathematician, but it should be something like this: for a single nucleotide substitution, with four possible nucleotides, the probability is about 1 in 3 times the length of the mosquito's genome (or three times higher, if any substitution at that site will do). It is not a low probability. After all, we are talking of a single nucleotide substitution: it may happen. It happens. It is not complex. It is not even specified, because it does not build any new information; it just partially destroys the information which is already there.

The advantage is indirect, depending on the loss of an interaction between a very specified and complex molecule (acetylcholinesterase) and a very specified (but not too complex) molecule (the OP insecticide), which, by the way, has been designed by an intelligent agent (man) to get rid of mosquitos through the knowledge of the mosquito's intelligently designed information. Random mutation merely interrupts that intelligently designed interaction. That's what random mutation always does in complex systems. By the way, here are the links to some material about that: http://bpi.sophia.inra.fr/topics/perso/guillemaud/bourguet2004.pdf http://www.beyondpesticides.org/news/daily_news_archive/2004/08_06_04.htm

gpuccio
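gpuccio's single-substitution arithmetic above can be sketched numerically. Note that the genome length below is an illustrative round placeholder chosen for this sketch, not a figure taken from the thread or from the cited papers:

```python
# gpuccio's back-of-envelope: a specific single-nucleotide substitution
# is one of 3*L possible single-base changes in a genome of length L
# (each of L sites can mutate to one of 3 alternative bases).
L = 280_000_000              # ASSUMED genome length, for illustration only
variants = 3 * L             # number of distinct single-substitution genomes
p_specific = 1 / variants    # chance that one random substitution is
                             # exactly the resistance-conferring change

print(variants)              # 840000000
```

The point of the arithmetic is scale: a probability of order 1 in 10^9 is reachable by ordinary mutation rates across large populations, which is why gpuccio concludes "it may happen. It happens."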
February 27, 2007, 11:22 PM PDT
PS: Is the "surrender with dignity" beginning? Cf this latest thread and the onward linked. James Shapiro's Abstract:
ABSTRACT: 40 years experience as a bacterial geneticist have taught me that bacteria possess many cognitive, computational and evolutionary capabilities unimaginable in the first six decades of the 20th Century.Analysis of cellular processes such as metabolism, regulation of protein synthesis, and DNA repair established that bacteria continually monitor their external and internal environments and compute functional outputs based on information provided by their sensory apparatus. Studies of genetic recombination, lysogeny, antibiotic resistance and my own work on transposable elements revealed multiple widespread bacterial systems for mobilizing and engineering DNA molecules. Examination of colony development and organization led me to appreciate how extensive multicellular collaboration is among the majority of bacterial species. Contemporary research in many laboratories on cell-cell signaling, symbiosis and pathogenesis show that bacteria utilize sophisticated mechanisms for intercellular communication and even have the ability to commandeer the basic cell biology of “higher” plants and animals to meet their own needs. This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings.
Key excerpt:
. . . My own view is that we are witnessing a major paradigm shift in the life sciences in the sense that Kuhn (1962) described that process. Matter, the focus of classical molecular biology, is giving way to information as the essential feature used to understand how living systems work. Informatics rather than mechanics is now the key to explaining cell biology and cell activities. Bacteria are full participants in this paradigm shift, and the recognition of sophisticated information processing capacities in prokaryotic cells represents another step away from the anthropocentric view of the universe that dominated pre-scientific thinking . . .
Of course, I think SC is right to suggest that there is an open question on the conscious agency of bacteria [which JS suggests . . . !], but they certainly show sophisticated information processing that in other contexts we would not hesitate to term at least weak-form AI. Have a read, and have a think, as Sal suggests. GEM of TKI

kairosfocus
February 26, 2007, 11:18 PM PDT
Oops: I should have used b's, not a's, on the word "distinction" above. Forgive my error, and the occasional missed typos. GEM of TKI

kairosfocus
February 26, 2007, 10:41 PM PDT
H'm Re: MEANING Great_Ape has stimulated me to think and dig in a bit: if “design detection” research makes any headway into better formalizing the concept of “meaning,” then I find that worthwhile and interesting. That is, first, one of the advantages of serious dialogue over manipulative debate: mutual stimulation to clarification and development of ideas. Now, first, one of my favourite classical authors has spoken to this issue aptly:
Even in the case of lifeless things that make sounds, such as the flute or harp, how will anyone know what tune is being played unless there is a distinction in the notes? Again, if the trumpet does not sound a clear call, who will get ready for battle? So it is with you. Unless you speak intelligible words with your tongue, how will anyone know what you are saying? You will just be speaking into the air. Undoubtedly there are all sorts of languages in the world, yet none of them is without meaning. If then I do not grasp the meaning of what someone is saying, I am a foreigner to the speaker, and he is a foreigner to me . . . [Paulo, Apostolo, Mart, 1 Cor 14:7 – 11, c. 55 AD] In short, meaning is bound up with: 1] A distinct set of symbols from the vocabulary of a code, each of which in context makes a possible difference, and which collectively are common to the source and the receiver of the message involved. (NB: this highlights one difference between a general communicative and a specifically educational situation – in the latter, one has to find common ground for communication, but the primary goal is then to teach/learn.) 2] A characteristic pattern of sources, encoding, messages [using symbols from that common set of distinct possibilities], channels, interfering noise, receivers, decoding, sinks, and responsiveness – this last including feedback. 3] This brings to the fore the issue of the inherent inference to design involved in reception and decoding of a putative, complex message. In the face of the possibility of confusion caused by noise, the acceptance that a noise-influenced signal is a message is an inference to design [and one we routinely make], precisely because we see that the message is functional relative to the communicative context, and is sufficiently complex that we are inclined to accept it as message not “lucky noise.” 4] Such functionality, brings to the fore another feature of messages:
[In the context of information processing systems] information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]
5] Note here, that we make a difference between the difference-making functionality of messages, and the fundamentally epistemological issues of warrant and credibility, thence "wisdom" -- the art of proper and successful use of credible, well-warranted knowledge and insights. [But note that in ICTs and in the natural world, we see many cases of input, processing and output based on essentially algorithmic patterns, which make a survival/thriving difference, raising the issue of their common origin in intelligent agency, as Trevors and Abel etc discuss.] 6] Thence, we further see that: we here introduce into the concept, information, the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages -- the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments. (And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems.) Can this "meaningfulness" qualitative feature of messages be formalised mathematically? Can it be quantified [not quite the same thing . . .]? Most relevantly: Is that a proper criterion of success for design theory? Shannon, of course, explicitly disavowed intent to attempt such quantification, successfully holding that the quantification of information-carrying capacity [in bits etc] was enough for the technological purposes he had in mind. Likewise, in today's I[C]T-rich world that has built on his work, we are able to use the functionality of information to manage quite well while leaving the issues of warrant, meaningfulness and wisdom to the intelligent decision-makers who use the information we process and communicate.
But also, we observe that the mere observed functionality within information communication and processing systems of complex messages in the face of the possibilities of noise, is enough to credibly mark such signals as artifacts of agency. And, that is in fact the declared – and crucially difference-making -- objective of design theory. In Dembski's phrasing:
intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? . . . Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence.
So, while it would be nice to develop such a formalism for meaningfulness, that is not the major current purpose of design theory, nor is it a necessary step to its current declared objectives [and its utility]. For, from the mere functionality of recognised complex information, we may freely infer on a best explanation basis to agency as its source. Then, we may set out to reverse engineer the systems viewed as information systems, making advantageous use of our findings. Cheerio, GEM of TKI

kairosfocus
February 26, 2007, 10:37 PM PDT
Of course, Dembski has claimed, if I'm not mistaken, that CSI can only *decrease* or at most be preserved. So even conceding this minor issue of gene duplication with specialization is a big deal to some.
He has three categories that account for modification of CSI inherent in biological systems: (1) Inheritance with modification (2) Selection (3) Infusion Brief quote for 1:
Inheritance is thus merely a conduit for already existing information. [. . .] By modification I mean all the instances where chance enters an organism's developmental pathway and modifies its CSI. Modification includes–to name but a few–point mutations, base deletions, genetic crossover, transpositions and recombination generally.
Intelligent Design; page 177. I wish there were eBook versions of ID literature... it would make finding info so much easier.

Patrick
February 26, 2007, 07:02 AM PDT
H'mm: SNOWFLAKES: Having just blown my big sister J a Hershey's kiss-tinged smack from 1,000 miles away for a particularly nice email [a real tweetie . . .], let me be short & sweet here. Snowflakes form under boundary conditions that lend themselves to complexity, but are constrained by the bonding properties of the H2O molecule, and indeed there is a saying that no two snowflakes are alike. But, that is just the point: it would be hard indeed to set up an experiment to replicate the precise shape and size of a given snowflake, whether by random search or by sophisticated experimental manipulation. In short, once the configuration space gets big enough, specification becomes highly elusive to random searches. That directly makes the design inference point: once we see a functionally specific configuration, in a context of sparseness of such configurations in a large enough configuration space, then agency is – absent compelling reasons to think otherwise – the logical explanation. And, worldview level question-begging is plainly not sufficient to be compelling. Tweet tweet . . . GEM of TKI
kairosfocus
February 26, 2007, 05:00 AM PDT
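The point about random search in large configuration spaces can be sketched numerically. This is an editor's illustration, not part of the thread: the function name, trial counts, and space sizes are my own. It samples uniformly at random and counts how often one fixed target state is hit as the space grows.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def random_hits(space_bits, trials=100_000):
    """Count how often uniform random sampling lands on one fixed target
    state in a space of 2**space_bits configurations."""
    target = 0  # any fixed state will do; sampling is uniform
    space = 2 ** space_bits
    return sum(1 for _ in range(trials) if random.randrange(space) == target)

# Hits collapse toward zero as the configuration space grows:
for bits in (4, 8, 16, 32):
    print(bits, random_hits(bits))
```

At 4 bits the target is found thousands of times in 100,000 tries; at 32 bits it is essentially never found, even though 2^32 is minuscule next to the 2^500 spaces discussed in this thread.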
Kairosfocus -- thanks for the link to Johnson's paper!
tribune7
February 26, 2007, 04:47 AM PDT
Hi All: A few more remarks are in order (and again, thanks for the kind words and generally civil tone on both sides): 1] On Egnor's [still unmet] challenge: I think it is unfortunate that he made a veiled reference to bovine scatology [as was promptly picked up . . .], but note that his core challenge still stands:
How much new specified information can random variation and natural selection generate? Please note that my question starts with ‘how much’- it’s quantitative, and it’s quantitative about information, not literature citations . . . . Duplication of information isn’t the generation of new information. No one doubts that living things can copy parts of themselves. You have presented no evidence that the process of (slightly imperfect) copying is the source of all that can be copied and the source of what actually does the copying . . . . There is obviously a threshold of the information-generating power of RM + NS . . . . So what’s the threshold, quantitatively?
It is telling that, for all the literature bluffing on gene duplication etc., there is still no direct response on the point from the advocates of evolutionary materialism. [In the Time Mag thread, he notes that if he asked similar questions about physical parameters, he would get a prompt response. He would, too.]

2] What is the threshold: I will take a brief stab at explaining the contrast. 500 bits is a smallish quantum of digital storage, hardly a blink in today's Windows world of 1 – 2 Gigabyte RAMs! (As one still hankering after Macs and Amigas, and looking to Linux to change the world, I cannot resist this one: Windows, even in Vista, is "living proof" that design and optimality are two very different questions! But good enough can be very successful, as the success of Mr Gates' software amply demonstrates.) And yet, 2^500 ~ 3.27*10^150. In short, once we are at about 500 bits or more of storage, if we are looking for a unique state in the configuration space, it is credible that a random search, or a search reducible to such a search, will not reach that state. If we deal with islands and archipelagos of such states [which are, for argument, viewed just now as sufficiently tightly spaced that once we reach the first island, we can freely walk around], once the islands are sufficiently sparse by a similar criterion, we end up in the same position: we cannot get to the first island from an arbitrary start-point.

Now, existing life forms have ~ 300 – 500k up to 3 – 4 bn DNA elements, where each G/C/A/T 4-state monomer therefore stores up to two bits; and where there is good reason to infer that the lower end is if anything a bit too small for a lifeform that is independent of other forms providing key nutrients. This is three or more orders of magnitude up from the 500-bit threshold; i.e. there is strong evidence here that OOL studies are pursuing a task that is in fact beyond the probabilistic resources available in plausible pre-biotic environments.
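The threshold arithmetic above can be checked directly. This short sketch is an editor's illustration: the variable names are mine, and the genome sizes are simply the figures quoted in the comment.

```python
import math

# 500 bits of storage -> number of distinct configurations
# (the comment cites ~3.27*10^150)
print(f"2^500 ~ {2 ** 500:.3e}")

# Each DNA base (G/C/A/T) is a 4-state element: log2(4) = 2 bits per monomer.
bits_per_base = math.log2(4)

# Genome sizes quoted in the comment: ~300k-500k bases at the low end,
# up to 3-4 billion bases at the high end.
low_genome_bits = 500_000 * bits_per_base          # 1.0e6 bits
high_genome_bits = 4_000_000_000 * bits_per_base   # 8.0e9 bits

# Even the low end sits three-plus orders of magnitude above 500 bits,
# as the comment claims.
print(f"low end vs 500-bit threshold: {low_genome_bits / 500:.0f}x")
```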
Further to this, once we move to, say, the Cambrian life revolution, we are looking at a need to get to dozens of body plans, requiring dozens of new cell types, with associated epigenetic structures etc. As Meyer et al have pointed out, that credibly means looking at moving up to the 180 million storage-unit DNA code found in a typical modern arthropod or the like, several times over, within a fairly short compass of time and space – even if we accept the claims that invisibly the DNA was producing the required variety for a billion years or so ahead of the revolution. Such increments in information, a fortiori, are well beyond the 500-bit threshold. And they do not even begin to address the issues of creating the DNA's code, the associated integrated mechanisms that implement it, and the required algorithms to do so. In short, both the OOL and macroevolution scenarios proposed by the evolutionary materialists are well beyond the probabilistic resources of the observed universe. [And, if they resort to the infinite array of sub-universes scenarios to expand the resources, they have shifted subject to metaphysics, and have no right to suppress discussion of the alternative, design.]

3] Above: "IMO no one has the moral high ground here." This is a disappointing resort to the ethical fallacy of [im]moral equivalency. On my long observation, the leading design theorists [and even the leading YECs, for that matter], and most of those who follow them, do not routinely resort to the attack to the man or to the strawman as their basic and first resort. By sharpest contrast, in my experience in other less regulated blogs, including major ones I could name, such is unfortunately the routine rhetorical resort of evo mat advocates.
When I have followed up links to major evo mat sites, I have seen that in this they are following the policy of a great many of the leading advocates of NDT, of which PZM, Dawkins, the NCSE, and Forrest are unfortunately all too typically representative. Further to this, as the Sternberg case shows, this uncivil attitude also spills over into career-busting behaviour and unjustified trashing of professional reputations. [Indeed, in part my unlurking here is in the context of a pattern in a major blog over in the UK, in which Mr Bradley, Mr Behe, Mr Dembski, Mr Minnich, etc. were all attacked in this vein instead of dealing fairly with their case on the merits; I have linked to this thread from there to show the contrast. Cf. here Johnson's recent paper.] That is not a pattern of moral equivalency . . .
___________
Okay, can we speak to the merits of the matter raised by Mr Egnor; perhaps showing me where my issues above [and earlier expressions, such as in ch. 13 of Denton's 1985 book, Evolution: A Theory in Crisis] miss the mark, if they do? GEM of TKI
kairosfocus
February 26, 2007, 03:10 AM PDT
"But do you think that the design detection methods of ID are useful in general for other things?" -- Patrick

In principle, being able to accomplish what the method intends to do, I could see contexts where this ability could be useful (SETI, cryptography maybe?). I am not in the best position to judge, though. I leave it to my colleagues who are mathematicians, computer scientists, etc. So far there does not seem to be much excitement in academia. (This despite the fact that in some circles (e.g. the Chomsky crowd) Darwinism is not politically correct enough to be acceptable.) Theoretically, for me, if "design detection" research makes any headway into better formalizing the concept of "meaning," then I find that worthwhile and interesting.
great_ape
February 25, 2007, 01:49 PM PDT
gpuccio, Let me clarify one point. I'm not defending P.Z.'s general approach to these discussions. I'm not a fan. There are, however, just as many folks on the ID side of the aisle who are given to ranting, rhetoric, and recourse to authority. They just use different authorities. IMO no one has the moral high ground here. I haven't even read all of P.Z.'s responses to Egnor, aside from his post "Dr Michael Egnor challenges evolution!". That post, in P.Z. terms, is downright tactful.

I just happen to agree with him that numerous instances of gene duplication followed by specialization represent legitimate cases of information increase of the most relevant kind to this discussion. (This against a backdrop of gene duplications that *don't* specialize, atrophy, and are ultimately lost.) You may question those data and whether they are true evolutionary accomplishments, but you'd have a lot of data for which to come up with an alternative (and valid) explanation. The legitimate question IMO is whether this type of information increase is trivial in comparison to some other necessary type/quantity you think is meaningful. Of course, Dembski has claimed, if I'm not mistaken, that CSI can only *decrease* or at most be preserved. So even capitulating this minor issue of gene duplication with specialization is a big deal to some.

The citing of articles is an appeal to authority, but it is the authority of peer-reviewed data and interpretation. (I.e., the sort of authority we use in scientific discourse on a regular basis; and when you opt to buck authority, you had better have an arsenal of data and analysis at your disposal to overturn the established view.)
great_ape
February 25, 2007, 01:40 PM PDT
great_ape, Let's see if we can get to a starting point that we agree upon. The main contention of ID is of course that what we see in biology was designed in some fashion. But let's assume for a second that biology is a special case like I noted above and that this contention is incorrect. Obviously most people would lose interest in ID if that occurred. But do you think that the design detection methods of ID are useful in general for other things?
Patrick
February 25, 2007, 12:52 PM PDT