Uncommon Descent Serving The Intelligent Design Community

genetic-id, an instance of design detection? (topic revisited)


(In an effort to help my IDEA comrades at Cornell I revisit the issue of Genetic-ID. My previous post on the issue caused some confusion so I’m reposting it with some clarifications. I post the topic as something I recommend their group discuss and explore.)

The corporation known as Genetic-ID (ID as in IDentification, not ID as in Intelligent Design) is able to distinguish a Genetically Modified Organism (GMO) from a “naturally occurring” organism. At www.genetic-id.com they claim:

Genetic ID can reliably detect ALL commercialized genetically modified organisms.

I claim that detecting man-made artifacts (like a GMO) is a valid instance of applying the Explanatory Filter.

The Explanatory Filter is used all the time (implicitly):

The key step in formulating Intelligent Design as a scientific theory is to delineate a method for detecting design. Such a method exists, and in fact, we use it implicitly all the time. The method takes the form of a three-stage Explanatory Filter.

I want to emphasize, the Explanatory Filter (EF) is used ALL the time. When ID critics say the EF has never been used to detect anything, they misrepresent what the EF is, because the EF is used ALL the time.

The Explanatory Filter faithfully represents our ordinary practice of sorting through things we alternately attribute to law, chance, or design. In particular, the filter describes

how copyright and patent offices identify theft of intellectual property
….
Entire industries would be dead in the water without the Explanatory Filter. Much is riding on it. Using the filter, our courts have sent people to the electric chair.

(bolding mine)

When we detect design in a physical artifact, we detect the Complex Specified Information (CSI) the artifact evidences. That means we see that a physical artifact conforms to an independent blueprint.

In Bill Dembski's book, No Free Lunch (NFL), the concept of CSI is formalized. CSI is detected when the information from a physical artifact (physical information) conforms to an independent blueprint or conception (conceptual information). CSI is defined as:

The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex.

It is important to note that CSI is defined by two pieces of information, not just one:

CSI is consistent with the basic idea behind information, which is the reduction of possibilities from a reference class of possibilities. But whereas the traditional understanding of information is unary, conceiving of information as a single reduction of possibilities, complex specified information is a binary form of information. Complex specified information, and specified information more generally, depends on a dual reduction of possibilities, namely a conceptual reduction (i.e., conceptual information) combined with a physical reduction (i.e., physical information).

Genetic-ID uses PCR (polymerase chain reaction) to detect whether an organism has physical characteristics (physical information) which match a known blueprint (conceptual information) for a GMO. This is a relatively simple case of design detection since the pattern matching method is exact and highly specific. Genetic-ID’s technique is a somewhat trivial example of design detection, but I put it on the table to help introduce the concept of the Explanatory Filter in detecting designs at the molecular level.
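The exact-match screening described above can be sketched in a few lines. The marker sequence and name below are invented for illustration (real assays commonly target published construct elements such as the CaMV 35S promoter region); this is a toy sketch of the matching step, not Genetic-ID's actual method.

```python
# Toy sketch of exact-match GMO screening: scan a sample's DNA for
# known transgene markers. The 20-base marker below is hypothetical,
# invented purely for illustration.
KNOWN_GMO_MARKERS = {
    "hypothetical-transgene-A": "ATGGCCTAACGTTCGATCCA",
}

def screen_sample(sample_dna: str) -> list[str]:
    """Return the names of all known markers found in the sample."""
    return [name for name, marker in KNOWN_GMO_MARKERS.items()
            if marker in sample_dna]

sample = "GGGT" + "ATGGCCTAACGTTCGATCCA" + "CCAT"
print(screen_sample(sample))  # -> ['hypothetical-transgene-A']
```

Because the target pattern is given in advance by the designer, a positive match is about as unambiguous as design detection gets.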

But how about less specific pattern matches to detect GMOs? Do you think we could detect a GMO such as this:

Data stored in multiplying bacteria

The scientists took the words of the song It’s a Small World and translated it into a code based on the four “letters” of DNA. They then created artificial DNA strands recording different parts of the song. These DNA messages, each about 150 bases long, were inserted into bacteria such as E. coli and Deinococcus radiodurans.

Or how about this kind of GMO, a terminator/traitor which does not have a published specific architecture: Terminate the Terminator.

Terminator technology (sometimes called TPS-Technology Protection System or GURTs-Genetic Use Restriction Technologies) refers to plants that are genetically engineered to produce sterile seeds. If commercialized, the technology will prevent farmers from saving seed from their harvest for planting the following season. These “suicide seeds” will force farmers to return to the seed corporations every year and will make extinct the 12,000-year tradition of farmers saving, adapting and exchanging seed in order to advance biodiversity and increase food security.

Extending these ideas, can we in principle detect nano-molecular designs such as a nano-molecular computer? If we find a physical molecular artifact conforming to the blueprints of a computer, should we infer design?

With that question in mind, I point to the fact that biological systems are computers, and self-replicating computers on top of that! This fact was not lost upon Albert Voie who tied the problem of the origin-of-life to the fact that the physical artifacts of biology conform to a known blueprint, namely, a self-replicating computer. I commented on Voie’s landmark outline of the origin-of-life problem here.

Inasmuch as biology conforms to the blueprints of a computer, are we justified in inferring design? And finally, are not the claims of Darwinian evolution ultimately claims that blind watchmakers can create "Genetically Modified Organisms" (so to speak) from pre-existing organisms? What then do we make of Darwinian evolution's claims?

Comments

Dave, you're mischaracterizing hypermoderate's argument. A more accurate analogue would be as follows:

Given: Snukldorf is defined to include only things that sparrows can't make.
Therefore: if you see snukldorf, it wasn't made by a sparrow.

I submit that hypermoderate is correct, and that the tautological nature of the CSI argument has been pointed out by several people and never addressed by Dembski. You can do one of four things with this assertion:

1. Demand that I back it up with evidence.
2. Explain why it's incorrect.
3. Proclaim it wrong, but offer no explanation.
4. Censor it and ban me.

I'm hoping you'll take one of the first two options, but I suspect you'll go with #4.

You missed the fifth option. Pat you on the head and say "that's nice, sonny". -ds

secondclass
May 20, 2006

great_ape,

Your concern about sampling bias is a valid one. I would answer it this way: In science, we sometimes have to choose between two hypotheses which fit the data equally well. If the two hypotheses have a chance element, it makes sense to prefer the one which is more probable. If we choose the more probable explanation, we're more likely to be correct, although there is no guarantee. We might be wrong. In any case, we keep our eyes open and are prepared to modify or abandon our chosen hypothesis as more data comes in.

In Dembski's framework, there are two possible explanations for why we are here. One of them, as you suggest, is that a statistical fluke occurred which created CSI via sheer luck, leading to us. Another is that we were designed. Dembski would presumably argue that the second is overwhelmingly more probable than the first. Though you cannot rule out the first absolutely, you're far more likely to be right by betting on the second.

It reminds me of something I've thought about in connection with the multiverse hypothesis. If there really exists an infinitude of universes with differing physical constants, laws, and starting conditions, then presumably there exists a universe somewhere much like ours, but where every coin ever tossed has come up heads. Scientists there are convinced that there is some deep explanation for this regularity, but have been unable to find it. From our perspective we can say "You just got (extremely) lucky (or unlucky)."

Inside that universe, the right thing to do is to look for a deterministic explanation of the coin-tossing phenomenon, because the alternative is so improbable. From the outside, we know that the improbable alternative happens to be the true one.

Okay, that's it. Find somewhere else to babble. -ds

hypermoderate
May 20, 2006

great_ape wrote:
"I do not see any fundamental circularity in Dembski's argument. It basically boils down to 'if it's ludicrously unlikely that it was produced by unintelligent materialistic causes, it can be comfortably inferred that it was produced by some kind of intelligent agency.'"

Hi great_ape,

If that were Dembski's argument, there would be little to object to. There would also be nothing new, as that argument has been around since long before Dembski. To restate, it simply says "Everything is either at least partially designed, or it is undesigned. If one of these alternatives is extremely unlikely, the other is overwhelmingly likely." It's just an application of Aristotle's Law of the Excluded Middle.

What Dembski is trying to do is different. He's trying to introduce a concept, CSI, as an independent, reliable indicator of design. Find something with CSI, and you know it was designed. Salvador certainly interprets Dembski's argument this way, which is why he suggests that "some architectures are recognized by engineers as designed, and it’s only a matter of asking if a biological system conforms to our pre-conceived pattern and if the pattern can be shown to have 500 bits of information."

But the very definition of CSI requires that unintelligent causes be incapable of producing it, and so via the excluded middle something that has CSI is by definition designed. So to say that CSI is a reliable indicator of design is simply to say "Something that is designed can be reliably inferred to have been designed." Quite true, but also quite circular.

And we're left with exactly the same question we had before the concept of CSI was introduced, which is "Could natural selection (or other unintelligent causes) have produced the living structures we see around us today?"

Translated into Dembski's terms, we would say "Structures with CSI are designed, but it's an open question whether living structures have CSI, by Dembski's definition of the term."

You're about to get the boot for stupidity. According to you the following is circular reasoning:

Given: Sparrows can't make bicycles.
Therefore: If you see a bicycle, it wasn't made by a sparrow.

This isn't circular reasoning. It's a simple deduction. Stop wasting comments with this idiocy. Last warning. -ds

hypermoderate
May 20, 2006
hypermoderate, I do not see any fundamental circularity in Dembski's argument. It basically boils down to "if it's ludicrously unlikely that it was produced by unintelligent materialistic causes, it can be comfortably inferred that it was produced by some kind of intelligent agency." False positives will occur, but they should be ludicrously infrequent. The informative, non-circular essence of the argument is that a probabilistic framework for this kind of thing might be arranged so that we can make reasonable inferences on these matters, similar to the kind of inferences we make concerning whether the sun will come up tomorrow, etc. Think what you may about the logistical details involved in making such a calculation, but I don't see it as inherently circular or tautological.

As a scientist, though, I would be mildly concerned with sampling bias coming into play. Assuming the probabilities can be calculated--however ludicrous they might turn out to be--**if** our existence as questioners was, in fact, contingent upon such occurrences happening via nonintelligent mechanisms, then the probability of our, as reasonably complex sentient beings, observing such complex structures is shifted to 1. So ultimately I think Dembski may have to make an even stronger case: not only is achieving a certain threshold of specified complexity highly unlikely given the overall system, but it is, in fact, not possible at all. Only then can the anthropic "sample bias" argument finally be put to rest. I would be interested to hear people's thoughts on this.

great_ape
May 19, 2006
Patrick, I have no problem with the fact that a design inference can't occur until someone notices the rocks. The question is whether the rock pattern constitutes CSI before anyone notices it.

secondclass
May 19, 2006
secondclass, It's readily admitted that ID can produce false negatives.

Patrick
May 19, 2006
Salvador, there's a problem with the "conceptual information" requirement for CSI. Consider Dembski's example of the rocks on the ground that match a certain constellation. If the rock pattern is not specified until an agent notices it, then CSI is created by the act of noticing. Is this your position?

secondclass
May 19, 2006

"Is this anything like natural selection’s survival of the survivors? You meant to show a tautology, not circular reasoning. You accomplished neither. -ds"

ds,

1. A tautology is a form of circular reasoning (try Googling "tautology circular reasoning").
2. Why do you think the Dembski quotes are non-circular?

Regards,
Hypermoderate

You call what you wrote "reasoning"? :roll: -ds

hypermoderate
May 18, 2006

Salvador, Mung:

It is Dembski, not me, who defines specified complexity in terms of the probability of producing a structure via material mechanisms:

From Chapter 12 of The Design Revolution:
"Indeed, to attribute specified complexity to something is to say that the specification to which it conforms corresponds to an event that is vastly improbable with respect to all material mechanisms that might give rise to the event."

Another quote from Chapter 10:
"For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity."

And from Chapter 12 again, regarding the possibility of false positives:
"Even though [the absence of] specified complexity is not a reliable criterion for eliminating design, it [the presence of specified complexity] is a reliable criterion for detecting design."

Thus Dembski's own words illustrate the circularity of the argument.

To recap:
1. According to Dembski, specified complexity is only present if the event is "vastly improbable with respect to all material mechanisms that might give rise to the event."
2. Specified complexity is "a reliable criterion for detecting design."

The circularity is obvious: if it wasn't produced by material mechanisms, then it wasn't produced by material mechanisms. Therefore it was designed.

Is this anything like natural selection's survival of the survivors? You meant to show a tautology, not circular reasoning. You accomplished neither. -ds

hypermoderate
May 17, 2006
1. To quantify the CSI contained in a structure, you need to know how probable it is for that structure to come about by non-intelligent means.
I believe this is incorrect.
2. To quantify that probability, you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert.
Which would mean that this also is incorrect.
3. Natural selection is one of the mindless mechanisms available for producing biological complexity.
And this is both unintelligible and unproven.

1. How does one establish the claim that "natural selection" is mindless?
2. How does one establish the claim that natural selection is a mechanism?
3. How does one establish the claim that natural selection is capable of producing biological complexity?
4. Since natural selection is just one of the mindless mechanisms available for producing biological complexity, what are the others, and why doesn't one need to take those into account as well?

Mung
May 17, 2006

Hypermoderate: To quantify that probability, you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert.

I appreciate that point, but that is not how scientific theories are postulated. If we applied that standard to every theory, there would be no theory, and certainly no evolutionary theory. There is no theory that even attempts to account for every possible cause. It is ordinary practice to simply identify the magnitude of the space of possibilities and make a reasonable estimate, based on empirical evidence, as to the probability. Seeing 500 coins all heads and inferring design does not require an accounting of every possibility.

And finally, regarding Turing machines: they cannot arise out of self-organizing systems. They must arise where improbability is guaranteed, since uncertainty is an essential element of an information processing system (Shannon defined information as the reduction of uncertainty). Thus a highly probable Turing Machine is an oxymoron. The Designer chose an architecture which would resist the complaint that we don't know enough!
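The arithmetic behind the 500-coin example upthread is easy to make explicit: 500 fair coins all coming up heads has probability 2^-500, i.e. 500 bits in Shannon's -log2(p) sense, which falls just beyond Dembski's universal probability bound of 10^-150. A minimal sketch:

```python
import math

# 500 fair coins all heads: probability 2**-500, i.e. 500 bits of
# Shannon information (-log2 of the probability).
p_all_heads = 0.5 ** 500
bits = -math.log2(p_all_heads)
print(bits)  # 500.0

# Dembski's universal probability bound is 10**-150, which works out
# to about 498.3 bits, so the 500-coin event just exceeds it.
upb_bits = 150 * math.log2(10)
print(round(upb_bits, 1))  # 498.3
```

This is only the probability bookkeeping; whether a given biological event has such a probability is precisely the point under dispute in this thread.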

Salvador

scordova
May 17, 2006
jaredl, I agree that they must be known mechanisms. After all, without knowledge of the mechanism, you can't estimate the probability of producing the structure in question. But restricting consideration to known mechanisms does not eliminate the circularity. The tautology remains. Inserting the word "known" into my previous comment: "5. In other words, it is a tautology to say 'Any structure containing 500 bits of CSI is designed', because the very definition of CSI implies it. In other words, all you are saying is 'If natural selection (or any other known mindless mechanism) couldn't have produced something, then natural selection couldn't have produced it.'"

hypermoderate
May 16, 2006
"No. All we need are the relevant known non-telic mechanisms. To appeal to unknown non-telic mechanisms constitutes an argument from ignorance." --jaredl

Fair enough. But you're still in a precarious position, because what is to be done concerning *plausibly relevant* non-telic processes that are *known* to exist and impact the outcome in some fashion, but for which the exact parameters and associated dynamic interactions involved are either unknown and/or the entire (putatively) generative system (i.e. the universe) is too large/complex/nonlinear to assess whether these (ostensibly) nontelic processes could in fact yield a specified level of complexity? This, I believe, more accurately reflects our current situation and state of knowledge, and this is why, specifically concerning the question of complexity, ID and Darwinism are effectively at an impasse in terms of achieving air-tight, indisputable arguments for their respective positions. This will continue to be the case for the foreseeable future, IMHO.

great_ape
May 16, 2006
"you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert...."

No. All we need are the relevant known non-telic mechanisms. To appeal to unknown non-telic mechanisms constitutes an argument from ignorance.

jaredl
May 16, 2006

Salvador,

There's a fatal circularity in the idea of using 500 bits of CSI as a criterion of design:

1. To quantify the CSI contained in a structure, you need to know how probable it is for that structure to come about by non-intelligent means.

2. To quantify that probability, you need to understand all of the non-intelligent mechanisms that could potentially produce the structure in question, and you need to be able to estimate the probability of success for these mechanisms working separately and in concert.

3. Natural selection is one of the mindless mechanisms available for producing biological complexity.

4. To say that a biological structure has 500 bits of CSI, you therefore must already know that the probability of producing it by natural selection (or any other mindless mechanism) is extremely low.

5. In other words, it is a tautology to say "Any structure containing 500 bits of CSI is designed", because the very definition of CSI implies it. In other words, all you are saying is

"If natural selection (or any other mindless mechanism) couldn't have produced something, then natural selection couldn't have produced it."

hypermoderate
May 15, 2006

Michaels7 wrote:

A longer view question; does every new pattern recognition algorithm established take another brick from the wall of evolution? My ignorance in genetics I’m sure shows. My bunny comment was intended humor. But I think Salvadore has hit on something here especially as es58’s point of intellectual property values relates to the debate. I’m curious how lawyers see this issue and businesses like GeneticID. Market forces created GeneticID. Lawyers I’d think will see similar opportunities.

Thank you Michaels7 and es58 and others. I did scant work in nano-molecular machines. The time will come when it might be helpful to be able to distinguish a molecular artifact from the work of blind, purposeless forces.

I pose these questions to thoughtful individuals: "What kinds of molecular architectures would suggest one is dealing with a design of intelligent origins? If one's goal is to make a molecular machine that would signal design to a human observer, what characteristics would it possess? Does biology conform to these architectures? Would it be hard to distinguish a man-made nano-molecular machine from a naturally occurring one if we did not have a database of existing 'naturally' occurring machines?"

I emphasize again: the case of detecting a Monsanto GMO via a direct sequence was meant to be a starting point, not an ending point, for the discussion. I chose the example because it illustrates important concepts that must be mastered before tackling the far greater issues at hand.

At ARN, when I put up a similar thread, it showed how much the critics of ID misrepresented Dembski's concepts. Most did not even realize CSI deals with two items of information (conceptual and physical), not just one (conceptual information is what most think CSI deals with exclusively, and they are mistaken).

I hope, if nothing else, the readers have a better idea that CSI is composed of 2 sets of information, not one! In answer to whether the flagellum is designed, here is the case for CSI:

1. the conceptual pattern (conceptual information) is the set of numerous lock-key and login-password systems in the flagellum's architecture and construction. There are bit values associated with lock-key and login-password systems.

2. the physical pattern (physical information) is the lock-key patterns evidenced by the flagellum.

The lock-and-key metaphor is an independent detachable pattern. Bill Dembski calls this metaphor "interface compatibility".

Computing the bits for this is more difficult than calculating for an explicit pattern for a GMO that is given by the designer, but it is not impossible. I refer the reader to my thread at ISCID, where I give hints for calculating lock-key probabilities without knowing the exact pattern in advance! The example I give for dice, with some modification, is extensible to lock-and-key systems for which we do not have exact patterns (unlike the Monsanto GMO). See:

Response to Elsberry and Shallit 2003
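One way to see how a probability can be assigned without knowing the exact pattern in advance: model each lock-key interface as requiring k independent matched positions from an alphabet of size a. The bit count comes from the match requirement itself, not from any particular sequence. This is a toy model of my own for illustration (the function, the numbers, and the interface counts are all hypothetical), not the ISCID dice calculation.

```python
import math

# Toy lock-and-key bit count: a random "key" fits a "lock" requiring
# k independent matched positions over an alphabet of size a with
# probability a**-k, regardless of which particular lock we are
# handed. Illustrative model only; all numbers are hypothetical.
def lock_key_bits(k: int, a: int) -> float:
    """Bits of improbability for one lock-key interface."""
    return k * math.log2(a)

# e.g. 10 hypothetical interfaces, each needing 20 matched positions
# over a 20-symbol alphabet (e.g. amino acids):
per_interface = lock_key_bits(20, 20)  # ~86.4 bits each
total = 10 * per_interface
print(round(total, 1))  # 864.4 bits, past a 500-bit threshold
```

The point of the sketch is only that the improbability is a function of the matching constraint, so an exact target sequence need not be specified in advance.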

Salvador

scordova
May 15, 2006

Mung and Jon_e,

Welcome to my friends from ARN!

Everyone,

I'd like to thank everyone for their comments. The reason this thread was re-opened was that I asked Bill Dembski's permission, and he granted it. He felt detecting a grain of wheat as a GMO versus a wild-type grain was indeed design detection, thus the closure on my earlier thread was removed and this discussion was permitted to go forward. That is not to say there is necessarily a right or wrong answer to these issues, but these issues will likely surface again.

The case of Genetic-ID is trivial. We have access to the designers (like Monsanto) to give us the detachable specifications which allow us to detect design in an unknown grain of wheat or corn, etc.

The reason I put the example out was to educate the readers in understanding design detection. There are several issues in detecting design, for starters:

1. what architectures would signal design?
2. does an artifact conform to an architecture?

When #1 AND #2 are decided, one detects CSI.

For the case of GMOs, #1 was fairly trivial, as the designer (Monsanto or another genetic engineering company) gave us the blueprint or specification in advance of the detection. #2 easily followed.

But what if we do not have the designer giving us the independent, detachable specification? What architectures would then satisfy #1? We humans surprisingly have archetypal architectures in our consciousness which would signal design even if we do not have them explicitly written down like a Monsanto GMO:

1. patterns of yet-to-be-heard music (indeed, we know music when we hear it, even if the tune is novel!)

2. patterns of software or language (how did we know hieroglyphics and cuneiform were designed even before we had the Rosetta Stone?)

3. transcendent engineering designs such as a computer (we recognize a computer irrespective of whether it is made of vacuum tubes, silicon transistors, germanium transistors, magnetic relays, DNA, or other materials)

4. our subconscious conception of what would be designed (such as glowing fish)

Regarding the glowing fish, or green rabbit, indeed these are genetic innovations. We are very tempted to infer design in such cases! But cannot the same be said for every other major innovation in an organism from a supposed prior ancestor in the fossil record? I would argue yes. The case is made rigorous if the independent pattern can be assigned a number of bits (a glowing fish innovation requires a certain amount of information increase which may be measurable in bits).

The case for GMO detection was the case of #2 above, not #1.

OK, how about more difficult issues with design detection, where we have to answer the question posed by #1: what patterns in biology would signal design, absent the designer handing us the blueprint?

Here is my suggestion: we already have working examples in human engineering that conform to biological systems. Thus, in some cases #1 has been solved, in that some architectures are recognized by engineers as designed, and it's only a matter of asking if a biological system conforms to our pre-conceived pattern and if the pattern can be shown to have 500 bits of information. Here are some examples:

1. bat and whale sonar (bat echolocation is absolutely cool)
2. optical sensing and vision processing (I worked at Army Night Vision, and I can tell you the human eye is non-trivial)
3. computers
4. software
5. digital-to-analog and analog-to-digital
6. spectral analyzers and advanced signal processors (the ear!)
7. error-correction
8. software search heuristics (immune system)
9. self-replicating automata
10. digital control circuits
11. feedback control circuits
12. adaptive neural networks
13. fail-safe systems
14. information security
15. lock-and-key, login-password metaphors (protein interaction)
16. coders/decoders
17. complex navigation (monarch butterflies)

etc.

Note that the architectures above transcend the underlying materials which build the system. (Shakespeare transcends the chemistry of the ink and paper or screen pixels used to convey his writings.)

A sonar system is a sonar system whether it is made of man-made materials for a submarine or of biological materials, such as in whales or bats. (The founder of IDEA FUMA (Fork Union Military Academy) was a Naval Academy grad in Electrical Engineering. He recognizes the intricacies in sonar systems. They are non-trivial designs. Biological sonar is a designed pattern recognizable to electrical engineers, just as biological computers are recognizable to computer engineers.)

The challenge is affixing a number of bits to each of these transcendent architectures. The self-replicating computer has a defined architecture, and my preliminary analysis says its bit count exceeds the Universal Probability Bound. Therefore, in much the same way that we have the Monsanto GMO blueprint, we have the computer (Turing Machine) blueprint and the self-replicating automata blueprint. We merely need to see if a biological system fits the blueprint to reasonably infer design, and the answer is yes. That was the point of Albert Voie's Peer-Reviewed Paper.

Salvador

PS
jon_e I fixed your blockquote comments.

scordova
May 15, 2006
"But admitting even that much would be a concession to an idea - simple as it is - that originated with a Prominent ID Person." --Jon

I don't think labelling this as EF would entail a concession of anything. The idea of searching for evidence of *human* tampering has been around for quite some time. It's not unlike looking for boards, soda cans, and rope at the site of a crop circle. Genetic-ID just does it with modern technology, and it happens to involve organisms and their DNA. This cannot be seen in any way as the culmination, or in any way, shape, or form the product, of Dembski's work. I suspect that the folks at Genetic-ID (and their lawyers) would agree with me here.

great_ape
May 15, 2006
Jon, when quoting here don't use quote and /quote in square brackets. Instead use blockquote and /blockquote in angle brackets.

Mung
May 15, 2006

If anything, you'll lose points when the rank and file try to pass off examples as ID-supportive that are merely child's play when compared to the more restrictive usage of "design inference," the unknown case, the one everyone discusses in the context of ID. Many will not understand--as has already been made evident here--that just because you include this in the design inference definition for formal theoretical and/or semantic completeness, you can't run around using such examples to support design inference in the more restrictive sense. Because of this danger you should invoke the ultimately arbitrary nature of naming to carefully and constructively delineate "design inference," when in the context of ID, more narrowly.

Salvador himself has already stated this is a "trivial case." The problem ID folks face is that anything with Wm. Dembski's name on it (viz the EF) is open to misrepresentation by the critics. The genetic-ID case is simple, and trivial, on one plane, but very instructive and revealing on another. If it is so trivial as to be obvious, then why do the critics prolong the discussion interminably arguing that it is not a valid demonstration of the EF? Either it is a trivial, yet valid, demo of the EF, or it is an invalid case for the EF. It is bleedingly obvious that, when concerning issues of patent protection, for example, what Genetic-ID does is entirely relevant to the EF. But admitting even that much would be a concession to an idea - simple as it is - that originated with a Prominent ID Person.

The same type of rigamarole is invoked when other simple, basic IDeas are discussed (most notoriously, Behe and the concept of IC). Therefore, ID proponents find themselves arguing endlessly with critics about whether mousetraps are really IC. You'd think the ICness of mousetraps would be blatantly obvious, and one should move on to the more interesting and difficult cases of IC in biology - yet the critics (generally establishment scientists) have spent a great deal of energy and time stubbornly refusing to give quarter to the concept in its most basic and trivial form.

Jon_Ensminger
May 14, 2006 at 06:46 PM PDT

hypermoderate,

Excellent points and illustration. As you indicated, the genetic-id approach can be encompassed within the larger usage of "design inference." And that has been at the heart of the confusion here. But to me it seems evident that it is only the more restricted, nontrivial usage of design inference that is meaningful and relevant to ID.

mung, you're certainly free to include such "known" cases if you wish to define "design inference" that broadly. But employing a working definition that is broad enough to include trivial displays of "pattern matching" will ultimately injure your cause. Don't expect anyone to give ID any additional credibility for extending definitions so widely that these trivial cases could be treated as evidence of "design inference" in action. If anything, you'll lose points when the rank and file try to pass off examples as ID-supportive that are merely child's play when compared to the more restrictive usage of "design inference," the unknown case, the one everyone discusses in the context of ID. Many will not understand--as has already been made evident here--that just because you include this in the design inference definition for formal theoretical and/or semantic completeness, you can't run around using such examples to support design inference in the more restrictive sense. Because of this danger you should invoke the ultimately arbitrary nature of naming to carefully and constructively delineate "design inference," when in the context of ID, more narrowly.

I agree. This is a counterproductive example. -ds

great_ape
May 14, 2006 at 04:01 PM PDT
The *inference* is based upon what we *know* of the cause and effect structure of the world. This is why ID theory must include cases of known design and known relationships between artifacts and intelligence. If it didn't, then there would be no basis upon which to make an inference in the cases where the relationship is not clearly known.

Mung
May 14, 2006 at 09:28 AM PDT
There is a typo in my post #40. In the last line, it should be this paper. If I messed up the link, try http://tinyurl.com/em28c

Xavier
May 14, 2006 at 07:37 AM PDT

I have a few slightly-picky points to make regarding the most recent posts.

1. Genetically-modified plants/crops are modified for a reason: to confer resistance to herbicides, to increase shelf life, or whatever. Therefore, it is not necessarily a simple case of comparing one bland rock to another; there might be other indicators that distinguish the plant from "nature." If a patch of canola plants survives a heavy dose of Roundup spraying, chances are good that the plant contains the Roundup-Ready gene. A PCR test to detect the presence of the RR gene would merely be confirming something already suspected.

2. GMO detection is moving beyond relatively simple PCR pattern-matching from a database of known sequences, to heuristic methods for detecting unknown GMO sequences (for example, this).

I'm not sure how (2) is related to detecting pre-existing (not human-made) design in nature. The unknown GMO detection relies on using known natural reference genomes, characterizing PCR hybridization on a number of natural reference variants, then in the actual test popping up a GMO red flag when hybridization patterns not matching anything produced by known natural variants are observed. The former case (1) relies on knowing ahead of time what human-made DNA sequences look like, and the latter case (2) is a subtractive method that relies on knowing ahead of time what non-human-made DNA sequences look like and flagging all others as possible human insertions. Such pre-knowledge is not available when inferring non-human design, so it isn't comparable to what ID attempts to demonstrate. -ds

Jon_Ensminger
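The contrast between the two screening strategies can be sketched in a few lines of code. Everything below is invented for illustration (the marker name, the sequences, and the pattern length K); a real assay works with PCR primers and hybridization chemistry, not string matching, but the logic of the inference is the same:

```python
# Toy contrast between the two GMO-screening strategies discussed above.
# All sequences, marker names, and the pattern length K are invented for
# illustration; real assays use PCR primers and hybridization, not string
# matching, but the shape of the inference is the same.

# --- Strategy 1: search for sequences we KNOW humans inserted ---
KNOWN_GMO_MARKERS = {
    "RR-fragment": "ATGGCTTCCGATCGA",  # hypothetical Roundup-Ready marker
}

def known_marker_hits(sample):
    """Names of known artificial markers found in the sample."""
    return [name for name, seq in KNOWN_GMO_MARKERS.items() if seq in sample]

# --- Strategy 2 (subtractive): flag anything NOT seen in natural references ---
K = 4  # hypothetical pattern length

def kmers(seq):
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

natural_refs = ["ACGTACGTAC", "ACGTTCGTAC"]  # hypothetical natural variants
natural_patterns = set().union(*(kmers(r) for r in natural_refs))

def flag_unknown_gmo(sample):
    """True if the sample shows patterns absent from all natural references."""
    return bool(kmers(sample) - natural_patterns)

print(known_marker_hits("TTATGGCTTCCGATCGATT"))  # ['RR-fragment']
print(flag_unknown_gmo("ACGTACGTAC"))            # False: matches nature
print(flag_unknown_gmo("ACGGGGGTAC"))            # True: novel patterns
```

Both functions lean on a library of known cases - artificial markers in one, natural references in the other - which is exactly the pre-knowledge that is unavailable when the putative designer is not human.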
May 14, 2006 at 06:43 AM PDT
I am totally in agreement with DaveScot on the point that Genetic-ID do not (nor do they claim to) use the EF when comparing test samples to their database of known GMO material. Rather than pursue this red herring, has anyone else thought that this paper may have some relevance?

Xavier
May 14, 2006 at 04:26 AM PDT

It seems that much of the confusion on this thread stems from different meanings of the phrase "design inference":

1. Some folks are using "design inference" to refer to any process that allows you to examine an object and conclude that it was (at least partially) designed.

2. The other folks use "design inference" to mean the process of examining an object's structure and concluding that such a structure could not come about except as the result of an intelligent teleological process. Therefore it was designed.

Meaning #1 is broader and includes meaning #2 as a subset.

To highlight the difference between these two meanings of the phrase "design inference", imagine that we're hiking in the desert and we come across an irregularly shaped boulder. Being extremely ID-conscious, we ask ourselves, "Was this boulder designed?" Everything about the boulder appears compatible with natural, mindless geological processes, so we conclude that it was not designed.

A little further up the trail, we come across a dented tubular container. Opening it up, we find an exquisitely detailed blueprint of a boulder, with an Acme Boulder Co. logo in the corner. Taking it back to our boulder, we find that it matches every indentation, crack, and protrusion with uncanny accuracy.

Calling the Acme Boulder Co. the next day, we learn that they have manufactured thousands of these boulders for installation in rock gardens from Vegas to Tokyo. The one we found in the desert is one that fell from a cargo plane (along with the blueprints) and was never recovered.

Now we believe that the boulder was designed. What made the difference? Not the shape of the boulder itself -- after all, we examined it upon first encountering it and concluded that it was not designed. The difference is that we now know that the boulder matches a design manufactured by the Acme Boulder Co.

Which of the two kinds of "design inference" have we performed? Clearly the first kind, but not the second. Before we knew about the blueprint, there was nothing about the structure of the boulder that suggested design.

We concluded that the boulder was designed, not based on the design itself, but on other information we acquired about the source of the design.

GeneticID is making the first kind of design inference. ID theory is attempting to make the second kind. The second kind is much harder than the first to demonstrate.

This brings up a valid point I overlooked. Absent the design specification (artificial DNA sequences) Genetic ID acquires from the GMO producers, they can't tell a genetically engineered tomato from a natural tomato. The artificial DNA is indistinguishable from the naturally occurring DNA. Depending on whose worldview you choose to speak from, one can say "It all looks equally designed" or "It all looks equally undesigned". In your analogy, hypermoderate, you chose the equally undesigned viewpoint. Any reason for that? -ds

hypermoderate
May 13, 2006 at 11:00 PM PDT

Alas, it was Ogg's *deer.* I must have lost that particular brain cell last night during my sleep.

It seems there is no use trying to convince folks that this is an unremarkable thing as far as ID goes. One last time, though, because I'm stubborn: this is not a new industry with its foundations in the design inference paradigm. This is a fancy--it's not even that sophisticated--tamper detection scheme. Nothing conceptually new. We know the precise nature of the contamination to look for. If you try to offer this as a legitimate working example of your approach, you have my sympathies...

great_ape
May 13, 2006 at 06:02 PM PDT
"this adds weight to design inference" As DaveScot has already pointed out whatever ramifications this has for law etc it is a really bad example if you are then going to extend the process to look at nature. You can say this is a good use of Dembski's method but the problem is it can be calculated accurately that a certain gene in an organism had a very low probability of arising naturally, because we know exactly what and how the designer designs.Chris Hyland
May 13, 2006 at 01:05 PM PDT

mung: "It adds to the existing knowledge that certain effects can be attributed to intelligent causation."

Maybe so, but I don't think that the fact that such attributable effects *exist* was ever in serious doubt. The question--the *difficult* question--is how on earth to define rigorous criteria to diagnose those effects. Here you have a case where this central question doesn't even pertain. How about formulating the genetic-ID situation like this:

How does *man* know when man has modified an organism?
Man look for things man puts in organism.
Man finds these things -> man content he modified.
Man no find -> man content he not modify.

How about a parable? Ogg told big chief he was on a hunt, injured a dear with arrow, but alas he failed to recover it. Big chief goes into woods alone to find Ogg's dear. Loh! Chief find 3 dear dead instead!! One dear has an arrow in its heart; the other two have no marks. Chief ponders which is dear felled by Ogg? Chief recalls Ogg shoots dear with arrows. Wise chief chooses dear with arrow in heart, and brings back to tribe. Later that night the chief recounts the story to the tribe as they sit around the tribal fire. He boasts of his clever "inference from arrow technique." Tribe is underwhelmed. They beat chief senseless and install Ogg as new chief. Everyone eats dear and there is much rejoicing.

Now replace "injure" with "genetically modify." Substitue "arrow" with "defined dna sequences." Replace "dead" with, say, "orange". What you get is an disturbingly primitive group of geneticists, but I think my point holds nevertheless.

I hope it was really Ogg's deer, not his dear, he wounded. -ds

great_ape
May 13, 2006 at 11:24 AM PDT
Summation of issues/problems/insights by posters.... possible research areas?

1) Probability/stats of GMO vs nature - can data attributes be rated and appropriate tables of CSI be determined for informational boundaries? This ultimately comes down to math and vindicates Dembski's involvement in this debate, as well as that of mathematicians on both sides.

2) Identification of trivial vs non-trivial in RM/NS vs Design - where on the scale? One single point mutation within a species vs gene splicing of proteins across multiple levels of taxa. Ambiguous mutations (with cost) vs targeted beneficials without cost (great_ape's sickle cell allele example).

3) Laws/patents/lawsuits - does this ultimately force the issue from a new perspective neither side predicted in the ID/Evo debate? I'm reminded of commentary on the OPFOR blog re: the Cold War. Each side geared up for a battle which did not materialize head on; instead, unforeseen circumstances hit each nation from the side. I think this is the case now for ID/Evo advocates - business and law will drive the future debate, not academics (except as expert witness testimony). What will be the ramifications as technology moves forward in science, education, and crime? Example: future branding technologies by GMO companies? Future gene hackers to remove branding? Classes offered at leading universities in design recognition of GMOs vs nature? Specialized genetic law degrees and case law studies of Evo vs Design? Business is saying, yes, we detect design and will use it. This makes it "non-trivial" imo, certainly as lawsuits will erupt in the future. Great_ape, Dave, I think Mung is correct that this adds weight to design inference. Plus, billions are already being leveraged on the new design paradigm. They do not care about teleological debates or materialist views. But it still pushes design to the forefront. They will ultimately want to protect their investments.

4) Establishing test procedures and QA of evolvability in the lab for case law of Evo vs Design. Can tests be developed to induce genes to evolve at rapid rates vs designed products? An example might be nylonase-eating bacteria in nature vs the lab. Business will ultimately want design laws to win, not evolution, because if evolution succeeds in the lab, it could cost them vast profits. Sorta the generic vs brand-name cost?

5) Because business and market forces drive the new design paradigm - will actuaries find new positions in risk ventures of GMO vs old case law precedents, and anticipatory design cost vs evolution?

A longer-view question: does every new pattern recognition algorithm established take another brick from the wall of evolution? My ignorance in genetics I'm sure shows. My bunny comment was intended as humor. But I think Salvador has hit on something here, especially as es58's point about intellectual property values relates to the debate. I'm curious how lawyers see this issue and businesses like GeneticID. Market forces created GeneticID. Lawyers, I'd think, will see similar opportunities. It "appears" that design detection methodology is a valid future specialty minor, if not maybe a specialized major, for genetics. It appears a whole new level of forensics ID will develop. Am I too far off in some of these conjectures? It seems like fertile ground for ResearchID topics. Finally, can someone answer my transgenic question vs simpler GMOs? Are there not different levels of complex manipulation? Therefore, is a table of evolutionary vs design rates a valid metric for future litigation and property rights?

Michaels7
May 13, 2006 at 11:06 AM PDT
