
ID Foundations, 16: A pivotal facet of ID foundations so far — the significance of inductive reasoning on observed, reliable signs for inferring design in the world of life and the fine tuned cosmos

In recent days, the UD “Engineer says . . . ” thread has become an extended discussion on the design inference and its justification. It has already led to another ID Foundations post, on the significance of Mignea’s simplest self-replicator model for the design inference from FSCO/I in life. Today, it is worth excerpting and adapting a recent summary post in the thread on the significance of inferring on signs that design is the best causal explanation for certain phenomena in the natural world.

To set context, it is useful to first pause and remind ourselves from the online New World Encyclopedia, what design theory, at core, is about:

Intelligent design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”[1] Intelligent design cannot be inferred from complexity alone, since complex patterns often happen by chance. ID focuses on just those sorts of complex patterns that in human experience are produced by a mind that conceives and executes a plan. According to adherents, intelligent design can be detected in the natural laws and structure of the cosmos; it also can be detected in at least some features of living things. . . . .

ID makes no claims about biblical chronology, and technically a person does not have to believe in God to infer intelligent design in nature. As a theory, ID also does not specify the identity or nature of the designer, so it is not the same as natural theology, which reasons from nature to the existence and attributes of God. ID does not claim that all species of living things were created in their present forms, and it does not claim to provide a complete account of the history of the universe or of living things.

ID also is not considered by its theorists to be an “argument from ignorance”; that is, intelligent design is not to be inferred simply on the basis that the cause of something is unknown (any more than a person accused of willful intent can be convicted without evidence). According to various adherents, ID does not claim that design must be optimal; something may be intelligently designed even if it is flawed (as are many objects made by humans).

ID may be considered to consist only of the minimal assertion that it is possible to infer from empirical evidence that some features of the natural world are best explained by an intelligent agent. It conflicts with views claiming that there is no real design in the cosmos (e.g., materialistic philosophy) or in living things (e.g., Darwinian evolution) or that design, though real, is undetectable (e.g., some forms of theistic evolution). Because of such conflicts, ID has generated considerable controversy.

Despite all the loaded talking points about “creationism in a cheap tuxedo” — often made by materialists dressed up in the holy lab coat — and the like, this summary is quite correct.

But, is it something that we can take as a basis for sound convictions about the origin of those crucial features of the world of life or of the cosmos?

That brings us full circle to where the ID Foundations series began, inference on signs.

So, let us pick up, using and modifying the summary post.

For we commonly observe a pattern: causal forces and factors stamp what they act on with characteristic signs.

A deer, walking down a forest trail, will leave characteristic tracks and often droppings.

A probable Mule Deer track, in mud, showing dew claws (HT: http://www.saguaro-juniper.com, deer page.)

From such tracks, we may properly infer a deer as the best explanation, even if the animal was not directly observed, and even in the teeth of possibilities of trickery. The responsible interpretation, in the absence of additional signs of manipulation or of another animal that somehow leaves the same sort of tracks, is: deer.

As Huntingnet observes:

Tracks are the most overlooked of all deer sign. But, they carry lots of valuable information. For example, they tell us which way the deer was walking, approximately the time of the day it passed (tracks pointed toward bedding areas were likely made in the morning, tracks pointing toward feeding areas were likely made in the afternoon) and something about the deer’s size. Tracks can teach us many things about the deer we are hunting . . .

Provisional, but well warranted and credibly true.

(BTW, this is the same general degree of warrant that obtains in scientific contexts, and on the same basic logic.)

Just so, and as one of the background posts to the ID foundation series argues:

Signs: I observe one or more signs [in a pattern], and infer the signified object, on a warrant:

I: [si] –> O, on W

a –> Here, as I will use “sign” [as opposed to "symbol"], the connexion is a more or less causal or natural one; e.g. a pattern of deer tracks on the ground is an index, pointing to a deer.

(NB, 02:28: Sign can be used more broadly in technical semiotics to embrace “symbol” and other complexities, but this is not needed for our purposes. I am using “sign” much as it is used in medicine, at least since Hippocrates of Cos in C5 BC, i.e. to point to a disease on an objective, warranted indicator.)

b –> If the sign is not a sufficient condition of the signified, the inference is not certain and is defeatable; though it may be inductively strong. (E.g. someone may imitate deer tracks.)

c –> The warrant for an inference may in key cases require considerable background knowledge or cues from the context.

d –> The act of inference may also be implicit or even intuitive, and I may not be able to articulate it, but may still be quite well-warranted to trust the inference. Especially if it traces to senses I have good reason to accept are working well, and are acting in situations that I have no reason to believe will materially distort the inference . . .

This pattern of reasoning on signs is well-known and indeed is ancient, as the deer track example highlights. An interesting discussion appears in Aristotle’s The Rhetoric, Bk I Ch 2:

[1357b] Of Signs, one kind bears the same relation to the statement it supports as the particular bears to the universal, the other the same as the universal bears to the particular. The infallible kind is a “complete proof” (tekmerion); the fallible kind has no specific name. By infallible signs I mean those on which syllogisms proper may be based: and this shows us why this kind of Sign is called “complete proof”: when people think that what they have said cannot be refuted, they then think that they are bringing forward a “complete proof,” meaning that the matter has now been demonstrated and completed . . .

In short, Ari here distinguishes signs that convey moral certainty per a strong pattern in our experience, from those that convey lesser warrant but are “good enough for government work.”

It turns out that in scientific (and a lot of ordinary, day to day) investigations:

1: we see natural, more or less fixed regularities that point to laws of mechanical necessity at work, such as the tendency of heavy objects near the Earth’s surface to fall with an initial acceleration of about 9.8 m/s² (equivalently, a gravitational field strength of about 9.8 N/kg).

2: In other cases, outcomes under similar initial circumstances are highly contingent but show a stable pattern in accord with some model or other that gives a probability distribution. This is a sign of chance at work, constrained by the driving parameters of the distribution. For instance, a dropped fair die tumbles and settles to read 1 to 6 in a flat (uniform) distribution, and wind speeds often follow Weibull distributions.
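This chance pattern can be illustrated with a short simulation (a minimal sketch; the roll count and seed are arbitrary choices): no single roll of a fair die is predictable, yet the relative frequencies settle toward the flat distribution.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible

# Roll a fair die many times; although each roll is highly contingent,
# the overall pattern is stable: a flat (uniform) distribution.
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

for face in range(1, 7):
    share = counts[face] / len(rolls)
    print(f"face {face}: {share:.3f}")  # each share is close to 1/6, about 0.167
```

This is the signature of chance described above: individual outcomes vary freely, but the distribution as a whole is constrained by its driving parameters.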

3: In other highly contingent situations, outcomes reflect characteristic signs of purposeful intelligence at work, by design. For instance, we often see functionally specific, complex organisation and associated information [FSCO/I], the operationally relevant form of the CSI discussed since Orgel and Wicken, then taken up by Thaxton et al. and latterly Dembski et al. In every case where we can directly and independently observe and assess the cause of FSCO/I, it is design. And this is backed up by the general nature of chance sampling, which will tend to reflect the bulk of a distribution when samples are too small to reasonably expect to catch needles in the haystack.

In short, we here see the rationale of the design inference filter, which can be summarised in a flowchart diagram (it is in effect an algorithm of inductive logic inference),

The per-aspect design inference explanatory filter

 

. . . or expressed in an equation; here, in a simple form at solar system atomic and temporal resources level:

Chi_500 = I*S – 500, bits beyond the solar system threshold for inferring design as best explanation for FSCO/I

In effect, where we see I bits of functional and specific information (S being a dummy binary variable: S = 1 when there is objective warrant for specificity, 0 otherwise), and I*S exceeds 500 bits, we are warranted to infer design as the best explanation on the gamut of the solar system. For the observed cosmos as a whole, the threshold moves to 1,000 bits, for similar needle-in-the-haystack reasons.
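The Chi_500 expression can be sketched directly in code (a minimal illustration of the metric as quoted; the 300-base sequence and its assumed specificity are hypothetical, not data from the post):

```python
from math import log2

def chi_500(info_bits: float, specific: bool, threshold: float = 500.0) -> float:
    """Chi_500 = I*S - threshold, per the metric stated above.

    info_bits: I, functional information-carrying capacity in bits.
    specific:  S, True (1) only where there is objective warrant for
               functional specificity, else False (0).
    """
    s = 1 if specific else 0
    return info_bits * s - threshold

# DNA stores about 2 bits per base (log2 of the 4 possible bases),
# as noted in the post's discussion of information capacity.
bits_per_base = log2(4)  # = 2.0

# Hypothetical example: a 300-base functionally specific sequence.
I = 300 * bits_per_base  # 600 bits of capacity

print(chi_500(I, specific=True))   # 600*1 - 500 = 100, past the threshold
print(chi_500(I, specific=False))  # 600*0 - 500 = -500, no design inference
```

For the cosmos-level threshold mentioned above, one would simply call `chi_500(I, specific, threshold=1000.0)`.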

DNA-based, cellular life forms, by that criterion, are chock-full of signs of design. That is controversial, but only because of the dominance of evolutionary materialism. There is no empirical observation based warrant for the evo mat claims of chance and necessity creating such FSCO/I, starting with the origin of the very self-replicating facility integrated in a metabolic, encapsulated automaton that defines the living cell. (Notice, the studious absence of objectors in that thread.)

Going to cosmological level, perhaps the pivotal observation is that the observed cosmos seems fine-tuned for C-chemistry, aqueous-medium, cell based life. For just one instance, Astrophysicist Sir Fred Hoyle famously noted:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of ¹²C to the 7.12 MeV level in ¹⁶O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16. Emphasis added.]

He went on to say:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November 1981, pp. 8–12.]

Canadian astrophysicist (and Old Earth Creationist) Hugh Ross aptly explains:

As you tune your radio, there are certain frequencies where the circuit has just the right resonance and you lock onto a station. The internal structure of an atomic nucleus is something like that, with specific energy or resonance levels. If two nuclear fragments collide with a resulting energy that just matches a resonance level, they will tend to stick and form a stable nucleus. Behold! Cosmic alchemy will occur! In the carbon atom, the resonance just happens to match the combined energy of the beryllium atom and a colliding helium nucleus. Without it, there would be relatively few carbon atoms. Similarly, the internal details of the oxygen nucleus play a critical role. Oxygen can be formed by combining helium and carbon nuclei, but the corresponding resonance level in the oxygen nucleus is half a percent too low for the combination to stay together easily. Had the resonance level in the carbon been 4 percent lower, there would be essentially no carbon. Had that level in the oxygen been only half a percent higher, virtually all the carbon would have been converted to oxygen. Without that carbon abundance, neither you nor I would be here. [Beyond the Cosmos (Colorado Springs, Colo.: NavPress Publishing Group, 1996), p. 32. HT: IDEA.]

In short, we are looking at how our observed cosmos turns out to be “suspiciously” set up for Carbon-Chemistry, aqueous medium, cell based life.

Indeed, as we saw, it turns out that the four most abundant elements in the cosmos are linked through a key set of properties and interactions at the nuclear level (I here speak of the resonance responsible for the abundance of C and O): H gives us stars; He, the build-up of other elements from the “ash” of H fusion; C and O, water and organic chemistry. Add another common element, N, and we are at proteins. Look at the delicate and unique properties of water, and the impression of purpose, given the evident fine tuning, is overwhelming.

At least to those open to consider it.

In short, through inference on warranted signs, there is a serious case for design of life and of the cosmos that accommodates it.

But, if you were to listen to the evo mat establishment and take them at their word on their line of talking points, you would never see that.

A sign — a sadly revealing one — of our times. END


14 Responses to ID Foundations, 16: A pivotal facet of ID foundations so far — the significance of inductive reasoning on observed, reliable signs for inferring design in the world of life and the fine tuned cosmos

  1. F/N: Just to sum up and remind ourselves on the core ID inference, in the teeth of the tendency to obscure it in the midst of all sorts of debates. Now that we have revisited OOL (thanks to Mignea), maybe we can begin to look at OO body plans and the Darwinian tree of life gradualist origins model. Not for this thread though. Let us focus here on inference on sign as a key aspect of the epistemology and inductive logic of science. KF

  2. 1. The default hypothesis, if I cannot understand or can misrepresent a natural phenomena, is that an invisible intelligence did it.

    2. I don’t know how to compute the probability of a phenomenon happening naturally, so instead I will make up a probability and pretend I computed it, via several mathematical fallacies. The probability I made up is very low.

    3. Therefore, the phenomenon is only “explained” by an invisible intelligence.

    Or, to put it more succinctly,

    1. My cow died.
    2. The odds of my cow dying due to natural causes, I can’t compute, but I’ll make up a number and pretend I computed it.
    3. Therefore, my neighbor’s a witch.

    This was in fact the logic used by Cotton Mather at the Salem Witch trials.

  3. Diogenes:

    First, please look again at just the flowchart of the filter. You will see that there are two successive defaults before one would infer to design: necessity, and chance. (In other words you have shown yourself unable or unwilling to read a simple diagram to do duties of care to truth and fairness, before making a rather nasty adverse comment. Consider this a warning.)

    Necessity, FYI, is ruled out by high contingency as the first choice diamond shows.

    Once an aspect of a phenomenon is highly contingent, the default moves to chance. This is only ruled out if — per objective reasons to be shown on a case by case basis — chance is maximally unlikely to achieve the result. This covers the case of specified complexity, for reasons linked to the implications of a blind chance walk in a config space of appropriate scope.

    Notice, contrary to your talking point, no probability calculation is required, other than that which would estimate information-storing capacity per standard techniques. (In most relevant cases, info-carrying capacity is estimated directly using standard techniques, e.g. DNA is 2 bits per character, as is well known, though the codes tend to have some redundancy.)

    The issue is not probability estimation but the rather well-known sampling-theory result that a reasonable-size, more or less random sample will capture the bulk of a distribution but is unlikely to capture results from isolated and specially describable zones. For instance, this is often used in hypothesis testing. (The usual example I use is dropping a dart 30 times from a step ladder onto a bristol-board chart of a bell curve, marked out in even stripes. If you tot up the numbers of hits in the stripes, you will get a numerical pattern that tends to mimic the relative areas of the stripes. The far skirts will be unlikely to be hit, by chance at any rate.)
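    The dart-and-bell-curve illustration can be simulated (a sketch, modelling each drop as a draw from a standard normal distribution and treating roughly ±2.5 sigma as the “far skirts”; the seed is an arbitrary choice for reproducibility):

    ```python
    import random

    random.seed(42)  # fixed seed so the sketch is reproducible

    # 30 "dart drops", each modelled as a draw from a standard normal curve.
    drops = [random.gauss(0.0, 1.0) for _ in range(30)]

    central = sum(1 for z in drops if abs(z) < 1.0)     # bulk of the bell
    far_skirts = sum(1 for z in drops if abs(z) > 2.5)  # isolated tail zones

    print(f"hits within 1 sigma of centre: {central} of 30")
    print(f"hits beyond 2.5 sigma: {far_skirts} of 30")
    # A small random sample reliably lands in the bulk of the distribution
    # and almost never in the far skirts -- the needle-in-the-haystack point.
    ```

    The counts vary with the seed, but the bulk stripe always dominates and hits beyond 2.5 sigma are rare, which is the sampling-theory point being made.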

    Now, you have gone on from failing to properly read a simple flowchart, to misrepresenting it grossly, to making inferences about probability that are unwarranted, and finally have tried to drag in the Salem Witch trials, which have nothing whatsoever to do with the substantial issue here; this just reflects hostility on your part and ill will manifested in bigotry.

    This last is clearly an exercise in willful atmosphere poisoning, which is not tolerated in this blog — for good reason.

    Given that; you have just one opportunity to make amends, and failing that kindly do not ever post again in any thread that I am author of.

    Good day

    GEM of TKI
    +++++++
    F/N: Given that the object of the accusations above is to poisonously distract attention from a serious and soberly presented issue, I am not willing to entertain debates on a side-track. Those genuinely seeking some reasonably objective info should cf. here; here and here will give more specifically Christian responses. The hysteria and improper trials were wrong, but a fair view will acknowledge that men of Christian conscience objected and in the end helped stop the madness. The penitence of the judge involved speaks for itself. A sounder view would be to realise that we are all finite, fallible, morally fallen/struggling and too often both ill-willed and gullible when our itching ears are tickled with what we want to hear. So, we should be careful to restrain power, and should do diligence by the duties of care to truth, fairness and respect for neighbour. (That includes eschewing the sort of well-poisoning tactic I have had to call attention to above.) A solid dose of Jesus’ parable of the Good Samaritan will teach us much about who our neighbour that we ought to love is.

  4. 1. The default hypothesis, if I cannot understand or can misrepresent a natural phenomena, is that an invisible intelligence did it.

    Let’s ignore who wrote this and concentrate on what was written:

    1. If I cannot understand a natural phenomena, an invisible intelligence did it.

  5. Mung:

    Diogenes was ill informed or willfully deceitful on the question of defaults. He then used this in an atmosphere-poisoning exercise that appeals to misinformation, bigotry, prejudice and slanders of Christians and those influenced by Christianity as a whole.

    Such incivility has to be corrected, and restrained. As well we know from experience on what it does and has repeatedly done in this blog and elsewhere. That is why I took such sharp action as to tell him he is facing strike 3.

    Now, as to the substantial claim:

    1: Inference to intelligence, contrary to his assertion is neither a first nor a second default.

    2: It is resorted to in cases where we have high contingency (so, not mechanical necessity), and contingencies maximally implausible to have been arrived at by chance based random sampling of a config space (the needle in haystack issue).

    3: Next, intelligence itself — like ever so many physical phenomena, such as energy or various particles — is INVISIBLE, but evident from effects and reliable, observable signs. (A major point of the OP.)

    4: The assertions about “natural phenomena” suggest strongly an implicit question-begging that all that is, is material. This is consistent with assertions in the parallel thread on freedom. In short, the discussion is loaded and cannot proceed with that “have you stopped beating your wife” sort of insinuation on the table.

    5: Taking, instead, “natural phenomena” to mean OBSERVABLE phenomena (the reasonable meaning), the centrality of inference on tested, reliable sign at once comes up. Many observable phenomena trace to unobserved or even unobservable causal factors. Energy is a good example. So is the unobservable past of origins.

    6: So, we are warranted to discuss invisible causes, once we have reasonable signs that are observable and can be correlated with and dynamically connected to the underlying causes. The deer that makes a track may not be visible, and by the time you follow up, it may not any longer exist [the example comes from a page that talks about mountain lions and their favourite lunch . . . ], but from track we properly infer to the invisible deer that walked through the mud.

    7: Intelligence is invisible but familiar from within and from interacting with and observing the actions of those who are as we are.

    8: So, since it has credible signs, we can discuss it and discuss tracking it down from its traces. Such as FSCO/I.

    9: In fact, we routinely do so, or we could not live and operate in human community. That is, the dismissive, loaded terms being used are selectively hyperskeptical.

    10: Instead, we ought to be consistent and reasonable in our standards of warrant. (Which brings us to the very point of the design filter, i.e. presenting in a visual summary how we warrant an inference to design.)

    11: And so, utterly contrary to the snide insinuation, inference to design is precisely NOT a label for want of understanding — that is where the implicit evo mat lurks, but an inference based on our understanding of intelligent causes and their characteristic traces.

    ========

    In short, the argument presented is wrong-headed and utterly fallacious. As well as viciously loaded and snidely dismissive. That is, it is irresponsible and should be acknowledged as wrong and withdrawn.

    KF

  6. englishmaninistanbul:

    I have formulated a little thought experiment to try with ID opponents. Let’s see what you make of it.

    I would ask the following questions in order:

    1. Is it possible that we could discover an artifact on Mars that would prove the existence of extraterrestrials, without the presence or remains of the extraterrestrials themselves?

    2. If yes, exactly what kind of artifact would suffice? Car? House? Writing? Complex device? Take your pick.

    3. Explain logically why the existence of this artifact would convince you of the existence of extraterrestrials.

    4. Would that explanation be scientifically sound?

    I would then make the following assertions:

    a. If you answer “Yes” to Question 4, then to deny ID is valid scientific methodology is nothing short of doublethink, even if you disagree with its conclusions.

    b. If you can answer Question 3 while answering “No” to Question 4, then you are admitting that methodological naturalism/materialism-based science is not always a reliable source of truth.

    c. If you support the idea that [ID-excluding] methodological naturalism/materialism is equal to [rationality], then you are obligated to answer “No” to Question 1.

    And hopefully a lively conversation would ensue.

    +++++
    ED: Per request, adjusted. KF

  7. EIB:

    Good example, I note that c should refer to rationality.

    Let’s see if there is a reasonable response.

    KF

    PS: Robin Collins uses the discovery of a biosphere on Mars as a thought exercise to speak to the fine tuning issue, cf. no. 6 in this series, and onward linked material.

  8. englishmaninistanbul:

    KF:

    Thanks for the correction, and the reference.

    I’m wondering if I should have phrased “methodological naturalism/materialism” as “ID-excluding methodological naturalism/materialism”, since even a reductionist approach to the question of agency need not necessarily exclude an empirically verifiable extraterrestrial intelligence.

  9. Point, adjustments made as requested.

  10. F/N: Collins excerpt (doc format):

    _______

    >> The Evidence of Fine-tuning

    Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist. The temperature, for example, was set around 70 °F and the humidity was at 50%; moreover, there was an oxygen recycling system, an energy gathering system, and a whole system for the production of food. Put simply, the domed structure appeared to be a fully functioning biosphere. What conclusion would we draw from finding this structure? Would we draw the conclusion that it just happened to form by chance? Certainly not. Instead, we would unanimously conclude that it was designed by some intelligent being. Why would we draw this conclusion? Because an intelligent designer appears to be the only plausible explanation for the existence of the structure. That is, the only alternative explanation we can think of–that the structure was formed by some natural process–seems extremely unlikely. Of course, it is possible that, for example, through some volcanic eruption various metals and other compounds could have formed, and then separated out in just the right way to produce the “biosphere,” but such a scenario strikes us as extraordinarily unlikely, thus making this alternative explanation unbelievable.

    The universe is analogous to such a “biosphere,” according to recent findings in physics. Almost everything about the basic structure of the universe–for example, the fundamental laws and parameters of physics and the initial distribution of matter and energy–is balanced on a razor’s edge for life to occur. As eminent Princeton physicist Freeman Dyson notes, “There are many . . . lucky accidents in physics. Without such accidents, water could not exist as liquid, chains of carbon atoms could not form complex organic molecules, and hydrogen atoms could not form breakable bridges between molecules” (1979, p. 251)–in short, life as we know it would be impossible.

    Scientists and others call this extraordinary balancing of the fundamental physical structure of the universe for life the “fine-tuning of the cosmos.” It has been extensively discussed by philosophers, theologians, and scientists, especially since the early 1970s, with many articles and books written on the topic. Today, many consider it as providing the most persuasive current argument for the existence of God. For example, theoretical physicist and popular science writer Paul Davies claims that with regard to the basic structure of the universe, “the impression of design is overwhelming” (Davies, 1988, p. 203). >>
    _______

    More food for thought. (Cf discussion in ID Founds 6, here.)

    KF

  11. Diogenes: “1. The default hypothesis, if I cannot understand or can misrepresent a natural phenomena, is that an invisible intelligence did it.”

    Or that an invisible intelligence didn’t do it. What’s good for the goose is good for the gander. And all parties would be well served here by brushing up on the Principle of Insufficient Reason.

    Kairos: “Warrant” is a terrible term to use. It’s as meaningless to the lay as it is to the pros. I understand the preference over using “justified”, but in either case it’s still no more than a statement of personal rationalizations. Personal taste and all that, I suppose.

    You’re also making a hash of Abduction and Induction. Please don’t do this, it’s too terribly common in the other camp and throughout discussions of theories in general already. Induction is always and everywhere supported, but not guaranteed, by what is known. And often when generalizing is generalized counterfactually. Abduction is, however, purely “what if” nonsense. It’s valuable but it is not even remotely the same thing.

    Your example of deer tracks is spot on to this. And precisely because we know deer, their parts, their movement patterns, and the tracks they leave behind. Every piece of the puzzle is already known and what remains is simply a matter of what odds you get from the bookmaker.

    And this has everything to do with time machines. Have we put the whole kibosh together for the hands-off construction of macroevolution? No. For ID constructions? Well, that depends on how you feel about 500 myo genes. But in both cases what is missing is the time machine necessary to satisfy the historical side of things. And while we may not have a time machine it must be possible somewhere in the multiverses…

    And I’m not at all certain what you’re intending to mean with ‘Contingency’. Do you mean to take it as necessary consequences of contingent antecedents, as a stochastic correlative approach, or do you mean recursive stability? There are differences and which is which is not at all clear from your flowchart and discussion.

    But if we go by your flowchart then any possible idea of ‘reliable Induction’ is a mile wide of the target you’re shooting at. For if ‘reliable Induction’ means anything then the only ‘warrant’ we may have about chaotic systems is that they are not analytical. That goes just as well for ID as it does for Darwin.

    And no, there’s no argument here until you can get me reliable weather forecasts. A much simpler and more regular system that can’t do stuff-all to match the system more than 10 days out by anyone’s best efforts.

    But theories? Theories don’t have any need of warrant. They are not there to be rationalized or believed. They are big honking “maybe this” affairs that are valid as long as they are not inconsistent. Leave the cultist cudgel to the other camp and take the higher road.

  12. Hi Maus:

    Actually, warrant is a favoured term in epistemology, post Gettier counter-examples to justification. I use it to highlight objectivity of the grounds of credibility of a belief that we have good reason to think it more than a figment of our imaginations.

    KF

  13. Maus:

    Following up.

    Intelligence in general is invisible, but reasonably inferred from effects. Indeed, from characteristic signs.

    As for abduction vs induction, the former is a species of the latter, the best model being inference to the best explanation. I here follow Copi and others in the view that inductive arguments reason to conclusions made more likely to be true by the accepted premises [not necessarily a generalisation], where the premises are often summaries from experience.

    IEP, for instance notes:

    A deductive argument is an argument in which it is thought that the premises provide a guarantee of the truth of the conclusion. In a deductive argument, the premises are intended to provide support for the conclusion that is so strong that, if the premises are true, it would be impossible for the conclusion to be false.

    An inductive argument is an argument in which it is thought that the premises provide reasons supporting the probable truth of the conclusion. In an inductive argument, the premises are intended only to be so strong that, if they are true, then it is unlikely that the conclusion is false.

    The difference between the two comes from the sort of relation the author or expositor of the argument takes there to be between the premises and the conclusion. If the author of the argument believes that the truth of the premises definitely establishes the truth of the conclusion due to definition, logical entailment or mathematical necessity, then the argument is deductive. If the author of the argument does not think that the truth of the premises definitely establishes the truth of the conclusion, but nonetheless believes that their truth provides good reason to believe the conclusion true, then the argument is inductive . . .

    On this approach, abduction is an important species of induction and the key question is why a particular claim is held to be the best explanation. In the OP, that is on the matter of signs and what they reliably point to, as in deer tracks and deer. (Notice, how I am careful to point out that deer tracks do not guarantee deer.)

    In terms of the inductive pattern of warrant, the first issue is that we know intelligence and have identified characteristic signs it often leaves behind. When it comes to FSCO/I, intelligence is not only the routine source but the only observed causally adequate source. The needle-in-a-haystack type of analysis shows part of why.

    On the strength of this pattern, we infer — per good reason — that FSCO/I is an empirically reliable sign of intelligence.

    Of this pattern of reasoning, Newton aptly said in Principia:

    Rule I [--> adequacy and simplicity]

    We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances . . . .

    Rule II [--> uniformity of causes: "like forces cause like effects"]

    Therefore to the same natural effects we must, as far as possible, assign the same causes.

    As to respiration in a man and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets.

    Rule III [--> confident universality]

    The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.

    For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence of experiments for the sake of dreams and vain fictions of our own devising; nor are we to recede from the analogy of Nature, which is wont to be simple, and always consonant to [398/399] itself . . . .

    Rule IV [--> provisionality and primacy of induction]

    In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.

    This rule we must follow, that the arguments of induction may not be evaded by [speculative] hypotheses.

    So, it is quite reasonable to infer from FSCO/I to its empirically warranted cause, intelligence.

    In addition, it is reasonable to insist that those who would reject such an inference empirically demonstrate, and so warrant, their implication or claim that blind chance and mechanical necessity — without the material involvement of intelligence — can and do, in our observation, produce at least 500 – 1,000 bits of functionally specific explicit information, or the equivalent in implicit information in a functionally organised entity. (This is rather like saying: if you believe that the laws of thermodynamics, which effectively forbid perpetual motion machines of various kinds for reasons tracing to the statistical nature of matter, do not credibly hold, you must be prepared to show a successful case of such a PMM.)
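    The 500-bit threshold can be motivated with a quick order-of-magnitude sketch. The round figures below (atoms in the solar system, fastest chemical event rates, age of the cosmos) are common illustrative estimates I am supplying, not numbers taken from the comment itself:

```python
from math import log2

# Needle-in-a-haystack sketch: compare the configuration space of a
# 500-bit string with a generous upper bound on the number of
# chemical-scale events available to a blind search in the solar system.
# All figures are rough, illustrative orders of magnitude.

bits = 500
config_space = 2 ** bits           # distinct 500-bit configurations

atoms_in_solar_system = 10 ** 57   # rough order-of-magnitude estimate
fast_events_per_second = 10 ** 14  # ~ fastest chemical reaction rates
seconds_available = 10 ** 17       # ~ 13.8 billion years in seconds

# Upper bound on trials: every atom acting as an observer/tester,
# at the fastest chemical rate, for the entire history of the cosmos.
max_trials = atoms_in_solar_system * fast_events_per_second * seconds_available

fraction_searchable = max_trials / config_space

print(f"log2(config space)  = {bits}")
print(f"log2(max trials)    ~ {log2(max_trials):.1f}")
print(f"fraction searchable = {fraction_searchable:.3e}")
```

    Even on those generous assumptions, the maximal number of trials comes to roughly 2^292, so only a vanishingly small fraction of the 2^500 configurations can ever be sampled — the "needle in a haystack" point in numerical form.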

    Failing such a demonstration of the causal power of unaided chance and necessity, it is improper to dismiss the point that FSCO/I is a reliable sign of intelligent design. And to demand, in effect, that we take a time machine and travel back to observe, instead of being willing to infer from an empirically reliable and analytically warranted sign to the thing signified, is a clear case of selective hyperskepticism.

    So long as an intelligence at the place and time in question is possible, we have no reason to reject FSCO/I as a reliable sign indicating the action of such an intelligence.

    In the case of the origin of life, we are dealing with codes, algorithms, effecting machines, and associated molecular nanotechnology, all integrated in an encapsulated entity that carries out both metabolism and self-replication. As Mignea’s simplest self-replicator model shows, that entails quite considerable, specific, integrated complexity, well beyond 500 – 1,000 bits.

    On the strength of that, I beg to disagree with your assertions just above.

    As a footnote, contingent entities depend on necessary, external causal factors at one level; and in the particular cases in view, under evidently similar initial conditions, there are various possible outcomes, much as happens when one drops a die. A fair die will be influenced by chance to read from 1 to 6, but a loaded one will be largely driven by intelligence.

    You speak of chaotic systems. I take it you mean those exhibiting sensitive dependence on initial conditions. The die is of course an example: tiny chance factors make for drastically different outcomes, even though the pattern of relevant equations may be deterministic. In such cases, chance factors, or the clash of uncorrelated causal chains, make for an outcome that varies greatly across initial conditions that are quite similar. Eight corners and twelve edges do that for you when they come in contact with the table, etc., making a die a useful random source. Indeed, the sensitive dependence acts as an error amplifier, and such systems magnify inescapable chance factors into large-scale divergence of outcomes.
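    The error-amplification point can be seen in a minimal numerical sketch. I use the logistic map in its chaotic regime (my own choice of textbook example, not the die itself): two trajectories starting a hair apart diverge until they are effectively uncorrelated, even though the update rule is fully deterministic.

```python
# Sensitive dependence on initial conditions: iterate the logistic map
# x -> r * x * (1 - x) with r = 4 (chaotic regime) from two starting
# points separated by only 1e-10, and watch the separation grow.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Return the orbit [x0, x1, ..., x_steps] under the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# The gap roughly doubles per step until it saturates at order 1,
# i.e. the tiny initial difference is amplified to macroscopic scale.
for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: separation = {abs(a[n] - b[n]):.3e}")
```

    The deterministic equations are not in question; the point is that unavoidable micro-scale differences in the initial state are amplified into macroscopically divergent outcomes, which is exactly why a die works as a random source.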

    The flowchart (adapted from Dembski et al. to emphasise a per-aspect approach, so that the overall outcome is the accumulation of the verdicts on the various aspects) has room aplenty for inductive analysis. For instance, if a natural regularity is detected, it points to law. If a statistically distributed contingency is seen, it points to chance working through a statistical distribution. It is when there is high contingency, in a context of complexity and specificity such that chance samples of the population of possibilities should not reasonably end up in special zones, and yet there we are, that it points to design.
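    The per-aspect decision structure just described can be rendered as a toy function. The encoding below (the three flags, the function name, and the single-threshold cutoff) is my own simplified stand-in; the real analysis is of course richer than three booleans and a number:

```python
# Toy rendering of the per-aspect explanatory filter:
# law first, then chance, with design as the verdict only when an
# aspect is both highly contingent AND complex AND specific.

THRESHOLD_BITS = 500  # lower bound of the 500-1,000 bit range in the post

def explain_aspect(low_contingency, info_bits, functionally_specific):
    """Classify one aspect of an object or event."""
    if low_contingency:
        return "law"      # natural regularity dominates this aspect
    if not functionally_specific or info_bits < THRESHOLD_BITS:
        return "chance"   # statistically distributed contingency
    return "design"       # complex AND specified: special zone reached

print(explain_aspect(True, 0, False))     # e.g. a falling stone -> law
print(explain_aspect(False, 900, False))  # long random string   -> chance
print(explain_aspect(False, 900, True))   # long coded text      -> design
```

    Note that the filter defaults to law or chance and only returns "design" for the conjunction of high contingency, complexity beyond the threshold, and functional specificity, which is why it is biased against false positives.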

    For instance, imagine a black box machine of unknown origin with a power switch and an output line. Turn it on the first time, and it emits a string of bits [hi/lo values] in no recognisable order, perhaps for a day.

    Turn it off, then turn it on again.

    This time it emits in sequence the ASCII code for the text of Shakespeare’s corpus.

    The first time, the box emitted noise. The second time, code.

    The FSCO/I involved in the latter points to design.
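    The noise-vs-code contrast in the black-box illustration can be sketched crudely in code. The "specification" test here — whether 8-bit groups decode to printable ASCII — is my own toy stand-in for the much richer notion of functional specificity, but it captures the asymmetry: random bits almost never satisfy it throughout, while encoded English does:

```python
import random
import string

def fraction_printable(bits):
    """Decode a bit string 8 bits at a time; return the fraction of the
    resulting bytes that are printable ASCII characters."""
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return sum(c in string.printable for c in chars) / len(chars)

random.seed(1)  # fixed seed so the sketch is repeatable

# First run of the box: 100 bytes of coin-flip noise.
noise = "".join(random.choice("01") for _ in range(8 * 100))

# Second run: ASCII-encoded English text (a stand-in for the corpus).
text = "To be, or not to be, that is the question. " * 3
code = "".join(format(ord(c), "08b") for c in text)

print(f"noise: {fraction_printable(noise):.2f} printable")
print(f"code : {fraction_printable(code):.2f} printable")
```

    For the noise stream, only a minority of the random bytes happen to land in the printable range, while the encoded text decodes printably throughout — a crude but concrete instance of a string sitting in a narrow, functionally meaningful zone of its configuration space.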

    And the difference seen here in no way depends on whether we can make long-term weather forecasts. As you know, the issue there is error amplification through nonlinearities, which is simply not relevant to the ability to recognise FSCO/I and infer to its empirically warranted source. (You have set up and knocked over a strawman, I am afraid.)

    I trust this helps.

    KF

  14. Maus: Are the remarks on method here helpful? KF
