
Does ID Contribute to Knowledge?

A friend of mine sent me a link to a recent article by Giberson. In it, Giberson claims that the problem with ID is that “first they need a fertile idea—one that generates new scientific knowledge”. I think ID has already done this.

Many eminent scientists have noted that the reductionist way of looking at biology in the 20th century cannot remain forever the way biology operates. Carl Woese’s “A New Biology for a New Century” is a good example of this. The question, though, is what to replace the reductionist view with.

What ID has done is develop a conceptual framework (or a set of frameworks) for examining biological phenomena.

First and foremost, it allows a biologist to investigate the logical relationships in an organism or an environment without having simultaneously to establish a historical relationship. The reductionist paradigm filters out ideas about logical relationships that lack historical causes. That’s why so few people comment on the relationship between the human eye and the squid eye, or between the placental and marsupial mammals as whole groups. Design gives an intellectual justification for seeing logical relationships even when the historical relationships are unknown or unlikely.

That, for instance, is the impetus behind Wells’ examination of the centriole’s polar ejection force – it makes perfect sense logically, but it is completely ridiculous to imagine historical, contingent causes bringing such a system into being. That’s why Jonathan Wells – an ID person – thought of it, and no one else had.

The new way of thinking brings with it new questions, such as “how would one quantify the influence of design on evolution?” Dembski has helped here with his recent introduction of the Active Information metric. This brings in a whole new way of looking at a cell. A reductionist *couldn’t* ask the question. ID brings to the table new questions which are unthinkable to reductionists, and a set of tools to help answer them.
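
For readers unfamiliar with the metric: active information, as Dembski and Marks define it, compares the success probability of an assisted search against that of blind search. Here is a minimal sketch of the arithmetic (my own illustration with made-up probabilities, not anyone’s published code):

```python
import math

def endogenous_information(p):
    # Difficulty of the search problem: bits required to hit the target
    # by blind (uniform) search with success probability p.
    return -math.log2(p)

def active_information(p, q):
    # Active information: how much an assisted search (success probability q)
    # improves on blind search (success probability p), in bits:
    # I_plus = I_omega - I_s = log2(q / p)
    return math.log2(q / p)

# Hypothetical numbers for illustration: blind search succeeds with
# p = 2^-20, while some assisted search succeeds with q = 2^-8.
p, q = 2.0 ** -20, 2.0 ** -8
print(endogenous_information(p))  # 20.0 bits: the difficulty of the problem
print(active_information(p, q))   # 12.0 bits contributed by the assistance
```

The point of the metric is that those 12 bits have to come from somewhere: the assisted search embodies information about the target.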

Of course, the reductionists usually say that this isn’t new knowledge, because we aren’t answering *their* questions. However, I don’t see how the failure of someone else to ask relevant, consequential questions should be viewed as a mark against us.

In Seelke’s lab, they are asking the question, “how far can evolution go?” and developing models and experiments to answer it. For reductionists, the question is invalid. For ID’ers, the question is relevant. We don’t have a prior way of knowing how far evolution can go. Therefore, we must test! That’s why only ID’ers are conducting such experiments – we are the only ones who can ask the question!

Dr. Behe defined qualitative characteristics of systems that are unlikely to have evolved. Scott Minnich performed the knockout tests on the bacterial flagellum to determine if Behe’s characterization of it was correct. I added to that a model based on computability theory for understanding why these qualitative characteristics are important. The model gives a set of criteria for determining whether or not exogenous information is required for producing a biological change. Future versions will hopefully extend that into a numerical treatment. In what way is this not knowledge building? Only if you don’t like the questions.

Dr. Dembski set up a mathematical framework for calculating active information. This summer, at the BSG conference in July, you can see me apply it to a biological system to calculate the active information within a biological process. In what way is this not knowledge building? Only if you don’t like the questions.

Many people have used Kauffman’s self-ordering work as an explanation for how biological organization arises. However, as several ID’ers have pointed out, self-ordering is not the same thing as self-organization, despite the fact that proponents of self-ordering continually confuse the two issues. Is making this distinction not knowledge-building? Only if you don’t like there to be distinctions.

Dr. Axe has researched the sequence requirements for protein folds to occur. He asked the question, can these folds be created via evolution? He formulated and performed tests and experiments to determine the answer. Is this not knowledge building? Only if you don’t like the questions.

How far can evolution go? How far does common ancestry extend? When are relational causes required for changes? How much precoded information is in the genome to drive its own evolution? How can design be characterized mathematically? How are the organisms in the environment related to each other *logically*?

There are a lot of people who don’t like questions. They like the answers they have been handed, and want to stick with those answers no matter what. Other people like looking for new questions. They like looking beyond the abilities of our current tools and conceptions, and seeing what might be discoverable.

If you don’t like the questions – that’s fine. Don’t ask them. Don’t repeat them. Leave them out of discussion. But there’s a lot of us who think that these questions are fascinating and remarkable questions, and that the search for their answers is a worthwhile enterprise. Even those who think that nothing will be found should be delighted at the search, and how it can help us rethink even our existing questions, and the way we look at them. It has already brought us remarkable new knowledge and ways of looking at problems. If we keep asking questions, keep reframing our ideas in fresh new ways, and keep developing new tools to understand our world, then I don’t think we have much to fear from the shortsighted people who don’t like the questions.

11 Responses to Does ID Contribute to Knowledge?

  1. Since you are commendably eager to answer questions, rather than evade or ignore them, I have a few for you.

    1) Intelligent Design is founded on the appearance of design in nature. That appearance of design rests on similarities between properties of biological structures and systems and those designed by human beings. However, a scientific assessment would require comparing both similarities and differences in order to judge whether the similarities are significant or simply coincidental. Has this been done?

    2) ID is critical of the theory of evolution because, for example, it is unable to provide a detailed account of how life formed from inanimate chemicals. It complains that “Darwinists” are unable to draw step-by-step maps of the evolutionary pathways to complex organs from their precursors or from one species to the next. How does postulating a designer help to answer those questions at all, let alone do it better than the theory of evolution?

    3) ID proponents steadfastly refuse to speculate on the nature of the designer. Why is that? It follows naturally from the proposition that there is a designer and would surely be a matter of intense curiosity and research by any scientist worthy of the name.

    4) To be more specific, if the designer is not God but a lesser being then the designs it produces will be constrained by its limitations. A designer of stone tools from the Neolithic era, for example, is unlikely to have been responsible for the appearance of life on Earth. What can we infer about the nature of this putative designer from the appearance of design in nature?

    5) If positing an Intelligent Designer does not lead to deeper insights, better predictions and more detailed and accurate accounts of how life was created and developed, which is what is being demanded of the theory of evolution, then in what way is it superior?

  2. I remember that in at least one public debate somebody acknowledged that scientists who are ID proponents actually ask interesting and valuable questions. Being critical of the current state of the art in science is necessary, but you have to acknowledge the facts and be familiar with the theory. (A question like “how far can evolution go?” obviously presupposes that there are limits. Thus an evolutionary biologist would need to rephrase that question.)

    The problem is that publicly, ID proponents rarely give the impression that they are interested in this kind of question.

  3. Seversky:

    Why did you premise your comment on a strawman distortion? To wit:

    Intelligent Design is founded on the appearance of design in nature.

    On the contrary:

    1 –> Design theory, as you know or should know, is based on the widespread observation and experience of acknowledged designs in our world.

    2 –> This is multiplied by the identification of a cluster of empirically reliable, characteristic signs of such design, signs that have a rich base of empirical support. That is,

    3 –> where once we know the causal story through direct observation or credible record, these features, if present, are never seen as rooted in chance circumstances or in mechanical necessity playing out from circumstances that just happen to be one way and could as easily have been otherwise.

    4 –> Instead, they are so strongly correlated with design that we have grounds for an induction that such signs are empirically reliable markers of designs, even where we did not observe the causal process.

    5 –> Similarly, we can identify characteristic features that mark the results of forces of mechanical necessity acting on given initial circumstances.

    6 –> For instance, if one drops a heavy object, it reliably falls at 9.8 N/kg near the earth’s surface. And, adjusting for the spatial spreading of a field of influence through wider and wider spheres, the same force neatly accounts for the centripetal acceleration of the moon, thence the structure and going-concern operational behaviour of the solar system; thence Newtonian gravitation, and onward the General Theory of Relativity.

    7 –> By contrast, if the dropped object is a fair die that tumbles and rolls across a table, it settles to a highly contingent but credibly undirected — i.e. chance — outcome. This has a statistical pattern that reflects Bernoulli–Laplace indifference and/or modifications per biases that may apply to the circumstances.

    8 –> From this and similar circumstances we identify chance and/or random, probabilistic — I list these to flag the subtle differences and interactions that apply — patterns and characteristics that affect various aspects of events, objects and processes. These are deeply embedded in quantum theory, statistical mechanics, information theory and many other fields of pure and applied science.

    9 –> Especially in how we routinely snip out the random scatter that affects experiments, so that we may focus on the identified natural regularities tracing to mechanical necessity.

    10 –> Now, too, if our dropped object is a loaded die, we see what Wiki describes — making yet another damaging admission against interest — thusly:

    A loaded (or gaffed or cogged or weighted or crooked or gag) die is one that has been tampered with to land with a selected side facing upwards more often than it otherwise would simply by chance. There are methods of creating loaded dice, including having some edges round and other sharp and slightly off square faces. If the dice are not transparent, weights can be added to one side or the other. They can be modified to produce winners (“passers”) or losers (“miss-outs”).
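
    The contrast between a fair and a loaded die can be simulated directly. A short sketch (my own illustration; the weights are made up, not measurements of any real die):

```python
import random

random.seed(1)  # fixed seed so the tallies are reproducible

def roll_many(weights, n=60_000):
    # Roll a six-sided die n times with the given per-face weights
    # and tally how often each face comes up.
    faces = [1, 2, 3, 4, 5, 6]
    counts = {f: 0 for f in faces}
    for f in random.choices(faces, weights=weights, k=n):
        counts[f] += 1
    return counts

fair = roll_many([1, 1, 1, 1, 1, 1])    # each face lands near n/6
loaded = roll_many([1, 1, 1, 1, 1, 2])  # face 6 is twice as likely

print(fair)    # roughly uniform scatter around 10,000 per face
print(loaded)  # face 6 stands out, near 2/7 of the rolls
```

    The fair die shows only undirected scatter around the uniform expectation; the loaded die shows a persistent, recognisable bias, which is the kind of non-chance pattern the comment is pointing at.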

    11 –> This brings to the fore the distinguishing pattern of recognisable designs, as the well-known OOL theorist Wicken inadvertently acknowledged in the following cite [he was hoping to put one form or another of (apparently pre-life . . . more later) natural selection in the category of a competent designer]:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

    12 –> We therefore see how organised, functionally specific complexity and thus resulting functionally specific complex information are empirically grounded, distinguishing markers of non-random processes at work.

    13 –> In our direct observation of course, such FSCO and FSCI are only seen to be produced by art, using skilled methods. Thus, they are arguably markers of intelligence.

    14 –> But what of cumulative chance plus natural — as opposed to artificial — selection for environmental success?

    15 –> The reef that sinks this ship is the problem of functional, complex organisation required for even initial success.

    16 –> For, as has been pointed out — but too often ignored — once function requires complex organisation, the resulting configuration space for the required components soon explodes beyond merely astronomical scale. For instance, at the rule-of-thumb threshold for FSCI we need at least 500 – 1,000 bits [1,000 bits = 125 bytes] of information storage capacity to raise the issue, and 1,000 bits corresponds to 1.07 × 10^301 possible states.

    17 –> The whole observed universe of ~ 10^80 atoms, turned into a search engine — let’s say robot-monkeys, typewriters, paper and support resources — and changing state every Planck time across its thermodynamically credible lifespan [~ 50 million times the timeline since the generally proposed date of the big bang, 13.7 BYA], could not scan through 1 in 10^150 of the possible configs for just 1,000 bits.
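
    The arithmetic in the two points above can be checked with arbitrary-precision integers. A quick sanity check of the quoted figures only (the atom count and the generous lifespan are the commenter’s own estimates):

```python
# Number of distinct configurations for 1,000 bits.
configs = 2 ** 1000
print(f"{configs:.3e}")  # about 1.072e+301, matching the 1.07 x 10^301 figure

# A generous upper bound on state changes available to the observed universe:
# ~10^80 atoms, each changing state every Planck time (~10^-43 s),
# over ~10^25 s (roughly 50 million big-bang timelines).
max_operations = 10 ** 80 * 10 ** 43 * 10 ** 25  # = 10^148 states examined

# Fraction of the configuration space such a "search engine" could sample:
fraction = max_operations / configs
print(fraction)  # on the order of 10^-153

# The "1 in 10^150" claim: even 10^150 such universes could not cover the space.
print(max_operations * 10 ** 150 < configs)  # True
```

    Whatever one makes of the inference drawn from it, the raw arithmetic checks out.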

    [ . . . ]

  4. 18 –> That is, the robot monkeys pecking away at random cannot credibly even begin a search for islands of function [say, a sonnet of Shakespeare, a paragraph from one of his works, or indeed any coherent passage in English] before they run out of time and available energy: the heat death of the cosmos, with terrestrial sites running out even sooner as the relevant stars end their lifespans and the white dwarfs cool off.

    19 –> But of course known designers routinely produce such FSCI.

    20 –> What about chance plus natural, functional selection?

    21 –> The problem here — as genetic algorithms inadvertently demonstrate — is that the searches depend on an already existing threshold of function in order to work, and they depend on having carefully structured algorithms, integrated irreducibly complex organised processes, and underlying coding schemes or the equivalent.

    22 –> For instance, on origin of life, actual life — as opposed to speculative, if-pigs-could-fly scenarios such as RNA worlds and life spontaneously organising on clay or near undersea vents etc. — requires both metabolism [to build up components for functional life and to tear down waste and no-longer-useful products] and self-replication. But these are of course highly organised, well beyond the FSCI threshold.

    23 –> For instance, this video (which UD’s evolutionary materialism advocates have been dodging for weeks now) shows in simplified form how observed life makes the workhorse protein molecules that carry out the key processes of the cell. Notice the step by step, code based process of carefully controlled, organised assembly.

    24 –> Similarly, the observed replication system — as opposed to hypothetical autocatalysis — implements a von Neumann replicator, which is plainly FSCI rich, code based, uses algorithms and data structures that cannot simply be grabbed by chance from already existing uses, and is irreducibly complex so needs to be assembled all at once.

    25 –> To wit, a vNR uses:

    (i) an underlying code to record/store the required information and to guide procedures for using it,

    (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

    (iii) a tape reader that reads and interprets the coded specifications and associated instructions, and

    (iv) implementing machines (and associated organisation and procedures) to carry out the specified replication (including that of the constructor itself); backed up by

    (v) either:

    (1) a pre-existing reservoir of required parts and energy sources, or

    (2) associated “metabolic” machines carrying out activities that provide required specific materials and forms of energy by using the generic resources in the surrounding environment.

    26 –> Just this week, of course, Craig Venter and associates announced that they have synthesised — albeit by copying existing systems [and after fixing an error] — the DNA component of such a system, and have got it to work in a cell that had been deactivated. So, we see empirical warrant for the claim that designers can build such an entity. [We already know intelligences make the sort of things a vNR requires, and have made software versions.]

    27 –> So chance plus natural selection on better vs. worse function is not a credible source for the first living cells, but designers can demonstrably do at least some of the key tasks, and credibly could do them all once we master the technology.

    28 –> When it comes to the origin of novel major body plans [as in the Cambrian fossil record], we face the need to spontaneously create something like 10+ million bases of novel, functional body-plan and underlying cellular bio-information, integrate it, and get it to be embryologically feasible in a viable population, through functional and “fitter” intermediates every step of the way.

    29 –> To see what this runs into, think about how a wing needs a structure that is aerodynamically feasible, has enough power to control it for gliding, and has the relevant neurological controls to glide successfully, then also the additional power to move to full powered flight.

    30 –> And meanwhile the former foreleg is getting worse and worse at its primary tasks of clinging to a tree or running on the ground or grasping etc.

    31 –> Glorified just so stories are not good enough to overcome that set of challenges.

    32 –> And, when we see that the underlying cosmos that facilitates c-chemistry cell based life at suitable sites in it, seems to be exquisitely fine tuned cosmologically, across dozens of parameters, that is a further nail in the evolutionary materialist coffin.

    ___________________

    So, the issue comes down to whether science is meant to be a pursuit of truth, where knowledge is best understood as warranted, credibly true belief.

    If so, then a process that allows us to open up our minds to serious causal possibilities and assess them on empirical warrant by identifying and using signs of major causal factors is a major contribution to science. Even, given the dominance of a priori materialism, a revolutionary one.

    GEM of TKI

  5. Seversky -

    First of all, let me clarify just a bit that we are talking about biological ID. There is a larger scope of ID as well that *includes* human design. For instance, Dembski’s “The Design Inference” focused on the way that ID applies to *human* events, and only later work applied it to biological systems. In addition, I have pointed out elsewhere how ID can be used in human engineering. I didn’t make this distinction in the article, though I should have.

    So, on to answering your questions:

    1) The problem with your approach is that it uses human design as the archetype. ID does not. It uses human design as an indicator of a larger reality. It does not constrain that larger reality to human activity. In my Irreducible Complexity concept, for instance, the properties of design are based on mathematical features (Universality and open-ended systems) which are shown to be unattainable from physics-only systems.

    However, and this should be helpful to you, it does point out, based on those same mathematical features, the places where design is not required. It then goes back and shows the kinds of systems for which design is not required as an explanation, and even is able to detect the designed and non-designed parts of Avida organisms.

    2) First of all, the reason that Darwinists are unable to get a grasp of speciation isn’t just because of ID; it is also because they are unimaginative. Margulis, for instance, whom I imagine to be as anti-ID as anyone, has done a good job of showing that speciation is primarily a result of the gains and losses of symbioses, but her work has largely been ignored except in specific cases.

    Anyway, more to the point, this question (and others of yours) presumes that the *how* question is the most important one, and that it is always answerable. This is only true for materialists. What if *how* isn’t the most important question? That was one of the main points of the essay. As I said, “the reductionist paradigm filters out ideas about logical relationships which don’t have historical causes”.

    For the ID’er, there may indeed be logical relationships without historical causes. Whether or not a logical relationship has a historical cause is a question to be determined, not a question to which we already know the answer.

    Here’s a question – what if, for instance, organisms are pre-coded to have a certain set of ecological duties, and they can detect which duties need doing, and can evolve to fill that duty? This is a question that makes perfect sense to ask from an ID perspective (no matter what the answer), but is total nonsense from a historical perspective (what imaginable historical cause would give organisms the ability to fill ecological duties?).

    Just like the issue of centrioles for Wells, there are questions about how biology works that are only really able to be asked if you don’t require that one filter their reasoning through historical causes.

    3) “ID proponents steadfastly refuse to speculate on the nature of the designer.” That’s actually incorrect. The question is whether or not the *science* of ID indicates the nature of the designer. Right now, there are no tools within ID to answer that question. It’s not that it isn’t an interesting question, but rather that there is a lack of tools to do the job. There are many design-related questions that cannot be answered with ID today. If this is a question that you think is reachable, you should join us and help us develop tools and methods for doing so.

    In the U.S., most ID’ers are Christians, and think that the designer is the Judeo-Christian God. I know, at minimum, that Stephen Meyer, Casey Luskin, and William Dembski have all said so (and myself also). But, at the moment, it is precisely as you say – speculation and personal belief. No one in ID has developed appropriate methods and tools to analyze this question scientifically.

    4) “What can we infer about the nature of this putative designer from the appearance of design in nature?” Excellent question! Maybe you should join us in our research program. I think that biological ID, though, is generally unable to differentiate between an agent with advanced tools in the world and God outside the world. Cosmological ID, like the work done by Gonzalez, for instance, would have to have a designer outside the universe.

    However, I agree with you wholly that this is a great question. I just do not have the creativity to come up with a methodology to test it. If you do, I say go for it!

    5) Again, you are obsessed with the *how* questions. The way that ID is superior is that (a) it can ask the question of whether or not “how” is reachable or important, and (b) it can provide the intellectual basis for someone to infer logical relationships even where the historical ones are lacking.

    Note, for instance, that even though computer programs have historical causes, it is impossible to infer from a program the mechanism of its creation. I can’t tell from a book whether the person typing it was using a QWERTY or Dvorak keyboard. Can you? But nonetheless, computer scientists can still analyze, interpret, and relate computer programs in important ways. In fact, the question of logical relationship in computer science is far more important than the question of the physical causes behind the program.

    Do you think that, if computer science cannot provide “more detailed and accurate accounts of how” a computer program is created, it is therefore inferior to, say, someone with a video camera watching a programmer while he programs? I would say that the logical analysis of parts from computer science would provide us with more relevant information than the videotape. Whether module A was typed out before module B is irrelevant to the question of how they are made to work together.

    It is only the materialist who assumes ahead of time that the “how” question is superior to all other questions, and that it must be answerable for something to make sense. *That* is the primary way that ID is superior to other forms of inquiry.

  6. JB:

    I note: I can’t tell from a book whether the person typing it was using a QWERTY or Dvorak keyboard.

    But, on inference to best, empirically anchored explanation of functionally specific complex organisation and associated information, you can tell that the book credibly was an artifact. And in particular, you can tell that while it is LOGICALLY AND PHYSICALLY POSSIBLE that it was produced by robot monkeys randomly typing [the rumours about certain trashy, boiler plate novels are false, and Bill Gates has not been ordering carloads of bananas and peanuts for his staff], this is not a credible, plausible causal factor for a book.

    This is because of the inductive pattern we observe and the islands-of-function-in-a-configuration-space challenge we saw above.

    So, when we see FSCI as an aspect of an event, phenomenon, process or object, unless and until we can show an empirical basis for a different explanation, we are well warranted to infer, on the best known causal explanatory factor, to intelligent cause, and to call that knowledge by the same standards commonly used in science.

    (Those who object to this inference need to substantiate why it should be rejected without implicitly imposing a priori materialism a la Lewontin et al.)

    Ah gawn . . .

    GEM of TKI

  7. Seversky:

    A few quick answers:

    1) It is true that ID is based on an inference by analogy. But a very important point is that human beings are considered as capable of design because they are conscious and intelligent. In other words, it is our experience that design produced by humans is the output of conscious cognitive representations. Therefore, any conscious intelligent being can originate design, even if not human.

    I don’t understand your point about the differences. The products of human design can exhibit all possible differences, but they are still designed. The recognition of design in ID theory is based on CSI, not on vague similarities or differences. CSI is the evidence of design. All kinds of differences can be present, but CSI can be originated only by conscious intelligent beings.

    2) ID is not critical of neo-darwinian theory of evolution because “it is unable to provide a detailed account of how life formed from inanimate chemicals”. ID is critical of neo-darwinian theory of evolution because it is unable to provide a credible hypothesis of how life formed from inanimate chemicals. There is a big difference.

    ID, on the contrary, can originate multiple credible hypotheses about that. For instance, a conscious intelligent force could have favoured highly improbable biochemical processes, exactly as humans try to do in their laboratories, or could have acted directly on quantum phenomena in matter in order to realize unlikely events. The point is, any intelligent being who has input the missing information need not have violated known physical laws, but could certainly have used natural laws which are still not known. But that is the problem of the modalities of implementation of design, which is fully open to future research. The concept that external information has been added by one or more conscious intelligent beings is central to ID. Neo-darwinism, instead, has no way to explain the genesis of information, not only its implementation. That is the difference.

    3) ID proponents do not refuse to speculate on the nature of the designer. ID proponents (at least those who have a serious scientific approach) refuse to speculate from religious or philosophical ideas, at least in the scientific context. We can absolutely speculate on the nature of the designer from a scientific point of view, but only according to what is known from facts, especially from the observed designed things. We cannot go beyond that. Religious or philosophical speculations about the nature of the designer are, however, perfectly correct in a religious or philosophical context.

    4) In the present scientific context, I believe that we cannot say who the designer is. We do not have enough scientific information for that. Perfectly valid theories compatible with ID are: a god, an intelligent and conscious force immanent in reality, aliens, non-physical entities of any kind, physical entities unknown to us or to our knowledge of history, and so on. I am not sponsoring any of these hypotheses in a strict ID context, but I do believe that, as our biological and general knowledge grows, we will be able to say much more about this issue.

    5) ID is a cognitive scenario more than a specific theory. It is superior to reductionist scientism because it takes into account very important components of reality which are simply ignored by the current materialist approach: consciousness, design, intelligence, purpose, and so on. Therefore, an ID approach allows a greatly superior understanding of reality.

    To be very simple: if biological information has really been designed by conscious intelligent beings, as ID affirms, can you really believe that we have better chances of understanding it correctly by stubbornly pretending that it arose by RV and NS?

    Adherence to truth and reality is the first and foremost factor in science. Sticking to obvious lies can never help.

  8. Footnote:

    It is a common objection these days to identify a particular argument as based on reasoning by analogy. Then, on the observation that analogies are not certainties and can break down, analogies that are not welcome are dismissed without serious consideration.

    This variety of selective hyperskepticism needs to be addressed in general and in the context of the credible source of digital, algorithmic functionally specific complex information.

    1 –> Analogies — roughly, reasoning by family resemblance — are not deductive and like the other inductive arguments that build on them, seek to provide warrant through cogency rather than demonstration.

    (We should also observe that deductive arguments depend on the credibility of their premises, which is as a rule established on our individual and collective experience and understanding of the world as conscious, intelligent, minded, language-using, reasoning, enconscienced creatures. Proofs, in short, have to have premises, and we have confidence in premises based in large part on inductive warrant. So either the objection directly reduces to global skepticism, or to a convenient objection to what one does not like.)

    2 –> Citing Wikipedia, as again an admission against interest:

    Analogy plays a significant role in problem solving, decision making, perception, memory, creativity, emotion, explanation and communication. It lies behind basic tasks such as the identification of places, objects and people, for example, in face perception and facial recognition systems. It has been argued that analogy is “the core of cognition”.[3] Specific analogical language comprises exemplification, comparisons, metaphors, similes, allegories, and parables, but not metonymy. Phrases like and so on, and the like, as if, and the very word like also rely on an analogical understanding by the receiver of a message including them. Analogy is important not only in ordinary language and common sense (where proverbs and idioms give many examples of its application) but also in science, philosophy and the humanities. The concepts of association, comparison, correspondence, mathematical and morphological homology, homomorphism, iconicity, isomorphism, metaphor, resemblance, and similarity are closely related to analogy. In cognitive linguistics, the notion of conceptual metaphor may be equivalent to that of analogy.

    3 –> So, when we turn to inference to best explanation or other inductive forms of argument, we should not be surprised to see that analogy lies close to the heart of how they work. As the same source notes in another admission:

    Inductive reasoning, also known as induction or inductive logic, is a kind of reasoning that allows for the possibility that the conclusion is false even where all of the premises are true. The premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; i.e. they do not ensure its truth . . . . Inductive arguments are never binding but they may be cogent. Inductive reasoning is deductively invalid. (An argument in formal logic is valid if and only if it is not possible for the premises of the argument to be true while the conclusion is false.) In induction there are always many conclusions that can reasonably be related to certain premises. Inductions are open; deductions are closed . . . in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed.

    4 –> Thus, we see the relevance of the strategy of inference to the best [current] explanation [IBE], which I -- following Peirce et al -- would view as foundational to science. Yet another admission tells us:

    Abductive reasoning [i.e. IBE] starts when an inquirer considers a set of seemingly unrelated facts, armed with an intuition that they are somehow connected . . . . Abduction allows inferring a as an explanation of b. Because of this, abduction allows the precondition a to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like “a entails b” is used for inference. As such abduction is formally equivalent to the logical fallacy affirming the consequent or Post hoc ergo propter hoc, because there are multiple possible explanations for b.
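    The directional asymmetry described in the quote can be made concrete with a toy rule base (the rules and names here are invented purely for illustration): deduction runs a rule "a entails b" forward, while abduction runs it backward and, as the quote warns, always surfaces multiple candidate explanations.

    ```python
    # Toy illustration of deduction vs abduction over rules (antecedent, consequent).
    rules = [("rain", "wet_grass"), ("sprinkler", "wet_grass"), ("rain", "clouds")]

    def deduce(fact):
        """Forward: from a, conclude every b licensed by a rule a -> b (truth-preserving)."""
        return {b for a, b in rules if a == fact}

    def abduce(observation):
        """Backward: from b, list every a that *could* explain it (not truth-preserving)."""
        return {a for a, b in rules if b == observation}

    assert deduce("rain") == {"wet_grass", "clouds"}
    # Multiple candidate explanations survive -- hence the need for IBE to compare them:
    assert abduce("wet_grass") == {"rain", "sprinkler"}
    ```

    The second assertion shows exactly why abduction alone mirrors affirming the consequent: both "rain" and "sprinkler" explain the wet grass, and only a further comparison of explanations can rank them.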

    5 –> But of course, as the practice of science tells us, by comparing seriously competitive possible explanations [no deck-stacking permitted!] on factual adequacy, logical and dynamic coherence, explanatory power and elegance [simple, powerful, not either simplistic or ad hoc] etc, we can have high confidence in the empirical reliability and warranted credible [though provisional] truthfulness for a given explanation, a.

    6 –> In short, it is well-done abduction that lends explicit or implicit strength to an induction. And, where empirical reliability is more central than absolute demonstration of truth — hard save for undeniably true claims — that is in praxis quite good enough. (I actually hold that a lot of scientific theorising is really glorified modelling [which notoriously uses empirically validated simplifications -- often, strictly false! -- to get useful results]; and we need to be very careful when we assert that THEORETICAL claims of science are true or “the truth.”)

    7 –> When we turn to the subject of functionally specific complex organisation and associated information in biosystems, the usual whipping boy is Paley and his analogy of the watch. Watches do not self-replicate so are not subject to “natural selection” [notice how the actual claimed information generator, chance variation usually gets omitted . . . ], and the analogy can be dismissed.

    [ . . . ]

  9. 8 –> Not so fast; those are only the introductory remarks in Chapter 1. I find it highly interesting that we simply do not hear discussions of the following from Chapter 2:

    Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself — the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, files, and other tools — evidently and separately calculated for this purpose . . . .

    The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use.

    9 –> Thus, in significant measure, at the turn of C19 [i.e. quarter century after Hume], Paley pointed the way to the work of von Neumann [remember, the originator of the dominant architecture of modern computers], who from 1949 identified the logical and dynamical requisites for an automaton that does something real-world, three-dimensionally physical and has in it the additional capacity to self-replicate:

    (i) an underlying code to record/store the required information [that specifies how to build the general machine and the replication facility itself! This code in principle could be a sequence of cams on a cam bar that program our watch to make the required steps . . . ] and to guide procedures for using it,

    (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

    (iii) a tape reader machine [called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions to guide implementation, and

    (iv) implementing machines (and associated organisation and procedures) to carry out the specified replication (including that of the constructor itself); backed up by

    (v) either:

    (1) a pre-existing reservoir of required parts and energy sources, or

    (2) associated “metabolic” machines carrying out activities that provide required specific materials and forms of energy by using the generic resources in the surrounding environment.
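    The five requisites (i)–(v) can be sketched as a toy software model. This is only an illustrative sketch, not von Neumann's own kinematic formalism; all class and variable names here are invented for the example:

    ```python
    # Toy model of von Neumann's self-replicating automaton requisites (i)-(v).
    class Automaton:
        """Couples a (i)/(ii) coded blueprint tape with the machinery it describes."""
        def __init__(self, tape):
            self.tape = tape  # coded blueprint: each symbol names a required part

        def construct(self, parts):
            # (iii) tape reader + (iv) implementing machinery: interpret each
            # symbol as "consume one part of this type from the reservoir".
            built = []
            for symbol in self.tape:
                if parts.get(symbol, 0) == 0:
                    return None       # any missing requisite => no function at all
                parts[symbol] -= 1
                built.append(symbol)
            return built

        def replicate(self, parts):
            # Build the described machine AND copy the tape itself, so the
            # offspring can replicate in turn.
            if self.construct(parts) is None:
                return None
            return Automaton(self.tape)

    reservoir = {"A": 10, "B": 10, "C": 10}   # (v)(1) pre-existing parts store
    parent = Automaton("ABCAB")
    child = parent.replicate(reservoir)
    assert child is not None and child.tape == parent.tape

    # Irreducibility in miniature: remove one part type and replication fails.
    assert parent.replicate({"A": 10, "B": 10}) is None
    ```

    The final assertion echoes point 10 below in the original numbering: until every component and requisite is present in a properly organised way, the system does not partially work, it simply does not work.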

    10 –> This is of course irreducibly complex. Unless and until all main components and underlying requisites [such as the code, energy sources and materials sources] are present in a properly organised functional way, it will not work. (This can be demonstrated from software instantiations, and of course Craig Venter’s error that had to be fixed before the bacteria would start up says much the same.)

    11 –> So, Paley’s point is very definitely back on the table. Especially as, in the CV + NS –> Darwinian evolution model, the premise is that we have viable, functional, environmentally more or less fit, reproducing populations. And the CV part is the component that is expected to hit on novel function by chance.

    12 –> So, if the function requires a certain threshold of complexity [125 bytes is 1,000 bits, which I prefer to 500, to make clear the point about swamping the possible states the observed universe can scan], then it is not credibly reachable by CV + NS, whether in the pre-biotic world or in the biological world.
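    The arithmetic behind the 1,000-bit threshold can be checked in a few lines. The bound below uses the figures commonly cited in these discussions (roughly 10^80 atoms, Planck-time scale events, a generous time window); they are round-number assumptions for illustration, not measurements:

    ```python
    # Compare the configuration space of a 1,000-bit string against a generous
    # upper bound on the number of states the observed universe could scan.
    configs = 2 ** 1000                       # distinct 1,000-bit strings, ~1.07e301

    atoms = 10 ** 80                          # assumed atom count, observed universe
    planck_ticks_per_sec = 10 ** 45           # assumed events per atom per second
    seconds = 10 ** 25                        # window well beyond the cosmos's age
    max_states = atoms * planck_ticks_per_sec * seconds   # ~1e150 total states

    # 2^1000 exceeds even the SQUARE of that state count (1e300):
    assert configs > max_states ** 2
    ```

    This is the sense in which 1,000 bits "swamps" the search: even granting every atom a Planck-rate search for far longer than the universe's age, the fraction of the space sampled is vanishingly small.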

    13 –> Prebiotically, because a metabolising, self-replicating organism is beyond that barrier [hypothetical spontaneously occurring autocatalysing, self-replicating molecules or sets of molecules are neither credibly empirically supported nor functionally comparable to cells].

    14 –> In the bio world, because to get to a body-plan innovation of any significance, you have to cross yet another sea of configurations to find deeply isolated islands before natural selection across variations can kick in.

    ______________

    GEM of TKI

  10. Let’s see:

    1. Does it provide means for us to better understand design and its possible centrality in the universe? Yes.

    2. Even if it’s not true, can we still learn more about design in the human world simply by studying it? Yes.

    3. Does it challenge established scientific (or “scientific” as some others would say) theories, a central tenet of the scientific enterprise? Yes.

    4. Does the fact that it has somewhat (unwarrantedly) become a center of controversy bring more attention to the matter of origins in the public domain? Yes.

    5. If it provides even some way for humans to better understand what we call the “natural” world, even though it might not be wholly true, does it still have merits? Yes.

    6. Does it hurt anyone that people with different ideas than the current status quo are researching issues pertaining to OOL, FSCI, macroevolution etc? No.

    So the answer to the OP question is: Absolutely!

  11. In Seelke’s lab, they are asking the question, “how far can evolution go?” and developing models and experiments to answer that question. For reductionists, the question is invalid.

    Why so? I’m not exactly sure what you mean here; can you explain?

Leave a Reply