
EA’s “oldie but goodie” short primer on Intelligent Design, Sept. 2003

Sometimes, we run across a sleeper that just begs to be headlined here at UD.

EA’s short primer on ID, drawn up in Sept 2003, is such a sleeper. Let’s observe:

__________

>> Brief Primer on Intelligent Design


Having read a fair amount of material on intelligent design and having been involved in various discussions on the topic, I decided to prepare this brief primer that I trust will be useful in clarifying the central issues and in helping those less familiar with intelligent design understand its basic propositions.

This is not intended to be a comprehensive analysis of intelligent design, nor is it intended to respond to criticisms.  Rather, this represents my modest attempt to avoid the side roads and the irrelevancies, and outline the fundamental central tenet of intelligent design, which is that some things exhibit characteristics of design that can be objectively and reliably detected.  It is my view that criticisms of intelligent design must focus on this central tenet, or risk missing the mark.  It is also with this central tenet that intelligent design stands or falls as a scientific enterprise.

Setting the Stage

As with so many issues, it is important to first define our terms.  In public debates, the term “intelligent design” is often incorrectly associated with anyone who believes that the Earth and all life upon the Earth were actively created by an intelligent Creator, and when used pejoratively, the term generates much more heat than light and adds no substantive insight to the discussion.

In a broader sense, the term might be applied to individuals who hold to a basic teleological view of the universe or the diversity of life on earth.  In this sense, many individuals believe in some form of intelligent design, including those who hold to an initial act of life’s creation, followed by naturalistic evolutionary mechanisms.

In yet a more concrete sense, the term is often used with respect to those involved in the modern intelligent design movement, including vocal proponents such as Philip Johnson and Jonathan Wells.  Although Johnson and Wells are certainly involved in the broader intelligent design movement, they largely use intelligent design as a tool for promoting change in current educational and philosophical frameworks.  This use of intelligent design as a tool for change has received by far the most press coverage and is at the heart of the often-heated debates over school curricula.  However, as intelligent design’s primary spokesperson, William Dembski, has pointed out, intelligent design’s use as a tool for change is secondary to intelligent design’s undertaking as an independent scientific enterprise.

Finally, therefore, intelligent design refers to the science of detecting design.  In this latter sense, intelligent design is not limited to debates over evolutionary theory or discussions of design in nature, but covers the study of signs of intelligence wherever they may occur: whether in archeology, forensic science, the search for extraterrestrial intelligence, or otherwise.  (Though not strictly limited to historical events, intelligent design argues that design can be detected in some things even in the absence of any reliable historical record or independent knowledge of a designing intelligence.  It is in this context that we wish to discuss intelligent design.)  Defined more tightly, intelligent design can thus be viewed as the science of studying the criteria, parameters and procedures for reliably detecting the activity of an intelligent agent.

Associated with this latter more limited definition are scientists involved in such a scientific enterprise.  These individuals include, probably most notably, Dembski and Michael Behe, and a number of other scientists who have begun to take notice of intelligent design as a legitimate scientific inquiry.

It is in this latter sense that I wish to examine the concept of intelligent design.

Basic Propositions

What then is the basic foundation and what are the basic propositions of intelligent design?

Intelligent design begins with a very basic proposition: some things are designed.  This is slightly more complicated than it sounds, but not much, if we keep a couple of points in mind.

First, one might object that many things appear to be partly designed and partly not.  This, however, is simply a matter of drilling down deeply enough to identify the discrete “thing” being examined.  For example, if we look at a stone wall we can see that it is made up of stones of various sizes and shapes.  Even if we assume that the stones themselves were not the product of intelligent design, we would conclude that they have been used by an intelligent agent in designing and building the wall.  Thus, in situations where something looks partly designed and partly not designed, we need simply drill down further and determine which aspect, portion, or piece of the “thing” we are evaluating.  In this example, are we examining the individual stones, or are we examining their overall arrangement, pattern, and resulting function?

Even if we are unable to break down a particular object or system into its component parts, and we end up with a “thing” that is partially designed and partially not designed, the initial proposition of intelligent design would remain essentially the same: some parts, or portions, or components of some things are designed.

Second, when we talk about the fact that some things are designed, we are not referring only to physical objects, but are referring to anything that is the subject of design, whether it be a physical object, a system, or a message or other representation able to convey information.  Thus if I took the same naturally-occurring stones, and instead of building a wall, I laid them out on the beach to spell a message, we would also have a clear indication of the actions of an intelligent agent, once again not in the stones themselves, but in the representation created by the stones and the information conveyed by that representation.

Given this basic proposition that some things are designed, intelligent design then asks the next logical question: is it possible to detect design?  As others have pointed out, if the unlikely answer is “no,” then we can only say that everything may or may not be designed, and we have no way of determining whether any particular item is or is not designed.  However, if the likely answer is “yes,” then this leads to a final and more challenging question that lies at the heart of intelligent design theory and intelligent design as a scientific enterprise: how does one reliably detect design?

Characteristics of Design and Limitations of Intelligent Design

What kinds of characteristics do things that are designed exhibit?  When we contemplate things that are designed – a car, a computer, a carefully coordinated bouquet of flowers – a number of characteristics might spring to mind, such as regularity, order, and beauty.  However, if we think for a moment, we can come up with many examples of naturally occurring phenomena that might fit these descriptions: the rotation of the Earth that brings each new day and the well-timed phases of the moon exhibit regularity; naturally-occurring crystals are examples of nearly flawless order; the rainbow or the sunset, resulting from the sun’s rays playing in the atmosphere, are paradigms of beauty.

To be sure, characteristics such as regularity and order might be strongly indicative of an intelligent agent in those instances where natural phenomena would not normally account for them, such as a handful of evenly spaced flowers growing beside the highway, or a pile of carefully stacked rocks along the hiking trail.  Nevertheless, because there are many instances of naturally occurring phenomena that exhibit regularity, order, and beauty, the mere existence of these characteristics is not necessarily indicative of design.  In other words, these are not necessary defining characteristics of design.

On the flip side, there are many things that are designed that do not exhibit any particular regularity or order, at least not in a mathematical sense, such as a painting or a sculpture.  There are also many objects of design that do not evoke any particular sense of beauty.  And this brings up an important limitation of intelligent design: we are not able to identify everything that is designed.

A related limitation arises in that we cannot say with certainty that a particular thing is not designed.  This is particularly true, given that many things are purposely designed to resemble naturally occurring phenomena.  For example, in my yard I have many rocks that have been purposely designed and strategically placed to resemble the random placement of rocks in a stream.  In addition, when I recently remodeled a room in my home, I used a faux painting technique – carefully designed and coordinated over the course of several hours – to resemble a naturally occurring pattern.

As a result, intelligent design is limited in two important aspects: it can neither identify all things that are designed, nor can it tell us with certainty that a particular thing is not designed.

But that leaves one remaining possibility: is it possible to identify with certainty some things that are designed?  Dembski and Behe would argue that the answer is “yes.”

Possibility versus Probability

In order to identify with certainty that something is designed, we must be able to define characteristics that, while not necessarily present in all things designed, are never present in things not designed.  It is in defining these characteristics and setting the parameters for identifying and studying these characteristics, that intelligent design seeks to make its scientific contribution.

We have already reviewed some potential characteristics of things that might be designed, and have noted, for example, that regularity and order do not necessarily define design.  I have posited, however, that regularity and order might provide an inference of design, in those instances where natural phenomena would not normally account for them, such as the handful of evenly spaced flowers or the pile of stacked rocks.  Let’s examine these two examples in a bit more detail.

Is it possible that this pattern of flowers or the stack of rocks occurred naturally?  Yes, it is possible.  It is also possible, at least as a pure logical matter, that the sun will cease to shine tomorrow morning at 9:00 a.m.  To give a stronger example, is it possible that the laws of physics will fail tonight at midnight?  Sure, as a pure logical matter.  But is it likely?  Absolutely not.  In fact, based on past observations and experience, we deem such an event so unlikely as to be a practical impossibility.

Note that in the examples of the sun ceasing to shine or the laws of physics failing we are not talking simply about unusual or rare events; rather we are talking about something so improbable that we, our precious scientific theories, and the very community in which we live are more likely to pass into oblivion before the event in question occurs.  Thus for all practical purposes, within the frame of reference of the universe as we understand it and the world in which we live and operate, it can be deemed an impossibility.  Dembski has already skillfully addressed this issue of logical possibility, so I will not review the matter further, except to summarize that in science we are not so interested in pure logical possibility as in realistic probability.  It is within this realm of probability that all science operates, and it is in this sense that we must view the probabilities relevant to intelligent design.

However, while we need not be concerned with wildly speculative logical possibilities, we might nevertheless conclude that the pattern of flowers or the stack of rocks is possible, not only as a matter of logical possibility, but also as a matter of reasonable probability, within the realm of our experience.  After all, there are lots of flowers on the Earth and surely a handful of them must eventually turn up evenly spaced as though carefully planted.  In addition, we have all seen precariously balanced rocks, formed as a result of erosion acting on rocks of disparate hardness, so perhaps our pile of rocks also occurred naturally.  We might admit that our flowers and our stack of rocks are rare and unusual natural phenomena, but we would argue that they are not outside of the realm of probability or our past experience.

Thus, the inference of design needs to get much stronger before we are satisfied that our pattern of flowers or our stack of rocks have been designed.

The Design Inference Continuum

Now let’s suppose that we tweak the examples a bit.  Let’s suppose that instead of a handful of flowers, we have several dozen flowers, each evenly spaced one foot apart along the highway.  Can we safely conclude that this is the product of design?  What about a dozen identical stacks of rocks along the hiking trail?  One might still mount an argument that these phenomena do not yet reliably indicate design because they could have been created naturally.  Nevertheless, in making such an argument we would be relying less on realistic probabilities and what we know about the world around us, and slipping closer to the argument by logical possibility.  This precisely the mistake for which Dembski takes Allen Orr to task.

Now allow me to tweak yet a bit more.  Let’s suppose that the dozens of flowers are now hundreds, each in a carefully and evenly spaced pattern along the highway.  At this point, the probability of natural occurrence becomes so low as to completely escape our previous experience; it becomes so low as to suggest practical impossibility.  Is it the sheer number of flowers that puts us over the hump?  No, it is not the number of flowers itself that provides evidence for design, but the number of spacings between the flowers, the complexity of the overall pattern, and the fact that these spacings and the resulting complexity are not required by any natural law, but are only one of any number of possible variations.  In other words, it is the discretionary placement of all of these flowers, selected from among the nearly infinite number of placements possible under natural laws, which allows us to infer design.  It is this placement of all the flowers, which gives the characteristics of specificity and complexity, and which Dembski terms “specified complexity.”  And it is in this realm of specified complexity that the probability of non-design nears impossibility, and our confidence in inferring design nears certainty.
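
To make this concrete, consider a toy calculation (the numbers are purely illustrative assumptions, not measurements).  Suppose each spacing between adjacent flowers must independently fall within a narrow tolerance to count as “evenly spaced,” and suppose, generously, that any one spacing has a 1-in-100 chance of doing so naturally.  The probability of the whole pattern then collapses exponentially with the number of spacings:

    # Toy model: probability that n independent spacings all fall, by chance,
    # within an "evenly spaced" tolerance. The 1% per-spacing chance is an
    # illustrative assumption, not a measured value.
    p_even = 0.01                    # assumed chance one spacing looks even
    for n in (5, 12, 100, 150):      # number of spacings between flowers
        print(n, p_even ** n)        # 5 -> 1e-10 ... 150 -> 1e-300

By a hundred or so flowers, the probability has fallen far below any reasonable probability bound.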

Yet, our examples can become even more compelling.  As a last modification, let’s suppose that the flowers are now arranged by the side of the road in the outline of the state of Texas, complete with Bluebonnets in the shape of the Lone Star.  Let’s suppose that our stacks of rocks are arranged so that there is one stack exactly each mile along the trail, or one stack at each fork in the trail.  Now we have not only specified complex patterns, but patterns high in secondary information content.  In the one case we have a shape that identifies Texas, a particular type of flower that signifies the state, and a star that is not just a pattern, but a pattern with strong symbolic meaning.  Along our hiking trail we have markers that carry out a function by providing specific information regarding changes in the trail or indicating the distance traveled.

Intelligent design, as a scientific enterprise, is geared toward this end of the probability continuum where the probability of non-design nears zero and the probability of design nears one.  In some ways, focusing only on the area of most certainty is a rather modest and limiting approach.  Yet design theorists willingly give up the possibility of identifying design in many cases where it in fact exists, in exchange for the accuracy and the certainty that a more stringent set of criteria bestow.  In this way, the design inference is lifted from the level of broad intuition to a focused scientific instrument with definitive testable criteria.

Conclusion

As a scientific undertaking, intelligent design is not in the business of identifying all things designed, nor is it in the business of confirming with certainty that a particular thing is not designed.  Indeed, intelligent design, and it is fair to say current human knowledge, is incapable of performing these tasks.  What intelligent design does seek to do, however, is identify some things that are designed.

We have seen that the argument to design is essentially an inference based on probabilities.  As a result, there is a continuum ranging from the likelihood of non-design to the likelihood of design.  At a certain point the probability of non-design nears zero and the probability of design nears one.  At that point we can say, the design theorist argues, with as much certainty as any other scientific fact or proposition, that the thing in question was designed.  It is in this area of specified complexity (of which high secondary information content and Behe’s “irreducible complexity” are examples) that the theory of intelligent design operates.

Criticisms of intelligent design based on social, religious, philosophical, or cultural grounds, including complaints about the identity, motives, or capabilities of the putative designer, miss the mark.  Design theorists argue that specified complexity can be objectively and reliably defined and detected so that the probability of non-design nears impossibility and the probability of design nears certainty.  This is intelligent design’s central tenet.  It is on this point, and only on this point, that intelligent design as a scientific undertaking can be appropriately challenged and criticized.  And it is on this point that Dembski, Behe, and others are confident that intelligent design will make its greatest contribution.

Eric Anderson

September 9, 2003>>

___________

It seems to me the matter was clear enough a decade ago, and the objections were sufficiently answered a decade ago.

Why are we still meeting the same problems, ten years later?

I want to suggest that this has more to do with unnecessary heat, unjustifiable polarisation and inexcusable clouding of issues, than with the basic substance on the merits. Can we learn from the mistakes made over these past ten years, and do better over the next ten years?

I hope so. END


62 Responses to EA’s “oldie but goodie” short primer on Intelligent Design, Sept. 2003

  1. Oldies, but goodies . . . thanks, EA

  2. Thanks, kf. I’m honored that you would highlight this.

    The arguments against design remain largely the same ten years on, including regular “complaints about the identity, motives, or capabilities of the putative designer,” and thus often miss the mark. Also, we saw recently (from Lizzie, I believe) a complaint about design based on false negatives, which reflects a misunderstanding of the design inference and is also addressed in detail above.

    I should add that if I were writing this today I would certainly include other names on the scientific list: Meyer, in particular, has done excellent work over the past several years; Axe, Gauger, Wells, Minnich and others.

    After a decade there is stronger evidence for design in biology growing all the time, with little substance from the objectors. In nearly all cases, objectors fail to even address the central issue. In those rare instances in which they home in on the real issue — can we lay out adequate parameters that allow us to reliably detect design in some instances — we too often see blatant denial of the existence of things like information and intelligence, bluffs about evolution’s alleged capabilities, and intelligence smuggled in through the back door in evolutionary algorithms. All these tactics point to the fact that intelligent design has struck a chord and that its central tenet is more alive and well than ever.

  3. “Johnson and Wells are certainly involved in the broader intelligent design movement, they largely use intelligent design as a tool for promoting change in current educational and philosophical frameworks”

    My recollection is that Johnson advocated taking the debate to academia (i.e. colleges and universities), because that’s where the agendas are set. I don’t recall him ever advocating pushing intelligent design into high school classrooms. I think the distinction is important.

  4. Nice essay, a much better intro to ID than the Wiki article.

    The only important element left out (it may have been vaguely hinted at in one sentence) is the third leg of specified complexity, besides the origin of life and its evolution — the fine-tuning of the physical laws & constants.  These are poised on the sharp tip of a needle, as improbable as anything in the biological world, yet tightly specified by the requirements of complex life, such as humans, which builds upon them.

    The importance of this third leg is threefold:

    a) No cheap, narrow solutions to a single problem (such as those of James Shapiro, which deal only with some forms of evolution) can serve as a coherent answer to the origin and nature of the intelligent agency behind these ID artifacts.

    b) Pointing to ID at the level of fundamental physics implies that the same intelligent act (presently seen as the Big Bang) started our universe in full anticipation of its much later fruits. That means the intelligence needed is much greater than what is needed to explain life.

    c) The tight requirements on physical properties, their continuation, and their latent complexification attributes mean that the same intelligent agency is acting continuously, including now, on all elements of the universe, upholding its life-supporting patterns (which are only partially reflected in our physical & chemical laws).

    The last point is essential in order to be able to challenge the common neo-Darwinian attribution of “randomness” to any mutation for which the cause is not known.

    The possibility that an intelligent agency is continuously and actively guiding every process that we describe with our current physical & chemical laws only in a coarse-grained manner (statistically) raises the proof bar on the “randomness” claim even for micro-evolution (such as the adaptation of bacteria to antibiotics), which is currently largely conceded to neo-Darwinian “random mutation”. The type of randomness proof required of neo-Darwinians, before they can claim it as explained, was illustrated in the “random” dice-tossing example earlier.

  5. With regard to random mutations and mutation rates, James Shapiro gives us some numbers:

    “The E.coli cell reproduces its DNA with remarkable precision (less than one mistake for every billion (10^9) new nucleotides incorporated) and at surprisingly high speed. The E.coli cell duplicates its 4.6 MB genome in 40 minutes (about 2,000 nucleotides per second), independently of the cell division time.”

    James Shapiro, Evolution: A View from the 21st Century, Kindle location 500

    Well, I’ll risk making some calculations and simplifying assumptions. Using 10^-9 as an estimate for the mutation rate across a 4.6 mb genome, there is a 0.0046 chance for some nucleotide to be copied incorrectly. This means that after around 217 replications, it’s more likely than not that some mutation will occur in some bacterium.

    The likelihood that the mutation will occur at a specific site is 1/(4.6*10^6), or around 2.17*10^-7, given an assumption of uniformity with respect to all nucleotides. Making the simplifying assumption that any nucleotide change would result in a specific amino acid substitution with equal probability, there’s an additional factor of 1/22 ~= 0.0454.

    Tallying the results that a specific mutation occurs in a single replication event, we have 0.0046 * 0.0454 * 2.17E-7 ~= 4.5E-11. Unless I’m in error, which is certainly possible, a specific AA substitution is more likely than not after around 22 billion replications. Squaring this number (two independent substitutions have a joint probability of p^2, so the needed number of replications squares as well) gives something close to 4.8*10^20, which as it happens is pretty close to Behe’s empirical 10^20 figure for chloroquine resistance, which I believe involved two AA substitutions. (The P. falciparum genome is about five times larger, however.)
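
    For anyone who wants to check the figures, here is a quick sketch of the same arithmetic (mine, under the same simplifying assumptions as above):

        # Sanity check of the arithmetic above, using the same simplifying
        # assumptions (uniform per-nucleotide error rate, equiprobable changes).
        rate = 1e-9             # per-nucleotide error rate (Shapiro's figure)
        genome = 4.6e6          # E. coli genome size in nucleotides

        p_any = rate * genome   # ~0.0046: some error in one whole-genome replication
        p_site = 1 / genome     # ~2.17e-7: the error lands at one given site
        p_aa = 1 / 22           # ~0.0454: assumed chance of one specific AA change

        p_sub = p_any * p_site * p_aa
        print(p_sub)            # ~4.5e-11 per replication
        print(1 / p_sub)        # ~2.2e10 replications (~22 billion)
        print((1 / p_sub)**2)   # ~4.8e20, close to Behe's 10^20 figure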

    Granted, things are certainly more complicated than those calculations suggest, but at the same time, if my math and assumptions are not far off the mark, there doesn’t appear to be a good reason to rule out random mutations as a sufficient causal phenomenon with regard to one or two AA substitutions. This does not imply that random mutation is the necessary cause for any given substitution however.

  6. Chance Ratcliff (5), solid math!
    I have some follow-up questions:
    - Do we also make the simplifying assumption that DNA repair systems are non-existent?
    - And do we also make the simplifying assumption that after this AA substitution there will be a functioning epigenome, metabolism etc. to accommodate the new code?

  7. How many bacteria in your population, Chance?

  8. My #5 is meant to address the latter part of nightlight’s #4, and the ongoing issue as to whether random mutations can be considered a sufficient cause of point mutations that result in one or two amino acid substitutions.

    While I certainly don’t suggest that any given mutation must have been random, it doesn’t seem reasonable to deny that random copying errors are sufficient for a couple of AA substitutions.

    One of nightlight’s criticisms of ID is the acceptance of random mutations as a sufficient cause for certain microevolutionary events; but sufficient causes do not imply necessary ones. I do not think it’s productive to take issue with neo-Darwinism over such trivial events that are implied by less-than-perfect replication events. The presence of specific mechanisms for error-correction in living systems implies that actual errors are implicit in the replication process.

    To again reference Shapiro:

    “The extraordinarily low error frequency results from monitoring the results of the polymerization process and correcting incorporation mistakes after the fact, not from the inherent precision of the replication apparatus. The DNA polymerase that incorporates nucleotides itself has an intrinsic precision of about one mistake for every 100,000 (10^5) nucleotides. Although this is impressive when compared to any man-made manufacturing process, the polymerase alone is at least four orders of magnitude less accurate than the final replication result. Ultimate precision is achieved by two separate stages of sensory-based proofreading:”

    He goes on to describe a two-stage process that increases copying fidelity from 10^-5 to the impressive 10^-9. If errors did not occur, there would be no point in correcting them. The presence of error-correcting systems suggests quite forcefully that replication errors occur. Until such errors are shown to be directed as opposed to random, accepting the sufficiency of random errors for such events is quite reasonable.
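
    To see how two proofreading stages could bridge that gap, a rough sketch (Shapiro gives only the endpoints; the even 100x split per stage below is my assumption for illustration):

        # Illustrative arithmetic: raw polymerase fidelity plus two proofreading
        # stages. The endpoints (1e-5 raw, 1e-9 final) are Shapiro's; the 100x
        # improvement per stage is an assumed split, not his figure.
        raw_error = 1e-5                   # intrinsic polymerase error rate
        stage_gain = 100                   # assumed error reduction per stage
        print(raw_error / stage_gain**2)   # 1e-09, the final observed rate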

  9. wd400, my numbers relate to the probability of a copying error in a single replication.

    The probability of error in a single replication: 0.0046
    The probability of the error occurring at a specific site: 2.17E-7
    The probability of the error resulting in a specific AA substitution: 0.0454

    Based on such a probability, at 1/P(E) replications I find that there is a 0.6321 chance of the event occurring, which is more likely than not.

    1 – (1 – P(E))^(1/P(E)) converges on 0.6321 (that is, 1 – 1/e) as P(E) gets small, unless I’m mistaken.

    Note that I’m not arguing against the sufficiency of random mutations to account for these types of substitutions — quite the contrary, I’m suggesting that in principle they can.

  10. Box @6, I think my #9 addresses your first question. As to your second,

    “And do we also make the simplifying assumption that after this AA substitution there will be a functioning epigenome, metabolism etc. to accommodate the new code?”

    Yes, in short. I wouldn’t suspect that a single mutation could foul epigenetic regulation, but I couldn’t say for sure either. So implicit in the assumptions is that such minor alterations are often tolerable to the organism, and if it happens that some mutation is detrimental to function, the organism fails.

    If you’re implying that it may not be possible for any mutation to occur without corresponding epigenetic modifications, that may well be true, but I have my doubts that small changes at that level would necessitate corresponding epigenetic modifications to prevent total system failure, at least in all cases.

  11. “Box @6, I think my #9 addresses your first question. As to your second,”

    Correction: that’s post #8 which addresses error correction.

  12. Chance Ratcliff #8: My #5 is meant to address the latter part of nightlight’s #4, and the ongoing issue as to whether random mutations can be considered a sufficient cause of point mutations that result in one or two amino acid substitutions.

    I got that from your #5, but I still don’t see how the alleged “randomness” (a fair pick in some probability space) was established here.

    Merely labeling a deviation from conformity as an “error” doesn’t mean it is actually an error, let alone a random error.

    For example, human social networks, as an analogue of cellular biochemical networks, have strong mechanisms for enforcing conformity on their members (social error correction, as it were), yet acts of ‘heresy’ (or political incorrectness, or lawbreaking, etc.) are not random, or errors, despite those “repair” mechanisms which push members to conform.  The choices of heretics or lawbreakers may be right or wrong, but they are still the result of an intelligent agent pursuing his happiness as he sees it. They are errors in the eyes of the customs or laws, but that doesn’t imply they are errors, let alone random acts, in the eyes of the heretic — they remain deliberate actions of an intelligent agency (even if we judge them a dumb thing to do).

    To really establish “randomness” as a scientific fact refuting all possible alternatives, you would need to know how to calculate the probability space (as illustrated in the dice example we discussed earlier) for the DNA molecule under those conditions, but that is a quantum mechanical problem beyond the reach of present science for molecules of that size, even without any environmental effects. Hence, all claims of “randomness” of spontaneous mutations are unscientific (opinions, wishful thinking).

    Merely measuring the mutation rates at different sites doesn’t by itself tell you anything about the intelligent or unintelligent nature of the guidance. It certainly is not a substitute for a proper quantum calculation that computes the probabilities of all possible changes.

    Among the possible alternative hypotheses blocking the leap to the “randomness” claim, one can view such mutations as deliberate attempts by the guiding intelligence to produce some variety as a result of more subtle anticipatory considerations; i.e., one can view the processes of exact copying and inexact copying as two kinds of processes with vastly different odds of prevailing, but each deliberately and intelligently guided.

    That’s like you deciding whether to get up for work at the regular time when the alarm rings (i.e. copy your daily pattern exactly), or to just stay in bed and do some thinking (make an error in the copy of the daily pattern) — the first may prevail almost always, yet both outcomes, the exact replica of the daily pattern and the inexact one, are results of deliberation by an intelligent agent.

    The general rationale for this type of objection is that if a biological phenomenon has an analogous counterpart in the social or technological realms, where it is intelligently guided, then one cannot use that biological phenomenon as “proof” or even as a mere “indication” of the randomness of the phenomenon.

  13. Chance Ratcliff #10: If you’re implying that it may not be possible for any mutation to occur without corresponding epigenetic modifications, that may well be true, but I have my doubts that small changes at that level would necessitate corresponding epigenetic modifications to prevent total system failure, at least in all cases.

    I’m willing to argue that naturalism cannot allow for any serious DNA mutation.
    Let’s assume that the change in DNA leads to a new code for a new protein. Does this not require an adaption of spliceosomes? And how will the new protein find its way to its proper new place? We know of protein import and sorting pathways at the membrane of mitochondria, don’t they (or other pathways) need new information – and so adaptation – regarding this new protein? The amount of the new protein needs to be regulated. Don’t we need new information and so new or (adapted) proteins to accomplish this regulation?
    I would argue that the fact that organisms are capable of accommodating new proteins points towards a reality which is unexplainable by naturalism.

    ‘The essence of cellular life is regulation: The cell controls how much and what kinds of chemicals it makes; when it loses control, it dies’,
    Michael Behe, Darwin’s black box, p.191.

  14. nightlight @12, I believe I showed that, based on current knowledge of cause and effect, random mutations are a sufficient cause for one or two AA substitutions. This is warranted and supported by two observations: 1) the presence of error-correcting mechanisms as part of the replication process suggests that errors actually occur; 2) observed error rates with respect to a specific substitution are able to account for one or two substitutions.

    If you’re trying to convince me that random mutations are not a necessary cause of mutations, then I already agree. Note that this is implied in my argument. If random mutation then substitution. This is a sufficient causal relationship, not a necessary one. This allows for other sufficient causes to potentially explain the consequent.

    Random mutations are a sufficient but not a necessary cause for a limited number of substitutions.

  15. Box, with reference to your #13, I don’t doubt that the creation of a new protein with a new function would potentially necessitate a whole suite of coordinated changes in other complementary systems, but I remain skeptical that a single AA substitution (or two) in a single protein that confers a contextual selective advantage cannot ever occur, or even that it is unlikely to. I’m not trying to be dogmatic about this point, because we may find many cases where presumed randomness surrenders to definite purpose in the face of new evidence. However, I just don’t think it’s reasonable, based on observations, to rule out that random mutations are sufficient to account for small changes. See my #14 to nightlight for more.

    I’d like to bring up Michael Behe’s First Rule of Adaptive Evolution: break or blunt any functional coded element whose loss would yeild a net fitness gain.

    For most of history such a question could not be investigated, but with the tools that have become available to molecular biology in the past few decades, a good start can be made on addressing it. The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. (I make a number of distinctions defining gain-, loss- and modification of function mutations, so for the complete story please read the paper.) Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.

    So even when we’re talking about beneficial mutations, often times these mutations confer an advantage by breaking something. The whole paper is here: Experimental Evolution, Loss-of-Function Mutations, and “The First Rule of Adaptive Evolution”. In cases where a loss-of-function mutation provides some net fitness gain given particular environmental stressors, we can consider the event may be directed, targeted, etc. — that’s certainly possible, but we can also consider that organisms have a property of robustness that allow them to continue functioning even when some system fails. Robustness is an engineering principle, and compatible with ID. Again I’m not suggesting that substitution mutations are necessarily random, just that the random factor is warranted as a sufficient cause. This doesn’t rule out other causes for such an event.

  16. Footnote to #14 and #15,

    I’m perfectly happy to entertain hypotheses which suggest other causes for presumed random events. But I really don’t think ID should be publicly challenging substitution events that occur within Behe’s Edge of Evolution. It’s a bit like arguing that a pair of dice that show a reasonable distribution might still be loaded in some way. Such may be the case, and evidence may present itself eventually; but until such a situation occurs, it’s not productive to accuse someone of cheating. However as an “internal” debate, I think it’s just fine to question just how random these mutations may or may not be. It’s just not where ID should be pushing its arguments, imo.

  17. Chance @16:

    Exactly correct. Because the design inference cannot demonstrate, and is not in the business of demonstrating, that something is not designed.

    As a result, it is possible, in a purely logical sort of way, that nearly everything is designed. But that is not a helpful notion in the current debate and we don’t gain any mileage going down that path, because (i) it is not the end of the spectrum where the design inference is focused, and (ii) there are lots of (potential) false negatives.

    The conclusion of design needs to remain squarely focused on, and limited to, those situations in which design is clearly present, using a reasonable probability bound.

  18. Chance,

    Sure – but your calculations appear to be based on what would happen to a single bacterium. There are ~10^9 E. coli in your average mL of culture broth. Quite a few chances for a hit on a mutation. (In fact, across the world as a whole, any given two-AA mutation probably happens every week.)

    I also don’t really follow your math. If the mutation rate is 10^-9, then the chance that any given nucleotide mutates in any generation is 10^-9, isn’t it?

  19. Chance Ratcliff #14: …random mutations are a sufficient cause for one or two AA substitutions. This is warranted and supported by two observations:

    1) the presence of error-correcting mechanisms as part of the replication process suggests that errors actually occur;

    2) observed error rates with respect to a specific substitution are able to account for one or two substitutions.

    On (1): error-correcting mechanisms are commonly seen in intelligently guided evolution at the social & technological levels — they enforce conformity, which doesn’t make non-conforming phenomena “random” or non-guided actions. Hence, one cannot use their presence in biology as proof or evidence of the randomness of deviations from the norm, either.

    This is a simple application of the general logical coherence requirement — any biological phenomenon that has analogues in other realms, where it is known to be produced by an intelligent process, cannot be concluded/inferred to be an example of a non-intelligently (randomly) produced phenomenon in biology.

    Otherwise one could apply that same “logic” claimed valid in biology to arrive at the analogous conclusion in the other realm, where it is contradicted by the known explanation (as an intelligently produced phenomenon). That is then a refutation of any such “logic” via reductio ad absurdum.

    The main weakness, though, is with (2). The AA substitution is an extremely narrow kind of transition, a speck in the combinatorial space of all chemically or physically accessible transitions of similar magnitude which don’t produce AAs (but any among countless other kinds of molecules).

    Consider an analogy — you find some ancient hand-copied manuscript with the word ‘sun’ written instead of the word ‘son’ (as the context, or other better copies, would require), obviously due to the scribe’s momentary loss of focus. But that doesn’t imply ‘sun’ is a “random” ink smudge which didn’t require an intelligent scribe, since it is still a valid word (like a valid AA), one shape among an astronomical number of all conceivable random ink smudges of that approximate size.

    Suppose now a Darwinist-style claim that no scribe is needed, since the whole artifact of the manuscript is produced via random ink smudges that were later selected in competition until a book came out that appears to have been written by a scribe.

    You can’t coherently oppose them by backing away to some higher-level designed units, say sentences, and holding onto the ‘sure thing’ position that all valid sentences are the result of intelligent action by a scribe, while conceding that anything below that level of complexity doesn’t require a scribe and can be produced by random smudges from ink drips.

    While it is true that a whole valid sentence is even more unlikely to be the result of random smudges than valid words in incorrect sentences, the supposedly ‘intelligent process’ that such a compromise suggests is incoherent and can be rejected as absurd.

    Namely, by such a ‘sure thing’ compromise theory, the ancient manuscript artifacts are produced by an intelligent agency which either writes correct full sentences, or every now and then leaves sentence-sized gaps in the text into which random ink smudges somehow drip and which, by chance, have the shapes of valid words seen in the rest of the text, yet still fall short of making a correct whole sentence.

    Besides the absurdity, allowing them to get away with ‘random smudges can create valid words and partial sentences without a scribe’ is a sure way to end up backing off next to holding onto whole valid paragraphs, then whole valid pages,… since they may eventually find an example of a partial sentence which has only one letter wrong and which could be repaired into a correct sentence via the experimentally established random smudge in not very many tries.

    At that point they have shown, with the key help of your initial concession that partial sentences don’t need a scribe, that random smudges can produce a whole valid sentence, which you claimed only a scribe can produce. Hence you now have to back off from the sentence threshold to holding onto the valid-paragraph threshold, etc.; i.e., by allowing a threshold for the no-scribe gap you have needlessly condemned yourself to a constant retreat and, eventually, a certain complete defeat, reached one step at a time, by expanding the scribe-less gaps you left for them to work on.

    The position I was suggesting is to hold that intelligent agency is active at all levels, at all times, at all points. Hence the intelligent agency leaves no gaps or ‘dumb segments’ where it is absent from the scene. Allowing for any such ‘dumb segments’ lets the Darwinians string them into larger ‘dumb segments’ via some demonstrable random link of low complexity, and thus gradually expand the size of the ‘dumb segments’, leaving you defending a ‘God of the gaps’ in constant retreat.

    Consequently, one should require that those claiming random smudges as the origin of manuscripts bear the burden of proof for their claim at every level. For any level of complexity at which they claim the sufficiency of randomness, they need to enumerate all possible random ink smudges of that level or size, then compute the odds of such random smudges forming letter shapes or word shapes… and then establish that the number of available tries for random smudging can produce even one valid letter, valid word… whatever their claim is.

    In other words, since they are claiming a particular kind of process as the origin of manuscripts at all levels, it is their burden to show it is probabilistically valid at all levels, not just above some threshold level, while conceding everything below it as a ‘dumb segment’ requiring no scribe.

    Otherwise, the intelligent scribe remains the most plausible consistent explanation across all sizes, since we know intelligent scribes can produce such manuscripts.

    That is a far stronger position to hold, since if they wish to claim some property of the process (such as randomness), the burden of proof is on them to show that such a property is probabilistically plausible, not on me to prove it implausible. It is also an internally much more coherent position, since it doesn’t hypothesize an absurd kind of intelligent agency which designs full sentences but also leaves sentence-sized gaps for random smudges to somehow form almost-correct full sentences.

    Within this stronger & more coherent uniform position, the intelligent agency is active at all times and at all levels, leaving no ‘dumb segments’.

    In case the opposition can properly demonstrate that some level of complexity is accessible to a random process, that still only makes the random process an alternative explanation at that low level of complexity, but not the sole explanation even there, since we never concede any ‘dumb segment’ — the agency doesn’t leave any gaps.

    Hence, they cannot use any concession (of dumb segments) to piggyback another level of complexity on top of it and push further, making you concede larger and larger ‘dumb segments’ (once you concede size X as a dumb segment based on its accessibility to a random process, you have no basis to hold onto size X+1 if they can show the accessibility of X+1 to a random process, under the assumption that any X is a ‘dumb segment’).

    Further, unlike the weak ‘sure thing’ position you propose, it is now the Darwinian-style explanation which appears absurd, by offering an alternative explanation only for small disconnected patches of complexity which they can show to be accessible to a random process. In contrast, you hold the sole coherent explanation for all levels of complexity.

  20. Hello nightlight,

    Most of this letter I wrote earlier, when I thought you had already left the Uncommon Descent blog altogether, because the answers to your questions here from the IDists were not very good.

    Here I come out of retirement from UD, just for you. As someone who has met most of the leaders of the IDM, eaten with and discussed with them, exchanged e-mails, answered questions from them and also questioned them (many of which were not answered), I am in a somewhat unusual position to respond to you. At one time during my late undergrad and graduate school days, I ate freely of IDist arguments and ‘theories,’ that is, until finally rejecting them as simplistic, idealistic, duplicitous and ultimately naïve.

    So, please take my advice with that confession provided at the start, since you are of course free to decide as you choose.

    The people here at Uncommon Descent do not live or participate in mainstream science. This is not a place for scholars, but for ordinary people who love and bow to ‘Intelligent Design Theory’ as a new scientific Icon. What you have already said about the difference between human ‘design’ and non-human ‘Design’ is good and clear. Unfortunately, they do not understand it, and they refuse it because they love to complain against the natural sciences and ‘naturalism.’ (I am also against ‘naturalism,’ but I use Eastern European approaches, not American models.)

    It seems to me that you, too, would like to challenge unbelieving scientists, because you are a believer. Of course, on the one hand I am for that position. On the other hand, ‘Intelligent Design Theory’ is so strange and contradictory that if you want to take it up and promote it, you will be excluded from the genuine scientific community. That is simply normal and proper.

    I write this because I have studied the matter a great deal and have even discussed it with people in Russia. It is not an Orthodox theory, not at all. It is against Catholicism and likewise against Orthodoxy, precisely on the subject of analogical meaning.

    If you would like to collaborate with me, or simply to make contact on this subject, you can follow the link on my name and find an address there. I will answer you with pleasure.

    When you acknowledge “the blurring between the map and the territory,” of course I agree. IDists do the same thing with ‘intelligence’ between ‘human-made’ and ‘non-human-made’ things. ‘Univocal predication’ is what they do, trying to force this as ‘orthodox’ for their Protestant revolutionary attitude. The vast majority of IDists are Protestant activists, many of whom are young earth creationists and most of whom have no training in natural or applied sciences.

    You highlight the problem of ‘infinite regression,’ which IDists have not satisfactorily answered.

    “MN would allow that chess playing program is an intelligent agency (agent).” – nightlight

    I’m not sure. It would seem to depend on how Americanised you are. The ideology of MN is an American invention, specifically, that of Christian ethicist Paul de Vries (1986). There are American theistic evolutionists/theists who accept the majority of evolutionary biology (if not those features that are clearly anti-theistic) that promote MN while being totally ignorant of ideological naturalism.

    You shared: “I am not sure whether it is semantics or on substance.”

    The ‘theory/hypothesis/ideology/paradigm’ of ‘intelligent design/Intelligent Design’ is highly semantic. Think about such terms as ‘code,’ ‘intelligence,’ ‘agent,’ etc. These are not common terms in physics, biology or chemistry. The IDM is proposing an entirely new semantic constellation from what most people currently use. They want to ‘REBEL’ against mainstream science and to manipulate ‘natural science’ so that it will include studies of ‘intelligence/Intelligence.’ Your comments noting this have been both insightful and helpful.

    Unfortunately, ideological-IDists won’t allow what you say to be spread amongst their ranks; it is too revealing of their twisting of words. ‘Design’ for non-human-made things is not ‘analogous,’ but claiming univocality with God’s Design/Creation of the world. They spin this as well as they can, but their insistence on the natural scientificity of ‘Intelligent Design’ is their downfall.

    Let’s not try to deceive ourselves. What they are really trying to convince you of is that “God exists,” the unnamed ‘God’ of the Greeks and Freemasons. That is Big-ID-implicationism in a nutshell. They don’t care which God/Allah/YHWH/Baha’i. I eto zhe tochno pochemu svyasheniki po mira otvergat takoi evangelesticheskaya-activistskaya Amerikanskaya teoria. Skrivat svoi muisli pod odeyalom, delaia vid shto eto yestestvenni Nauk. Eto zhe normalno ili iskazheno?

    On the one hand, good for IDists, as that is what their personal Good News approach (Matt 28:19-20) requires of them. The responsible and ‘orthodox’ position for Abrahamic believers, however, is to reject the Big-ID claims of ‘natural scientific proof/inference of God’s fingerprints.’ What then is ‘faith’ for, if we have Big-ID proving an Intelligent Agent (i.e. God) using natural science?

    There are the universal IDists who use Romans 1:20 as their weapon against both unbelief and belief different from their Protestant Evangelical American Triumphalism. Almost no one in this movement is Orthodox.

    “basing ID on “conscious” intelligence is like building a house on a tar pit, resulting in endless semantic debates advancing nowhere.” – nightlight

    Well-said!

    “Anything resting on ‘consciousness’ as its foundation is automatically outside of science, leaving it at best in the realm of philosophy.” – nightlight

    Perhaps ‘cognitive studies’ too?

    That is all for now; I have no more time. I have already spent too much time on this ‘Intelligent Design’ movement. They follow along as if behind an Ivan Susanin (his name is Phillip Johnson). Where to? Nowhere. How does one overcome ‘naturalism’ or ‘materialism’? Not with ‘ID theory’. There are many interesting theories from Eastern scholars, as well as from Western scholars, that would be useful for undoing the disenchantment in America.

    All the best,
    Gregory

  21. Quick correction: ‘code’ is a common term (e.g. the genetic code), but ‘intelligence’ and ‘agent/agency’ are not. The point is that the analogy is stretched too far, from ‘designed’ to ‘Designed’.

  22. wd400 @18,

    My reasoning is based on the error rate given a single replication event across the entire 4.6 Mbp genome. Population size is relevant if you want to cash it out in terms of replications.

    And the error rate of 10^-9 is per nucleotide copied. The number 0.0046 comes from the reasoning above: it’s the likelihood of a mutation occurring somewhere during the replication of a whole genome.
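
    To cash it out at population scale, a rough sketch (the per-replication probability is from the assumptions in my #5; the culture size is your figure):

        # Rough population-scale version of the arithmetic in #5: with ~1e9
        # cells per mL of broth, how many whole-population replication rounds
        # until one specific AA substitution is more likely than not?
        import math

        p_sub = 4.5e-11          # one specific AA substitution per replication
        cells_per_ml = 1e9       # E. coli per mL of culture broth

        n_needed = math.log(2) / p_sub    # ~1.5e10 individual replications
        print(n_needed / cells_per_ml)    # ~15.4 rounds of a 1 mL culture

    So yes, at population scale the event becomes likely very quickly, which is consistent with your point.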

  23. Chance Ratcliff #15: Box, with reference to your #13, I don’t doubt that the creation of a new protein with a new function would potentially necessitate a whole suite of coordinated changes in other complementary systems, (…)

    Chance Ratcliff, I brought forth my simple argument on several occasions, and you are the first one to respond, and I thank you for it. What my argument boils down to is that under naturalism beneficial DNA mutations are impossible.
    The days of DNA code as First Cause are over. We now know that the DNA code for protein X doesn’t operate on its own and is only a part of what can be viewed as a larger specific subsystem of the cell which contains highly specific information about protein X.
    My obvious point is that if it is necessary for DNA code X AND all the constituent parts of protein X’s subsystem to arise synchronously – and indeed it is necessary in order to function – then probability ratings will go through the roof exponentially faster than they already do when we just consider the DNA code in isolation.
    p.s. CR, excuse me for diverting attention from your central argument.

  24. Gregory @20:

    Nice first paragraph. However, questions have been well addressed here at UD. At least those that are rational.

    Think about such terms as ‘code,’ ‘intelligence,’ ‘agent,’ etc. These are not common terms in physics, biology or chemistry. The IDM is proposing an entirely new semantic constellation from what most people currently use.

    Absolute nonsense. These are common terms in every day language and are used in the ordinary sense of the words. All you need is a standard dictionary. There is nothing strange or unusual about it.

  25. Eric @17, thanks. As usual you frame the concepts as they relate to ID quite well.

  26. Gregory,

    Let’s not go back over that big ID little id crap again, please.

    Here at UD, and I’m sure I speak for everyone, since your departure we have all enjoyed some great discussions without having to waste time with your conspiracy theory.
    It just got really boring to end up with.

    Look, no one here is particularly interested in it; in fact, we probably only have one more person interested in discussing it than you do on your own blog, and that’s you!

    So why not do us all a favour and go back there.

    Besides that, hope you are well :)

  27. nightlight @19,

    “Consider an analogy — you find some ancient hand-copied manuscript with the word ‘sun’ written instead of the word ‘son’ (as the context, or other better copies, would require), obviously due to the scribe’s momentary loss of focus. But that doesn’t imply ‘sun’ is a “random” ink smudge which didn’t require an intelligent scribe, since it is still a valid word (like a valid AA), one shape among an astronomical number of all conceivable random ink smudges of that approximate size.”

    (My emphasis)

    It’s not a random ink smudge, but the error is random. The scribe did not intentionally impose it, he just was distracted at some point along the process and chose the wrong word. (I do this typing sometimes.) This doesn’t suggest that the copying process is random. Without debating the ontology of randomness, which I don’t intend to do, random events happen even in designed contexts. For instance, manufacturing errors occur, which necessitate a quality control cycle. This does not imply that the manufacturing process isn’t designed. It only implies that physical processes contain a degree of uncertainty, which presents as randomness. We wouldn’t step out onto the slippery slope of conceding that manufacturing processes were the result of random forces, just by admitting that random events can occur during the process. The presence of a quality control factor implies that errors occur and need to be dealt with to assure a higher level of accuracy than is intrinsic to the manufacturing process itself.

  28. Gregory@20

    “Hello nightlight…

    Here I come out of retirement from UD, just for you.”

    Oh wonder. Better break out the brass band there, nightlight.

  29.

    nightlight @19,

    “On (1): error-correcting mechanisms are commonly seen in intelligently guided evolutions at social & technological levels — it is a conformity enforcing mechanism, which doesn’t make non-conforming phenomena “random” or non-guided actions. Hence, one cannot use its presence in biology as a proof or evidence of randomness of the deviations from the norm, either.”

    I don’t buy the social analogy, so I don’t think that observing the steering behaviors of intelligent agents within social constraints means we must conclude that replication errors are not random. WRT technology, or practically anything which relies on a physical process, random occurrences are contingencies which can and do happen, and they are dealt with accordingly in whatever context they arise. None of this undermines the inference to design where it’s already warranted.

    I’ll repeat my thesis: Random mutations are a sufficient but not a necessary cause for a limited number of substitutions.

    To show that this is not the case, one might demonstrate that replication errors are actually targeted, specific changes that don’t occur uniformly but are goal-directed. Or one could show that copying errors do not result in AA substitutions. Neither of those seems likely. So my very modest claim, RM->S, holds up pretty well.

  30.

    Box @23, I definitely agree that as a system becomes more complex and specific, integrating a new function or a new “pathway” to function becomes rapidly more difficult, and the probability of incorporating it by random chance drops exponentially.

    “The days of DNA code as First Cause are over. We now know that the DNA code for protein X does not operate on its own and is only a part of what can be viewed as a larger specific subsystem of the cell, which contains highly specific information about protein X.”

    Indeed, that would seem to be the case. And I wouldn’t presume that random mutations can add up to new systems incrementally; I don’t think that’s feasible. I just wanted to take issue with the notion that we cannot attribute any perturbation at the single or double AA level to randomness, i.e., to replication errors. That just doesn’t seem reasonable.

  31.

    nightlight @19,

    “Besides absurdity, allowing them to get away with ‘random smudges can create valid words and partial sentences without scribe‘ is sure way to have to back off next to holding onto whole valid paragraphs, then to whole valid pages,… since they may eventually find an example of a partial sentence which has only one letter wrong and which could be repaired into a correct sentence via the experimentally established random smudge in not very many tries.”

    There’s no logical connection between accepting that random errors occur and supposing that they can construct whole volumes incrementally while preserving a definite meaning at each step.

    “Chance Ratcliff kicked the cat.”

    “Chance Ratcliff kicked the can.”

    If that sentence were a witness statement against me, a copying error might make me look rather innocent instead — a loss of information resulting in a net fitness gain. This in no way implies that any change to the text can result in an increase of information or a fitness gain, or that successive errors will do so as well.

    “At that point they have shown, with the key help from your initial concession that partial sentences don’t need a scribe, that random smudges can produce a whole valid sentence, which you claimed only a scribe can produce. Hence you now have to back off from the sentence threshold to holding onto the valid-paragraphs threshold, etc., i.e. by allowing a threshold for a no-scribe gap you have needlessly condemned yourself to a constant retreat and a certain complete defeat, eventually, reached one step at a time, by expanding the scribe-less gaps you left for them to work on.”

    I make no such concession that partial sentences don’t need a scribe, even by analogy. Random copying errors must be preceded by both a manuscript and a scribe. Allowing that a manuscript may be generally reliable even if a couple of random copying errors of individual letters occur in no way implies that either the manuscript or the scribe can be explained causally by randomness acting incrementally to produce function at each step.

    Let’s be clear. The claim of neo-Darwinism is that whole cellular subsystems (organelles and more) can be constructed incrementally in small steps, where each step is functionally advantageous in some environmental context. This presupposes an information-bearing system with the functional capacity to replicate itself. To say that random errors can occur during replication concedes nothing important. It’s just a statement of observation. To suggest that some error or two might produce a net fitness gain even if it results in a loss of function does not suggest that random errors can construct whole systems or the information which specifies them. There is simply no impetus to make that leap.

    My claim is modest. Replication errors beget substitutions. Again, this is sufficiency, not necessity. Accepting the sufficiency of a random error to produce an AA substitution does not concede that any substitution is necessarily random, nor that the information in which the error occurs is the product of random forces.

  33. Chance Ratcliff #27: “It only implies that physical processes contain a degree of uncertainty, which presents as randomness. We wouldn’t step out onto the slippery slope of conceding that manufacturing processes were the result of random forces just by admitting that random events can occur during the process.”

    That’s exactly the problem — from your ID perspective you don’t think it is a slippery slope, since you presuppose the intelligent context.

    But for the neo-Darwinian side, as well as for the curious outsiders listening to their counterpoint, you appear to be conceding scribe-less creation of meaningful novelty (such as a whole word, or an AA in DNA), i.e. your implied context isn’t their implied context. Substitution-generated novelty, like a mistyped word or a mangled sentence, is a much more intelligent error than a sequence of random smudges (which is what they are talking about under “random” alteration of DNA, a.k.a. “random mutation”). While you don’t assume the absence of a scribe in the creation of that meaningful error, it is taken and understood as such by the other side and the outsiders, from their perspectives.

    The only way to avoid getting trapped into the usual ND-style equivocation between “random typo” (by an intelligent agency, which is what you mean) and “random smudge” (which is what they mean by “random error,” and which they claim is how everything came together), followed by your inevitable backing off to ever higher thresholds of complexity, is to enforce the distinction between the above two meanings throughout.

    The only way to keep the two distinct is to use their “random smudge” and insist they prove that this, the random smudge of an ink drop in the absence of an intelligent context (scribe), is how the word ‘sun’ was produced on paper. To do that they need to model and enumerate all possible random smudges from ink drops and compute the odds of the ink pattern for ‘sun’, then match these odds against the number of tries available for the alleged smudging. Only then can they justify the claim that the whole novelty-generating process which produced ‘sun’ is random, i.e. capable of occurring in the absence of any intelligent agency.

    Translated to biology, one has to insist that intelligent agency is active throughout the AA substitution (just as the scribe was active when the wrong word ‘sun’ was put on paper). The AA substitution is a higher-level/intelligent error that can arise only within the activity of an intelligent process (such as DNA replication). The AA substitution is not just any random alteration of DNA, since there is an astronomical number of possible molecules which can be produced by truly random DNA alteration consistent with the laws of physics & chemistry, which is the kind of “random” process they claim explains life & its evolution.

    In short, with pervasive semantic ambiguities in this debate, one cannot drop the intelligent context, hence the intelligent agency acting through it, at any point. The intelligent agency is acting via this intelligent context throughout every transformation of a cell (or phenotype), just as the scribe is acting through the ink, pen and paper at all points in the writing of the manuscript (errors and all). If you drop the intelligent scribe out of the picture for the writing of the wrong word ‘sun’ instead of ‘son’, then you leave gaps which, according to your opponents, can produce the word ‘sun’ without a scribe (i.e. via a sequence of random smudges only).

    Intelligence in this type of system is additive, and surrendering even a sand-grain-sized bit of intelligent product to “random” process can be compounded, through many such grains, into a mountain of intelligent product. For example, any time they can connect a couple of such scribe-less gaps via a random smudge produced in the lab, you will be backing off to the next level of complexity as the one requiring the scribe, while everything lesser can occur scribe-less, via random process, needlessly conceding an ever greater domain to ‘random process’ as the creator of novelty.

    PS: my response to Gregory, sent right before this one, shows as ‘awaiting moderation’. Why is that?

  34.

    nightlight,

    “my response to Gregory, sent right before this one, shows as ‘awaiting moderation’. Why is that?”

    Sometimes this happens; for instance, if you include a lot of hyperlinks in the message body, it’ll trip the moderation filter.

  35. @Gregory #20

    Zdravstvuyte Gregory,

    Although my Russian is focused almost exclusively on reading math and physics literature (the prices of Russian textbooks were irresistible on my student stipend), I didn’t have any trouble following your Russian passages (they were also more colorful than the English sections). Although we each have one foot in Eastern European culture and one in Western culture, our migration paths went in opposite directions — you went from Canada to Russia, while I came from (a country formerly known as) Yugoslavia to the USA (for my second grad school). Either path is a form of mental reboot into a new OS, quite disruptive and disorienting at first, but very refreshing and stimulating over time. This straddling of the same two realms seems to have resulted in both of us often “fighting” against both sides in the ID vs ND war of ideas.

    Checking out your blog and some earlier posts in UD, I see strong resonances between your concept of “Human Extension” and several other thought currents, such as “Extended Phenotype” of Dawkins (which generalizes his ‘selfish gene’ and ‘meme’ patterns), “Omega Point” by Teilhard de Chardin, mystical egregore, social organisms, as well as the zeitgeist of ‘internet as a superbrain’ emerging from numerous authors more recently.

    All of these ideas (which go back at least to Aristotle) identify a very interesting organizing principle of the universe, albeit each capturing only a segment of the whole pattern. After pursuing these white rabbits (and a few others), each down its own trail, toward what seemed to be a common hidden treasure, each trail would somehow terminate unfinished, in a dead end, driving me on to the next one. I am beginning to suspect that this is how the process is supposed to go and how it will continue, although each trail appears at first as the final awakening into the real thing. My current “final” trail, which I call “Planckian networks”, combines the best insights of those that went before with a few new elements from fundamental physics (pregeometry), math & computer science.

    Thanks to the stimulating questions and discussion from the folks here at UD, the key elements of the “Planckian networks” model were sketched in a recent thread. The thread was unfortunately archived before I could gather links to the scattered posts into a coherent TOC as my concluding post there, so as a quick intro, here is how that goes:

    Planckian Networks intro

    #35.. Matrix, Leibniz, Teilhard de Chardin
    #58.. Model of consciousness, after death, Bell inequalities
    #64.. Harmonization, biochemical networks
    #67.. Free will, hybrid intelligence, quantum magic
    #92.. SFED, QED, quantum topics
    #95.. Carleman Linearization, SFED, panpsychism
    #98.. Additive intelligence, pregeometry, fine tuning
    #100. Consciousness after death, exceptional path, limits of theories
    #103. Self-programming networks
    #107. Internal modeling, physics & information (rock in mud)
    #109. Science, Russian dolls, mind stuff, internal models, laws vs patterns
    #116. Goal oriented, anticipatory systems from pattern recognizers
    #171. Attractors as memories, internal models, front loading
    #128. Digital physics, complexity science, laws vs patterns
    #141. Free will in fractal internal model, crossword puzzle
    #143. Participatory front loading
    #152. How it works, additive intelligence, composition problem
    #155. Quantum measurement theory vs Glauber
    #161. Limits of computations, irreducibility of laws, constraints, Fermi paradox
    #164. Levels & power of computation, Max Plancks, broken bone
    #165. Creation smarter than creator?
    #174. Ontological levels, Game of Life, chess
    #175. Genetic Algorithms vs Networks vs Dembski
    #179. CSI vs networks, capacity of NNs, stereotyping, knowability
    #182. Meyer, empirical consciousness
    #183. CSI vs networks, limits of Abel, Dembski
    #188. Counterexample for Abel
    #189. Thinking vs computing
    #192. Why simple building blocks

    Evolution process vs theory conflation

    #20.. Map vs Terrain
    #35.. Chess, consciousness vs computation
    #191. Concession on microevolution, dice example
    #214. Concessions, technological evolution

    Natural Science schema

    #101. Algorithmically effective elements vs consciousness
    #109. Science schema re-explained
    #117. Qualia, science
    #119. General algorithms
    #128. Necessary vs sufficient, algorithmic effectiveness
    #135. Algorithm semantics, parasitic elements
    #186. Meyer, why cringe?
    #210. Semantic wall (KF)
    #217. Meyer, citation
    #222. Meyer, sloppiness?
    #232. Meyer, inductive strength
    #233. Meyer’s leap
    #237. Wisdom of leap
    #240. Other links to intelligent mind
    #245. Leap details
    #247. Dembski, Mere Creation.. mind/intelligence conflation
    #250. Wisdom of leap vs James Shapiro, #255 more on edge
    #256. Key holders, missing ID hypothesis
    #262. Missing ID hypothesis

    PS: The previous copy of this post which had links for the above numbered posts is stuck in the moderation.

  36. There’s nothing original in the OP.

    It’s all been said before.

  37. Thanks for posting Eric’s primer, KF!
    @ EA
    That’s one of the best discussions of intelligent design I’ve ever read. Top marks for clarity and focus. I also enjoyed your link to Dembski’s takedown of Allen Orr. The incisiveness of his prose is quite delightful to read.

  38. PJ @ 26
    Agreed!

  39. @ KF
    It’s truly amazing that, with such careful expositions of the key issues (with the OP as a prime example), objectors to ID can with straight faces continue to sidetrack discussion, motive-monger, disdain correction, willfully misinterpret, and present patently infantile objections. Who can really say how the discussion will progress in the next ten years? For now I’m happy to take comfort in sound argumentation and provocative prose.

  40. Random/chance mutations vs. non-random/directed mutations: read “Not By Chance” by Dr. Lee Spetner.

  41. Gregory:

    A quiet, sad note.

    For cause, you do not have the trust of this thread owner, and so any further posts in Russian — I believe — or another language will be deleted. (And no, no stories that the post is harmless will be good enough. The danger is plain and no precedent will be set.)

    I only let what is above stand, as this is documentation of a problem.

    Please understand the problem, given the amount of abusive behaviour in and around the Internet regarding the design issue.

    KF

  42. Mung:

    When it was originally said almost a decade ago, it was probably quite innovative.

    I think it is also pretty clear and helpful, even today.

    Takes away ever so many excuses: one may send someone to this post and ask, now what is it that you say you do not understand about the basic premises and principles of the design inference again? (Next stop, WACs.)

    KF

  43. Optimus: I agree, this is an oldie but goodie; that is why I asked EA’s permission to re-post it. I’ll bet it will be studiously ignored or derisively dismissed at the usual objector sites. At this point, with someone pretending that objections to the censorship or expulsion of design thinkers, and to invidious associations with Nazism etc. (on a subject where there are serious issues of principle at work that need to be addressed in a sober fashion, instead of with the nasty well-poisoning tactics I objected to), are strange or incomprehensible [I -- for cause -- call that enabling behaviour, Madam EL . . . and the screen clips are there to show why], I am pretty short on expecting reasonable behaviour from, or extending sympathy to, objectors who act like that, as well as those who harbour them on any excuse. KF

  44. 44

    nightlight @33,

    “That’s exactly the problem — from your ID perspective you don’t think it is a slippery slope since you presuppose the intelligent context.”

    Actually ID infers the intelligent cause from the context — specific and complex arrangements of matter that are not amenable to undirected causes.

    “But for the neo-Darwinian side, as well as for the curious outsiders listening to their counterpoint, you appear to be conceding scribe-less creation of meaningful novelty (such as a whole word, or an AA in DNA), i.e. your implied context isn’t their implied context.”

    That’s just it. The efficacy of random causes to produce trivial results is intuitive to our perception and validated by our experience. Nobody is really surprised if a chaotic arrangement of Scrabble tiles reveals the word “cat” or “tip” or perhaps even “fore,” but the arrangement “Went for a walk, be back soon” is immediately understood to be a message arranged by agency for a purpose. This is the same sort of arrangement we find in DNA code — specific and complex sequences for the purpose of specifying biological machinery. I can hardly see the benefit, as a matter of general reasoning, of attributing both random and purposeful effects to intelligence.
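    A rough back-of-the-envelope version of that intuition, assuming uniform draws from a 26-letter alphabet (ignoring spaces, punctuation and real Scrabble tile frequencies, a deliberate simplification):

```python
# Odds of a specific letter sequence arising from uniform random draws
# over a 26-letter alphabet. Not a model of real Scrabble tiles; just a
# rough illustration of how odds scale with sequence length.
def one_in(target: str) -> float:
    letters = [c for c in target.lower() if c.isalpha()]
    return 26.0 ** len(letters)

for phrase in ("cat", "fore", "Went for a walk, be back soon"):
    print(f"{phrase!r}: about 1 in {one_in(phrase):.1e}")
```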

    “The only way to avoid getting trapped into the usual ND-style equivocation between “random typo” (by an intelligent agency, which is what you mean) and “random smudge” (which is what they mean by “random error,” and which they claim is how everything came together), followed by your inevitable backing off to ever higher thresholds of complexity, is to enforce the distinction between the above two meanings throughout.”

    I’m not really sure what that means. ID does not become trapped by attributing random effects to random causes. I seriously doubt that the real equivocation — attributing effects explicable by random events to intelligent causes — is less confusing to the lay person.

    There are many actual examples of random causes having some effect on designed systems, and this reasoning is so generally intuitive that I can hardly see a downside to allowing that replication errors produce actual errors; it’s pretty much inherent to biology in general. There are scores of “loss of function” mutations which may or may not produce a net fitness gain given some environmental factor, as with malaria. There are also genetic diseases with no fitness value at all. If these can be caused by random mutations, should we attribute them to intelligent causes as well? Losing the distinction between random factors and intelligent causes seems genuinely unhelpful here. I’m fairly sure that no advantage comes from attributing intelligent causes to events which can be explained by replication errors and other events for which randomness is a sufficient cause. However, I’m sure we won’t reach agreement on this point.

    ID reaches design inferences by the examination of effects which are not attributable to random causes, such as the complex arrangement of long strings for the purpose of performing a function or transmitting a message. This lines up well with our uniform experience — such arrangements are the products of intelligent beings. Trying to assign noise an intelligent cause is not productive to ID methodology.

    Regardless of our disagreement, my claim still stands. Random mutations are a sufficient but not a necessary cause for a limited number of substitutions. We’ll have to agree to disagree as to whether this distinction is favorable to design inferences.

    P.S. Besides replacing random events with intelligent causes, are there any significant distinctions to be made between your view of biological reality and Darwinian evolution?

  45.

    KF @42, I suspect Mung was being tongue-in-cheek in the exact context you specify. ;)

    I wonder why Eric Anderson, a clear, insightful and long-time design thinker and commenter here at UD, is not authoring posts instead of just commenting.

  46. F/N: Re NL and claims on semantic walls, here. Sadly, he is recycling long since answered (often, corrected) assertions; e.g. on definitions of science. A by now all too familiar pattern. KF

  47. CR: I second the motion in re EA. KF

    PS: Good job with NL above, and yup, Mung is probably being tongue-in-cheek; but unfortunately the thing he describes is all too real. Hence the OP as a point of reference.

  48.

    KF, thanks. The two of us are pretty much going ’round in circles now, but the conversation has been fun and challenging.

  49. Chance Ratcliff #44: “I’m fairly sure that no advantage comes from attributing intelligent causes to events which can be explained by replication errors and other events for which randomness is a sufficient cause. However, I’m sure we won’t reach agreement on this point.”

    Although we’ll likely end up agreeing to disagree, let me crystallize three main reasons why an ID position which is tenable in the longer run (instead of one which has to keep backing off, being squeezed into ever narrower gaps) requires continued activity of ‘intelligent agency’ at all times and all places, from physical laws and up, so-called “errors” included.

    To distinguish below between this type of “universal ID” (U-ID) and the conventional ID, I will label the latter as “part-time ID” (PT-ID; due to allowing for absence of intelligent agency some of the time; the extreme point of PT-ID is classical deism, the ultimate part-time). Alternative labeling which would fit as well is hard-ID vs soft-ID. I will use U-ID vs PT-ID.

    1) Neo-Darwinian theory (ND = RM + NS) is hitching a free ride on top of an already highly intelligent system: cellular biochemical networks (CBN). These are intelligent networks, i.e. distributed self-programming computers running anticipatory algorithms of the same general kind as the human brain (both are modeled in these key traits by neural networks).

    ND has picked out one positive feedback loop, M(utations) + NS (natural selection), which is one of CBN’s intelligent algorithms, and declared it the sole driver of evolution. But then they also gratuitously attach the parasitic attribute “random” to the M(utation) half, i.e. change M ==> RM. Their motivation for this over-specification M ==> RM is purely ideological, serving to promote atheism (with all its social and moral corollaries).

    The U-ID requirement that intelligent agency (whose immediate tools or technology at this level are the CBN-s) be continually active during any M-process (mutation), guiding and shaping it toward some anticipated objectives, calls ND on the above critical sleight of hand, M ==> RM.

    Namely, U-ID can’t let them change M ==> RM without legitimate proof and explicit elimination of the ‘intelligently guided’ M-process (mutation), since M vs RM is a perfectly falsifiable distinction. The falsification requires modeling & computing the probabilities of all possible adjacent states of DNA (e.g. via the quantum theory of molecular transitions) and establishing that the actual M-processes (mutations) observed are a fair sample from this large event space. This is exactly the same type of falsification one would use to test a fairness or randomness claim about rolling dice (such as the dice example discussed earlier).
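    A minimal sketch of that kind of fairness test, using dice and an ordinary chi-square goodness-of-fit check (the roll counts below are invented for illustration):

```python
# Chi-square goodness-of-fit test: are these die-roll counts consistent
# with a fair (uniform) die? The observed counts are invented.
observed = [18, 22, 17, 25, 19, 19]   # counts for faces 1..6
expected = sum(observed) / 6
chi2 = sum((o - expected) ** 2 / expected for o in observed)

CRITICAL_5PCT_DF5 = 11.07  # chi-square critical value, df = 5, alpha = 0.05
verdict = "consistent with a fair die" if chi2 < CRITICAL_5PCT_DF5 else "fairness rejected"
print(f"chi2 = {chi2:.2f}: {verdict} at the 5% level")
```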

    Presently they cannot prove anything of the sort (since quantum modeling of such large molecules is far beyond current techniques), hence ND’s M ==> RM bluff should be called, and rejected as an extraneous (ideological) addition. Namely, there is no falsifiable or empirical effect they can point to if one were to simplify their theory via the reversal RM ==> M, i.e. strip away the extra attribute R that they have gratuitously added to the M-process.

    If they can’t show an empirical or falsifiable difference between M and RM, then what is left is the general M-process, which leaves the ‘intelligently guided’ (IGM) and ‘random’ (RM) M-processes on equal scientific footing. Hence, they have no scientific basis for claiming that the “random” attribute of the M-process is more scientific than the “intelligently guided” attribute (which is the hypothesis of U-ID’s continuously active intelligent agency).

    That’s the vital point that PT-ID (conventional ID as expressed in your and other posts) is needlessly surrendering (and, unsurprisingly, losing in courts). There is no need for that concession, since RM and IGM are both elements of M with equal a priori standing, absent any falsification (which requires the above probabilistic procedure for evaluating the fairness of the observed samples of M-processes). Hence there is no scientific reason why ID should lose in courts as being less scientific than ND, provided ID is of the U-ID branch, hypothesizing a continuously active intelligent agency involved in IGM as an alternative to the RM hypothesis.

    2) Intelligent networks such as CBN-s (like those underlying them via physical laws) have additive intelligence; hence enlarging these networks, or merely reverse-engineering them to reveal more existing detail, increases their guiding intelligence and its tools. For example, once CBN-s construct the multi-cellular organism ‘technology’ with sexual reproduction, suddenly the sensory organs and the resulting mate-selection ‘technologies’ of CBN-s can augment their capability to guide the DNA transformations between generations even more optimally. Of course, even lower-level technologies (revealed via reverse engineering of CBN-s), such as ‘horizontal gene transfer’, endosymbiosis, etc., augment the intelligence of CBN-s.

    As the number of such demonstrable mechanisms increases, PT-ID will keep conceding the effects of such mechanisms as phenomena “naturally” explained, hence not requiring the actions of an intelligent agency. In contrast, U-ID sees all such mechanisms as technologies or tools being created and operated by the continuously active intelligent agency.

    Unlike PT-ID, U-ID doesn’t leave any gap for an ever expanding “natural” process versus a shrinking “intelligent” process. With U-ID, every process is a manifestation of the ongoing intelligent activity, upholding our physical, chemical, biological, … processes at all times and all levels, at the razor’s edge (this metaphor is typically used for fine-tuned physical constants; within U-ID it applies at all levels).

    There is no “natural” vs “un-natural” or “super-natural” distinction within U-ID. There is a single type of activity by the underlying intelligent agency, the pattern which we only know to a greater (“natural”) or to a lesser (“super-natural”) degree at different stages of harmonization at our level.

    Hence, with U-ID, any such discoveries of new biochemical and higher optimization mechanisms demonstrate an ever increasing level of intelligence needed to design, build and operate them. Such discoveries expand and amplify U-ID, rather than shrinking and weakening it as they do PT-ID.

    3) ID is not only about biological evolution, but also about the origin of life and the fine tuning of physical laws (including physical constants). U-ID spans all of those, since it requires the common intelligence to uphold all those levels in operation at all times and all places. Hence, nothing exists from our physical level up without the continuous action of the ‘intelligent agency’.

    Within U-ID, all our laws are merely coarse-grained regularities of the “thought patterns” of the intelligent agency. What is outside our present laws is not super-natural, but merely a less known/understood aspect or feature of the same pattern of intelligent activity.

    In contrast, PT-ID concedes present physical laws as “natural,” requiring no continuous intelligence to run. Hence, any time physics expands to explain yet another finely tuned physical constant, PT-ID will have to back off from claiming the need for an ‘intelligence’ there. I.e., PT-ID will repeat the same shrinking pattern it exhibits at the level of evolution: any time a specific mechanism is uncovered (reverse-engineered), the space for the actions of the ‘intelligent agency’ diminishes.

    In summary of reasons (1)-(3): PT-ID is a weak position, self-condemned to keep shrinking as science expands, and to keep losing in courts. Its natural endpoint in the long run is classical deism — an intelligent agency which designed and set the universe into motion in the initial act of creation, then got out of it, the ultimate form of part-time ID, shrunk to a point.

  50.

    nightlight @19,

    “That is a far stronger position to hold, since if they wish to claim some property of the process (such as randomness), the burden of proof is on them to show that such property is probabilistically plausible, not on me to prove it is implausible…”

    I actually agree with that statement pretty strongly. As a matter of fact, ID proponents regularly lean on Darwinists to show that random mutations can build complex structures. I do believe that there is a solid burden of proof upon those who make a positive claim that a natural process, whether by chance or necessity or some combination of the two, can account for certain patterns otherwise consistent with the activity of intelligent beings.

    “…It is also internally much more coherent position, since it doesn’t hypothesize an absurd kind of intelligent agency which designs full sentences, but also leaves sentence size gaps for random smudges to somehow form almost correct full sentences.”

    This is the harder part to relate to. Transference of information implies errors — noise, the loss of fidelity. This is a directional movement from states of order to states of disorder. It’s part of our uniform and repeated experience. Systems based on physical processes are error prone, and all types of measures go into assuring the fidelity of information transfer. We find exactly these sorts of countermeasures in the replication processes of biological systems. In the case of E. coli, there is a two-stage error correction mechanism, one stage of which involves a protein cascade for signalling error detection and performing the subsequent correction. These are specific, complex, interacting hardware elements whose purpose is to reduce the intrinsic DNA polymerase error rate of about 10^-5 down to an impressive 10^-9. It’s hard to see why such a definite, physical, mechanical process for error correction should exist if we can attribute intelligent cause to the errors in the first place.
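    A toy composition of staged error rates, with per-stage factors chosen purely to reproduce the 10^-5 to 10^-9 figures above (not measured values):

```python
# Toy composition of staged error correction: each stage lets through only
# a fraction of the errors that reach it. The per-stage miss fractions are
# illustrative, picked to take 1e-5 down to 1e-9 over two stages.
rate = 1e-5                    # intrinsic polymerase error rate (per base)
print(f"intrinsic rate: {rate:.0e}")
for i, miss_fraction in enumerate([1e-2, 1e-2], start=1):
    rate *= miss_fraction      # stage i misses this fraction of errors
    print(f"after stage {i}:  {rate:.0e}")
```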

  51.

    nightlight @49, just some quick comments here while I have time.

    “Although we’ll likely end up agreeing to disagree, let me crystallize three main reasons why an ID position which is tenable in the longer run (instead of one which has to keep backing off, being squeezed into ever narrower gaps) requires continued activity of ‘intelligent agency’ at all times and all places, from physical laws and up, so-called “errors” included.”

    Noting your parenthetical statement, since you’ve mentioned ID in the context of narrowing gaps before, I’ll say that ID explanations are not being squeezed by narrowing gaps, because ID doesn’t rely on explanatory gaps — such a statement presupposes that all of biology can be explained by physical law, and that ID just attempts to account for what cannot be explained by mechanistic processes. Such is not the case. There are no viable physical explanations for either the origin of life or the subsequent diversification of it, not even close. ID offers a better causative element, one that can be currently seen in operation. The necessity for design as a cause of functionally complex and specified organization and information (there are numerous examples apart from biological systems) is intuitively obvious, and no viable alternatives exist. This fact is becoming more illuminated as discoveries unfold, discoveries of biological nanotechnology, and of signalling systems, elements that James Shapiro uses terms like “cybernetic” and “cognitive” to account for. All of our direct experience indicates that such systems require intelligent design. ID is not being squeezed at all, but rather the arrow of time has been favorable with regard to new discoveries, always pointing toward new systems and interactions more sophisticated than previously thought. Nowhere is it apparent that we’re converging on a simpler account of biology; the trend is moving in the other direction, with new functions being discovered regularly, for DNA elements previously considered “junk” by many. The list goes on: epigenetics, plasticity, homoplasy, etc.

    Neither is ID being squeezed in the shrinking gaps of random occurrences, since it has no stake there. Even if it’s shown that presumed random elements are not as random as imagined, design grows as a result; it does not shrink like neo-Darwinism, nor is ID bound to NDE’s fate.

  52. CR:

    “Transference of information implies errors — noise, the loss of fidelity. This is a directional movement from states of order to states of disorder. It’s part of our uniform and repeated experience. Systems based on physical processes are error prone, and all types of measures go into assuring the fidelity of information transfer. We find exactly these sorts of countermeasures in the replication processes of biological systems.”

    Well said.

    We know that temperature exists (as a basic physical property), and so we have a framework in which there is a credible, random statistical distribution of translation, rotation and vibration energies at molecular levels, in any material system. Linked to this, we know per basic communication theory that noise is a characteristic of any informational, communication system so that there is a proneness to error which we have every reason to accept will follow statistical distributions characteristic of chance.

    Going further, there are observed, evidently accident-driven, rates of error in relevant processes of protein synthesis, etc. Similarly, in genome replication, we have reason to believe that there are errors that get into populations and indeed are used to trace the distribution/ancestry of human populations.

    It is a reasonable inference that — absent decisive evidence to the contrary — such variations are chance occurrences.

    Ironically, that can even be built into the design of the living system, as within an island of function it may be useful to have built-in robustness and adaptability, due to the ability to shift around within the zone of function. Cf. here, dogs and, it seems, the Red Deer family — which includes the North American Elk. Circumpolar species may also be a similar example.

    That is no problem for a design-centric view.

    Nor is the role of error correction systems a problem: a well known feature of the technology of code-based communication systems, they work to keep the phenomenon within bounds.
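    A minimal sketch of the principle, using a 3x repetition code with majority-vote decoding over a simulated noisy channel (the per-bit error probability is made up):

```python
import random

# Minimal sketch of code-based error correction: a 3x repetition code with
# majority-vote decoding over a noisy channel. P_FLIP is an assumed figure.
random.seed(1)
P_FLIP = 0.05  # assumed per-bit flip probability

def send(bit: int) -> int:
    """Transmit one bit over the noisy channel."""
    return bit ^ (random.random() < P_FLIP)

def send_coded(bit: int) -> int:
    """Transmit three copies and decode by majority vote."""
    return 1 if sum(send(bit) for _ in range(3)) >= 2 else 0

TRIALS = 100_000
print(f"raw error rate:   {sum(send(0) for _ in range(TRIALS)) / TRIALS:.4f}")
print(f"coded error rate: {sum(send_coded(0) for _ in range(TRIALS)) / TRIALS:.4f}")
```

    The raw rate lands near the channel figure, while the coded rate drops to roughly 3p^2, the expected behaviour of majority voting: noise is still present, but the designed mechanism keeps it within bounds.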

    All of this does not undermine the basic concerns and challenges relating to the Darwinian macro-evolutionary view: the ability of life forms to find islands of function, in vast config spaces, for the organised complex components of life, i.e. the origin of body plans. This starts with the body plan of the very first living cell. No roots, no shoots, branches or twigs.

    And that is why the implied concept of a vast connected continent of function, traversable incrementally by the said tree of life, is so pivotal to the whole evolutionary-materialism-dominated Darwinist view of the origins of the world of life. The Darwinist view implies such, but there is no good reason to infer it from the fossil record — which is dominated by suddenness of appearance, stasis of forms [with variation being within the form], gaps and disappearances.

    The actual evidence of the world of life, and the wider evidence of complex, functionally specific systems, is that such systems are characterised by the requisites of well-matched parts properly arranged and coupled together for function to emerge or be present.

    That is, there is good reason to see that the implied tree-of-life picture is not credible at all. Islands of function in config spaces are a reasonable and empirically based expectation.

    This speaks to the issue of noise and its effects, which beyond certain limits are deleterious.

    It may even be associated with something very much like Sanford’s genetic entropy: gradual deterioration of the genome, leading to exhaustion of the functional capacity of the species and vulnerability to disappearance.

    Indeed, the problem of over-specialisation and loss of robustness crops up too.

    I do not think it is an accident that some of the more spectacular varieties of dogs, horses, goldfish, etc. show signs that they are at the limits for the species. The situation of top-class race horses that have to live in air conditioning, as they lack adequate ability to sweat, is just one example.

    Behind all of this is the very opposite of God of the gaps reasoning.

    Which you noted:

    ID explanations are not being squeezed by narrowing gaps, because ID doesn’t rely on explanatory gaps — such a statement presupposes that all of biology can be explained by physical law, and that ID just attempts to account for what cannot be explained by mechanistic processes. Such is not the case. There are no viable physical explanations for either the origin of life or the subsequent diversification of it, not even close. ID offers a better causative element, one that can be currently seen in operation. The necessity for design as a cause of functionally complex and specified organization and information (there are numerous examples apart from biological systems) is intuitively obvious, and no viable alternatives exist. This fact is becoming more illuminated as discoveries unfold, discoveries of biological nanotechnology, and of signalling systems, elements that James Shapiro uses terms like “cybernetic” and “cognitive” to account for. All of our direct experience indicates that such systems require intelligent design. ID is not being squeezed at all, but rather the arrow of time has been favorable with regard to new discoveries, always pointing toward new systems and interactions more sophisticated than previously thought . . . .

    Neither is ID being squeezed in the shrinking gaps of random occurrences, since it has no stake there. Even if it’s shown that presumed random elements are not as random as imagined, design grows as a result; it does not shrink like neo-Darwinism, nor is ID bound to NDE’s fate.

    Well said.

    KF

  53. NL:

    I add one little point to CR’s response to you above, in re your:

    There is no “natural” vs “un-natural” or “super-natural” distinction within U-ID. There is a single type of activity by the underlying intelligent agency, the pattern which we only know to a greater (“natural”) or to a lesser (“super-natural”) degree at different stages of harmonization at our level.

    The pivotal problem here is empirical detectability, multiplied by a failure to see/acknowledge the significance of inference to best explanation in light of empirically grounded reliable signs. (I have already had to challenge your attempted definition of science and its methods of investigation and reasoning, cf here and here onward when you came back on much the same ground with the same basic problems.)

    What you are in effect doing here is reverting to Gould’s NOMA idea. Which fails.

    For, in a case where there are no detectable, reliable signs that distinguish between chance and/or mechanical necessity and design, we have a case of worldview-level faith and a matter of ungrounded decision. The default will be to impose evolutionary materialism and to dismiss an assumed behind-the-scenes intelligence — one not supported by discriminating evidence amenable to observation — as a myth. In short, such an inference is operationally indistinguishable from a priori evolutionary materialism.

    The real world situation is quite different.

    We do have such things as may be properly characterised as “natural” and those that may be characterised as “ART-ificial,” with empirically observable discriminating evidence that can be used to construct and empirically ground an explanatory filter. That is, there are such things as empirically reliable signs of design.

    Namely, first, the natural can properly be characterised by that which follows stochastic laws allowing for chance, necessity or a combination of the two. A dropped heavy object falls, reliably, at an initial acceleration of 9.8 N/kg here on Earth. The Moon swings by in the sky at a rate reducible to the same, attenuated by the spreading of the flux of the relevant field with distance through the surface of a sphere. As Newton observed and inferred in the 1660s. Where the object falling on Earth’s surface is a fair die, the uppermost face after tumbling is effectively the result of a random distribution, tracing to uncontrollably small variations in initial circumstances which, through sensitive dependence on initial and intervening conditions, lead to such an outcome.

    Such can easily be observed and are very properly ascribed to chance and necessity, without inferring to an ultimate cause of the existence of such things. (We will get to that later, in its place.)

    But if we found a tray with 200 dice in it, neatly arranged in a linear pattern encoding the first 72 or so characters of this post in succession [e.g. pairs of dice [36 states] could easily be mapped to the letters of the alphabet [26] plus the decimal digits [10] . . . cf. Vietnam-war-era prisoner knock-code matrices, as well as how alphabetic characters were classically used to represent numbers, in effect a -> 1, b -> 2, etc. Indeed ASCII retains some features of that; cf. the discussion and table linked here, which shows the ASCII table in a context of grounding digital productivity], we would very properly infer to another empirically grounded causal factor that is well able to do such, and where the other source of high contingency, chance, is not a credible explanation, per the deep isolation of such islands of function in the space of possible configs.
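    A quick sketch of that dice-pair code in Python (the particular symbol ordering is an arbitrary choice for illustration):

```python
import string

# Sketch of the dice-pair code described above: an ordered pair of dice
# has 6 * 6 = 36 states, enough to cover 26 letters plus 10 digits.
SYMBOLS = string.ascii_lowercase + string.digits   # exactly 36 symbols

def decode(d1: int, d2: int) -> str:
    """Map an ordered dice pair (each die 1..6) to one of 36 symbols."""
    return SYMBOLS[(d1 - 1) * 6 + (d2 - 1)]

print(decode(1, 1), decode(3, 2), decode(6, 6))   # -> a n 9

# A tray of 200 dice has 6**200 (about 4.3e155) possible configurations,
# of which any specific 100-character message occupies exactly one.
print(f"6**200 = {float(6**200):.3e}")
```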

    That is what the design inference is about, and it is a fairly simple issue of emphasising what we may directly support through empirically based, inductive reasoning.

    The onward inference therefrom, to cases where we were not present to observe directly the actual events — the situation in many criminal trials, and that of historical sciences and origins science — is simple. Once we have a reasonable framework of inference on tested, reliable signs, we may use the signs to infer per best empirically warranted observation.

    That is how modern geology was founded, and it is how Wallace and Darwin reasoned, all of which has antecedents in Newton’s well known four rules of reasoning.

    The surprise in the process, for the committed a priori materialist and his fellow travellers, is that there are identifiable, reliable signs of ART as cause, i.e. of design, such as functionally specific complex organisation and/or associated information [FSCO/I], and the like. This is a matter of well confirmed and patently obvious fact, with billions of cases around us.

    With this sitting at the table of inference to best explanation, it is now evident that the a priori injection of materialism, as has been used to try to redefine science and its methods in the teeth of experience, history, and logic (on whatever excuses and strawman tactics, such as accusations of “giving up” or “God of the gaps,” etc.), is little more than question-begging.

    Pulling back, we can see cases all around us in a technological world that underscore just how effective FSCO/I and the like are as reliable signs of design. The strained attempts to avoid this would be laughable, save that they are such a sad indictment of where we have arrived as a civilisation.

    Next, we see that the world of cell-based life is chock full of reliable signs of design, starting from D/RNA and proteins.

    But the matter does not stop there.

    Pulling back further, we see that we live in an observed cosmos that is evidently quite finely tuned in many, many ways, set up in its underlying laws, parameters and initial circumstances at an operating point that provides a habitation for cell-based life of a type that is also compatible with intelligent cell-based life. Just one example: the four most abundant chemically active atoms are those that give us water, organic chemistry and proteins: H, C, O and N. That, in a context where there is a well known bit of fine tuning that addresses the abundance and balance of C & O.

    That points to design as credible explanation of the functionally specific, complex organisation at the origin of a cosmos in which such is the case.

    Indeed, it turns out that the proffered multiverse alternative is not only speculation without empirical — observational — warrant, but it also simply pushes the fine tuning back one step: the cosmos bread-making machine that bakes up a fit habitat for life is just as subject to fine tuning as the directly observed cosmos. (That’s no surprise; Dembski pointed out as much in NFL, pp. 149 ff. and subsequently. Namely, the search for a search [S4S] becomes just as hard as the original search. That is, once functionally specific complex organisation and associated information are on the table, with the radical contingency implied and the siting of FSCO/I in isolated islands of function in the space of plausibly accessible configs, such is not a credible result of chance.)

    And notice, at every step, the emphasis has fallen on things that are empirically well grounded, leading to inference on known reliable and observable sign.

    Lewontin’s a priori evolutionary materialism fails, and Gould’s NOMA fails too. And God of the gaps is a strawman.

    Last but not least, these things are not exactly news, nor are they particularly inaccessible, even here at UD.

    I therefore think it would be reasonable to expect that onward discussion should reflect a due diligence reckoning with what design theory is actually about and what science as a discipline is actually about in light of the underlying issues in logic and epistemology. (The 101 here may be of help.)

    Otherwise, we are simply looking at going in deadlocked circles, because of a refusal to address evidence and reasoning on the merits, but instead the all too commonly encountered strawmen.

    KF

  54.

    KF @52, thanks for the acknowledgment and for the contributing and supporting thoughts. As usual your commentary is helpful and appreciated.

    “Going further, there are observed, evidently accident-driven, rates of error in relevant processes of protein synthesis, etc. Similarly, in genome replication, we have reason to believe that there are errors that get into populations and indeed are used to trace the distribution/ancestry of human populations.

    It is a reasonable inference that — absent decisive evidence to the contrary — such variations are chance occurrences.”

    This is essentially my main point. I’m not against non-random explanations even for trivial changes, but those require positive evidence, given the sufficiency of transmission errors to account for these types of small changes. Furthermore, while presumed random changes might be goal-directed, we must also account for events such as the development of genetic diseases, which are better explained by random events than purposeful ones, imo.

    “Ironically, that can even be built into the design of the living system, as within an island of function it may be useful to have built-in robustness and adaptability, due to the ability to shift around within the zone of function. Cf. here, dogs and, it seems, the Red Deer family — which includes the North American Elk. Circumpolar species may also be a similar example.

    That is no problem for a design-centric view.”

    Yes, precisely. I think it’s reasonable to infer that organisms are robust precisely because of random factors which can lead to information degradation. It’s rather remarkable to note that biological systems exhibit specified subsystems whose apparent purpose is to keep organisms functional in the face of errors and damage.

    “All of this does not undermine the basic concerns and challenges relating to the Darwinian macro-evolutionary view: the ability of life forms to find islands of function, in vast config spaces, for the organised complex components of life, i.e. the origin of body plans. This starts with the body plan of the very first living cell. No roots, no shoots, branches or twigs.

    And that is why the implied concept of a vast connected continent of function, traversable incrementally by the said tree of life, is so pivotal to the whole evolutionary-materialism-dominated Darwinist view of the origins of the world of life. The Darwinist view implies such, but there is no good reason to infer it from the fossil record — which is dominated by suddenness of appearance, stasis of forms [with variation being within the form], gaps and disappearances.”

    Not only does this make randomness a poor explanation for the rich diversity of life (as well as its emergence), but it imposes some severe constraints upon gradualism as well. To my mind, this means that even if we can posit some intelligence as an underlying factor of random events, there is no good explanation for arriving at islands of form and function through small, incremental changes. This may be somewhat controversial, but I think it’s relevant.

  55.

    nightlight @49,

    “Although we’ll likely end up agreeing to disagree, let me crystallize three main reasons why an ID position which is tenable in the longer run (instead of one which has to keep backing off, being squeezed into ever narrower gaps) requires continued activity of ‘intelligent agency’ at all times and all places, from physical laws and up, so-called “errors” included.”

    I have enjoyed our conversation and appreciate your continued attempts to clarify your views. While I don’t always understand what you’re getting at, your patience has helped to identify areas of agreement as well as disagreement.

    “To distinguish below between this type of ‘universal ID’ (U-ID) and the conventional ID, I will label the latter as ‘part-time ID’ (PT-ID; due to allowing for absence of intelligent agency some of the time; the extreme point of PT-ID is classical deism, the ultimate part-time). Alternative labeling which would fit as well is hard-ID vs soft-ID. I will use U-ID vs PT-ID.”

    While I don’t really like your PT-ID label, because it has a pejorative flavor, your U-ID at least puts a name on what you propose, making it a little easier to deal with. The reason Part-Time ID is not appreciated is that ID proponents don’t deny that everything may indeed be designed: from the regular phenomena that are describable by physical laws, to the random aspects of contingency, to the intentional and contingent configurations of matter which result in a category of objects not amenable to explanation by physical law, random chance, or certain propositions of gradualism. Contingency in material arrangements can be partitioned between chaos and purposeful configurations, each potentially destructive to the other. If needed, I will use Intelligent Design Theory (IDT) for what you term PT-ID: a composite of hypotheses about the universe and living systems which posit that the products of intelligent activity — designed objects, systems, messages, etc. — have features which distinguish them from the products of unguided phenomena, such as geological processes.

    “1) Neo-Darwinian theory (ND=RM+NS) is hitching a free ride on top of an already highly intelligent system, cellular biochemical networks (CBN). These are intelligent networks, i.e. distributed self-programming computers running anticipatory algorithms of the same general kind as the human brain (both are modeled in these key traits by neural networks).”

    (My emphasis in this and any subsequent quotes, unless otherwise noted.)

    Without acquiescing to your CBN terminology, since I’m not entirely sure exactly what it encompasses, I can fully agree that neo-Darwinism is “hitching a free ride” on an intelligently designed system. For that matter, so is Darwinian evolution. By not having a viable mechanism for generating formal novelties or their underlying specification, it is tacitly presumed that whatever organisms do is “natural”. However, that’s potentially a weasel word, because it can be taken to imply that unguided processes can account for life’s existence and diversity.

    “The ND has picked out one positive feedback loop, M(utations) + NS (natural selection), which is one of CBN’s intelligent algorithms, and declared it the sole driver of evolution. But then, they also gratuitously attach a parasitic attribute ‘random’ to the M(utation) half, i.e. change M ==> RM. Their motivation for this over-specification M ==> RM is purely ideological, serving to promote atheism (with all its social and moral corollaries).”

    Agreed. Depending on the context, random does not imply unguided. The immune system makes use of a type of genetic algorithm to produce variations in antibodies: it relies on a random factor, but is goal-directed. ID proponents are generally careful to distinguish between targeted behaviors, which may still involve random components, and uniformly random occurrences, such as replication errors in E. coli.
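
    For concreteness, here’s a minimal sketch (my own toy illustration, not anyone’s model of the immune system) of a process that is goal-directed overall while relying on a random component, in the general spirit of a genetic algorithm. The target pattern and scoring function are made up for the example:

    ```python
    # Toy sketch: random variation + goal-directed selection.
    # TARGET and affinity() are hypothetical stand-ins, loosely analogous
    # to a "best-binding" antibody pattern and binding affinity.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

    def affinity(candidate):
        """Score a candidate by its match to the target (higher is better)."""
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        """Flip each bit with small probability -- the random component."""
        return [1 - b if random.random() < rate else b for b in candidate]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(100):
        # Goal-directed step: keep the best half, refill by mutating survivors.
        population.sort(key=affinity, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
        if affinity(population[0]) == len(TARGET):
            print(f"Target matched at generation {generation}")
            break
    ```

    The randomness here is real, but the outcome is anything but uniformly random, because selection steers the ensemble toward a target.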

    Also, I think I agree that neo-Darwinism assumes M→RM. However, that’s the converse of what I’ve been arguing for all along, which I think can be viewed as RM→M. This is the sufficient causal relationship as opposed to the necessary one.

    “The requirement of U-ID that intelligent agency (whose immediate tools or technology at this level are CBNs) is continually active during any M-process (mutation), guiding it and shaping it to some anticipated objectives, calls the ND on the above critical sleight of hand M ==> RM.”

    My criticism of that is the failure to distinguish between events which are goal-directed and those which are actually random. Random processes destroy information; and while it’s logically possible that some limited forms of intentional creative change could be introduced into seemingly random processes, a truly random outcome is distinguishable from the purposeful addition of specified complexity. Any thesis which conflates the two, I can’t really accept.

    Intelligence moves specification in a positive direction, and random influences move it in the opposite direction. Impose randomness upon specified information and it will eventually overtake it. Impose specification atop random occurrences and the sequences will no longer conform to a definition of randomness. These two forces, intelligent input and chaotic processes, move in entirely different directions, and in their rawest, most unconstrained forms are not compatible.
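
    A tiny simulation of that first claim may help: flip random bits of a fully specified string for long enough and its agreement with the original decays toward the ~50% level characteristic of noise. This is my own illustration, with made-up parameters:

    ```python
    # Toy sketch: randomness imposed on specified information overtakes it.
    import random

    random.seed(1)                       # fixed seed for a reproducible illustration
    spec = [1] * 64                      # a fully "specified" bitstring
    state = spec[:]
    for step in range(1, 2001):
        i = random.randrange(len(state)) # random, undirected change
        state[i] ^= 1                    # flip one bit
        if step % 500 == 0:
            match = sum(s == o for s, o in zip(state, spec)) / len(spec)
            print(f"after {step} flips: {match:.0%} agreement with the spec")
    ```

    Each bit performs a random walk between 0 and 1, so the expected agreement converges to 50%, which is what pure noise gives.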

    “Namely, U-ID can’t let them change M ==> RM without legitimate proof and explicit elimination of the ‘intelligently guided’ M-process (mutation), since M vs RM is a perfectly falsifiable distinction. The falsification requires modeling & computing probabilities of all possible adjacent states of DNA (e.g. via the quantum theory of molecular transitions) and establishing that the actual M-processes (mutations) observed are a fair sample from this large event space. This is exactly the same type of falsification one would have to use to falsify a fairness or randomness claim about rolling dice (such as the dice example discussed earlier).”
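
    For the dice case, the kind of fairness test you have in mind might look like the following sketch: a chi-square goodness-of-fit check of observed roll counts against the uniform expectation. The counts are invented for illustration:

    ```python
    # Sketch of a fairness test for a die: chi-square goodness of fit
    # against the uniform "fair die" hypothesis. Counts are made up.
    from collections import Counter

    observed = Counter({1: 95, 2: 105, 3: 98, 4: 110, 5: 92, 6: 100})
    n = sum(observed.values())
    expected = n / 6  # uniform expectation for each face

    chi_sq = sum((observed[face] - expected) ** 2 / expected for face in range(1, 7))

    # Critical value for 5 degrees of freedom at the 5% significance level.
    CRITICAL_5DF_05 = 11.070
    if chi_sq > CRITICAL_5DF_05:
        print(f"chi^2 = {chi_sq:.2f}: reject the fairness hypothesis")
    else:
        print(f"chi^2 = {chi_sq:.2f}: observations consistent with a fair die")
    ```

    (With these invented counts the statistic comes out well under the critical value, so this particular sample would not falsify fairness.)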

    I think this may be a two-way street. You appear to propose that intelligence can act through seemingly random processes, yet targeted or goal-directed mutations are not random by definition. So a distinction needs to be made between “random” and “designed” here. If intelligence can design through otherwise random processes, how does one distinguish between design through the influence of random factors and the explicit design inference warranted by goal-directed processes?

    Additionally, since intelligence is capable of simulating randomness, for example with cryptographically secure pseudo-random number generators, providing a falsifiability criterion for whether or not some seemingly random occurrence is actually random might impose an undue burden of proof.
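
    To make that concrete, here is a minimal sketch (illustrative only; the seed is hypothetical) of a deterministic generator built from SHA-256 in counter mode, the general idea behind cryptographically secure pseudo-random number generators. The output is fully determined by the seed, yet a simple statistical check can’t tell it from coin flips:

    ```python
    # Deterministic process, statistically random-looking output:
    # hash-in-counter-mode stream, the core pattern behind many CSPRNGs.
    import hashlib

    def prng_stream(seed: bytes, n_blocks: int) -> bytes:
        """Concatenate SHA-256(seed || counter) blocks into a byte stream."""
        out = b""
        for counter in range(n_blocks):
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        return out

    stream = prng_stream(b"hypothetical-seed", 1000)   # 32,000 bytes
    ones = sum(bin(byte).count("1") for byte in stream)
    total_bits = len(stream) * 8
    print(f"fraction of 1-bits: {ones / total_bits:.4f}  (a fair coin gives ~0.5)")
    ```

    Anyone who knows the seed can reproduce the stream exactly; anyone who doesn’t will find it passes randomness tests. That is the burden-of-proof problem in miniature.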

    “That’s the vital point that PT-ID (conventional ID as expressed in your and other posts) is needlessly surrendering on (and unsurprisingly, losing in courts). There is no need for that concession, since RM and IGM are both elements of M of equal a priori standing, absent any falsifications (which require the above probabilistic procedure for evaluating fairness of the observed samples of M-processes). Hence there is no scientific reason why ID should lose in courts as being less scientific than ND, provided ID is of the U-ID branch, hypothesizing a continuously active intelligent agency involved in IGM as an alternative hypothesis to the RM hypothesis.”

    This is an explicit area of disagreement that will not be resolved, for reasons I’ve given and repeated. RM→S (random mutations imply substitutions) is warranted, regardless of whether one accepts that intelligence might be able to influence random factors from some layer underlying particle physics. With regard to courts, such actual evidence of intelligence versus randomness is not what is being considered. I think you overestimate the judiciary with regard to its judgments on scientific matters and ID to date.

    “With the number of such demonstrable mechanisms increasing, the PT-ID will keep conceding the effects of such mechanisms as phenomena being ‘naturally’ explained, hence not requiring actions of intelligent agency. In contrast, the U-ID sees all such mechanisms as technologies or tools being created and operated by the continuously active intelligent agency.”

    That’s just not the direction discoveries are moving in; see my post #51 above. As the actual mechanisms are elucidated, and presumed randomness falls victim to purposeful design, ID is vindicated, not reduced. I don’t think you’ve made this case very well, although you’ve commented about it frequently. ID is not squeezed by discoveries of new purposeful, integrated, goal-directed mechanisms, and is not squeezed by the reduced role of random mutations as explanations for apparent design. I can only guess that you’re not very familiar with the ID literature or its actual claims. And attributing the very noteworthy effects of random degradations to intelligent forces causes more problems than it solves, imo.

    “3) The ID is not only about biological evolution, but also about the origin of life and the fine tuning of physical laws (including physical constants). The U-ID spans all of those, since it requires the common intelligence to uphold all those levels in operation at all times and all places. Hence, nothing exists from our physical level and up without continuous intelligent action of the ‘intelligent agency’.”

    IDT addresses origins, evolution, and cosmological fine tuning, yet it neither presupposes an underlying force which upholds the entire universe nor disallows such a force. Perhaps you should ask more questions and make fewer assumptions. There are people here better equipped than I to make clarifications, but from all your commenting here, I get the impression that you’re more of a salesman than a seeker of knowledge. ID does not claim to explain all of reality, but specific aspects of our observation of it, for instance specified and irreducible complexity. ID seeks to account for patterns in nature that are better explained by an intelligent cause than by unguided processes. It’s part of our uniform and repeated experience, and does not rely on an Intelligent Universal Theory of Everything.

    “In contrast, the PT-ID concedes present physical laws as ‘natural’, requiring no continuous intelligence to run. Hence, any time physics expands to explain yet another finely tuned physical constant, the PT-ID will have to back off from claiming the need of an ‘intelligence’ for that one. I.e., PT-ID will repeat the same shrinking pattern it exhibits at the level of evolution — any time a specific mechanism is uncovered (reverse-engineered), the space for actions of ‘intelligent agency’ diminishes.”

    Again, ID’s scope is limited. Because it doesn’t pretend to explain all of physical reality in a single unifying theory of everything, but rather to make sense of a subset of observations within reality, it can accommodate a situation where we find intelligence may actually be required to uphold it. These are all accusations that appear to come from your general impression of ID, and not from what prominent ID proponents actually say. I really suggest you read Behe, Meyer, Dembski, Denton, Wells, Richards, Gonzalez, etc., and ask more questions and make fewer assumptions. More text gets spent here at UD correcting misrepresentations of IDT than is warranted.

    Also, I think you could make your claims about specifics clearer. For instance, what role does intelligence play in raw random factors: does intelligence cause randomness, does intelligence design through randomness, and are purely random effects distinguishable from specification by objective qualification or quantification? Does U-ID take issue with Darwinian evolutionary assumptions, such as gradualism, or does it just replace the “random” factor in “random variation” with “intelligence”? Since U-ID considers that random effects are intelligently guided, how does it account for purely negative effects such as genetic diseases, loss of function, and general degradation? Are these effects just as intelligent as the constructive, design-generating ones, and how do we distinguish?

    Anyway nightlight, I’ve enjoyed our dialog, despite airing some frustrations. If you want to take the last word on the subject, be my guest. I can’t guarantee I won’t respond to something you say if you bring up new material, but I get the sense that this conversation should probably wind down now. Thanks much for your indulgence.

    Best,
    Chance

    P.S. Apologies for the hasty composition and length of this post. :)

  56. kf, this post from ENV may interest you:

    Information, Past and Present – Winston Ewert – April 15, 2013
    http://www.evolutionnews.org/2.....71201.html

  57. Chance Ratcliff #55, kairosfocus #53

    Thanks both for some clarifications on positions of conventional ID, which I labeled above PT-ID (part-time ID; that was not meant as a derogatory term, but as a literal description of its intermittent activations).

    Ultimately, it seems both of you ended up reaffirming the “part time” label — you divide processes (or systems) into “natural” (explainable by known physical, chemical… laws; these could include random and deterministic elements) and the “un-natural” (those not explainable by known laws), which include some that are “intelligently guided” as a sub-category.

    Besides the several major practical weaknesses of that approach listed previously (which are due to its unambiguous ‘part-time’ aspect), its more fundamental flaw is in projecting (or conflating) the epistemological into the ontological traits of these processes (or systems).

    Namely, while you treat “natural” vs “intelligently guided” as intrinsic/ontological properties of some processes, they are in fact only properties of our present knowledge about these processes.

    For example, if we were cavemen discussing similar topics, you would be the one arguing that movements of stars, rain, lightning… were all ‘intelligently guided’ (by some spirits du jour) processes, while, say, a rock falling down onto the ground is a “natural” process (being easily controllable & reproducible). Some millennia later, all those “intelligently guided” processes have magically transmuted into “natural”, without anything at all changing about them. Hence, these couldn’t have been intrinsic properties of those processes.

    In other words, this division of processes by PT-ID into “natural” vs “intelligently guided” is a conflation of the map with the territory, analogous to insisting that nations of the world are red, yellow, blue, green,… based on the particular coloring used on the currently most authoritative map of the world. This is clearly neither a well-founded nor a sustainable position for “intelligent guidance”.

    The universal ID (U-ID) I am advocating is simply a more coherent position, exploring the ontological properties proper, without confusing them with the relation of those processes to our present knowledge about them. Hence, my inference of intelligent design & guidance is not based on the presently unexplained complexity of biological systems (these merely amplify it), but on the knowability of the world despite its phenomenological richness, especially on the mathematical elegance and coherence of the physical laws.

    Hence, U-ID is an offshoot of the classical ID of the ancient Greeks (such as the Pythagoreans), or of some even more ancient caveman philosophers arguing with those other cavemen mentioned above, warning them that their division into “natural” vs “intelligently guided” processes is superficial and unsustainable.

  58. nightlight,

    Just a note of thanks for your message #32. I was disallowed by KF from posting in Russian (even though I wasn’t telling jokes about him ;). But nevertheless I’m glad that you could understand my words, as transliteration is oftentimes a challenge! (The institution where I work has 4 official languages, two with Slavic script, so these problems come up regularly.) I’ll be out of the country for a while and won’t respond again here soon (so expect name-calling like above!), but would welcome private contact if you like, which you can find from the links you’ve already followed. You’ve made an interesting impression here at UD, and as a ‘scientist’ from the ‘East’ living in the USA, you provide a unique viewpoint most at UD are not used to.

    “my inference of intelligent design & guidance is not based on presently unexplained complexity of biological systems (these merely amplify it) but on knowability of the world despite its phenomenological richness, especially on the mathematical elegance and coherence of the physical laws.” – nightlight

    This sounds a bit like Romanian-American Adrian Bejan’s views of ‘design in nature’ as a given (though without a ‘Designer,’ as he is an agnostic/atheist, and without the adjective ‘intelligent’ before ‘design’). I wonder if you’ve come across Bejan’s work? UD has thus far (not surprisingly) avoided his ‘Design in Nature’ (2012) book. I’d be quite curious to hear your thoughts about it if you have.

    “PT-ID is a weak position, self-condemned to keep shrinking as science expands and losing in courts. Its final natural endpoint in the long run is a classical deism — intelligent agency which designed and set universe into motion in the initial act of creation, then got out of it, which is the ultimate form of part-time ID, shrunk to a point.” – nightlight

    Yeah, that pretty much nails it to a point. But be warned from personal experience: IDists, neo-creationists and Discovery Institute aficionados don’t like their acronym ‘ID’ being played with or experimented with. They are trying for a static, established, monumental definition of ‘ID’ (even if their big-tent approach has widely failed so far). My ‘Big-ID’ vs. ‘small-id’ distinction, while already well-established in the very small portion of mainstream science, philosophy, and religion literature that takes IDism seriously, has been violated and attacked by supposedly peace-loving IDists here at UD. Perhaps they are not the only ‘victims’ of injustice?!

    @Eric Anderson #24: this primer would be crushed (read: opening-round defeat) in an actual ‘evolution debate’ outside of friendly-ID territory, especially in a ‘live’ situation unchained from mere black-and-white text.

    KF, posing as competent in philosophy-of-science demarcation (which he has repeatedly shown he is not), writes a ‘gem’ of a distortion about “the amount of abusive behaviour in and around the Internet regarding the design issue”, in the poor-victimized-IDists ‘expelled’ genre.

    No, KF – GEM, ‘design’ is a perfectly usable and well-explored concept in a variety of fields. I will again be presenting a paper that includes the concept of ‘design’ next week. But it has *nothing* to do with Intelligent Design Theory, the Discovery Institute, Uncommon Descent, ASA, IDEA, etc. – i.e. evangelical IDism sites and sources. Indeed, such views of ‘design’ as ‘ID’ are best understood as ‘deviant,’ as sociologists and criminologists (of science!) call it. The vast majority of ‘design issues’ are well-worth discussing, and fascinating indeed, free as they are from the conspiracy-crazed Public Relations world of ‘Intelligent Design Theory,’ the neo-creationism built by and for American evangelicals and those few others that have been deluded to adopt the DI’s ‘designist’ language.

    It’s rather sad that IDists don’t realise the abuse they perpetrate against other ‘design issue’ fields by their deviant actions and strategies. Really, it is a shame that they’ve given ‘design’ such a bad name (and will insist on denying it)! And all of the religious folks who took the time and effort to honestly read and even really tried to ‘buy’ ‘Intelligent Design Theory’, and then saw clearly through it and have now wisely, steadfastly or emphatically rejected it, are disgraced by IDists. The latter still claim a mirage of a monopoly over the terms ‘intelligent design’ & ‘Intelligent Design’, while most Abrahamic believers have for a long time accepted, and still do accept, the percept of ‘intelligent design’, meaning a theistic worldview, which seems to be what is properly meant by U-ID.

    nightlight’s writing is shining big and wide on the glaring ‘gaps’ in IDist ‘theories’. That doesn’t mean, however, that IDists should take it as a personal insult, or that they are thought of as ‘bad guys/gals.’ The point is not to moralise about IDists, just to speak as objectively as possible (even as reflexive subjectivity inevitably creeps in, as we are human persons) about this supposedly ‘natural scientific’ theory that was *undeniably* concocted in the 1980s and ’90s by Thaxton, Meyer, Behe, Dembski, et al., and which is still supposedly ‘evolving,’ as folks like Sewell like to say about ‘technologies.’ IDists aren’t ‘designing’ the future of their theory anyway, but instead placing their bets on using ‘historical science’ to pontificate about how wrong a 19th-century British naturalist was, using the tools and ideas available during his epoch.

    The IDM’s strategy of IDism doesn’t count as well-designed or well-planned in my books, nor in views expressed in relations with respectable and sometimes top-quality scientists, scholars and theologians from around the world. But my voice is just one small thread in a larger yarn that IDists will (until the end of the IDM) spin and try to weave their own way.

    Gregory

    p.s. nightlight, I like your style of bolding certain words or phrases and, unlike some fickle editorial folk at UD, I don’t assume that you are yelling by this stylistic approach, but rather that you are identifying important emphasis.

  59. Gregory #58: I like your style of bolding certain words… [I] don’t assume that you are yelling by this stylistic approach, but identifying important emphasis.

    Yep, that was the idea. This would be YELLING. Bold within a paragraph, in its natural case, is meant to help the reader scan the text more quickly and in chunks, with bold fragments setting up the point or context for the whole paragraph ahead of the sequential reading. Web/hypertext reading is as different from book/linear reading as flying is from rowing a boat.

    “Yeah, that pretty much nails it to a point. But be warned from personal experience; IDists, neo-creationists and Discovery Institute aficionados don’t like their acronym ‘ID’ being played with or experimented with.”

    I find it helpful for my own understanding to assign distinct labels to distinct concepts or entities. In the short time I have spent here on UD, there seem to be multiple currents of ID even in this small forum. The descriptive labels I used, such as “universal ID” (U-ID) and “part-time ID” (PT-ID), merely reflect some of the more obvious distinctions between the observed currents. Since labels are arbitrary conventions anyway, if anyone is offended by my plainly descriptive labels, they’re welcome to offer less descriptive or more euphemistic variants for their own branch, or even abstract ones such as X-ID or Y-ID.

    “This sounds a bit like Romanian-American Adrian Bejan’s views of ‘design in nature’ as a given…”

    This is the first time I have run into that material. Checking his web site about his “Constructal Theory”, this seems to be a pursuit of the ‘holy grail’ of ‘complexity science’ (Santa Fe Institute, SFI), which is to formulate a 4th law of thermodynamics that captures in some way the essence of ‘complex systems’. Other characterizations include highly non-linear, chaotic or dissipative systems by Prigogine, systems at the edge between order and chaos by SFI & Wolfram, the ‘principle of computational equivalence’ by Wolfram, etc. While each captures a bit of the same pattern, it’s still unfinished business.

    Bejan’s version of the 4th law amplifies the 2nd law by saying that a (complex) system will not only seek to maximize its entropy, but will also organize itself so it can do so in the fastest way possible, by improving/facilitating the flows between the system components. That pattern is indeed easily perceived in nature.

    As a bit of synchronicity here, I have recently been tackling this same optimization problem for switching networks (such as those used in large-scale data centers). Through some lucky guesses I discovered that the problem of maximizing the flows (or throughput) of a certain large class of networks (Cayley graphs) is mathematically exactly the same problem as that of optimizing error-correcting codes (ECC), i.e. maximizing the Hamming distance between codewords.
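
    For readers unfamiliar with the ECC side, here is a tiny, self-contained illustration of the quantity being maximized: the minimum pairwise Hamming distance of a code. The four codewords below are made up purely for the example:

    ```python
    # Minimum pairwise Hamming distance of a toy code.
    from itertools import combinations

    def hamming(a: str, b: str) -> int:
        """Number of positions where two equal-length bitstrings differ."""
        return sum(x != y for x, y in zip(a, b))

    # A made-up 4-codeword code over 6 bits, for illustration only.
    code = ["000000", "001111", "110011", "111100"]
    d_min = min(hamming(a, b) for a, b in combinations(code, 2))
    print(f"minimum Hamming distance: {d_min}")
    ```

    The larger this minimum distance, the more bit errors the code can detect and correct; the optimization problem is to push it as high as the code length and size allow.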

    Since the ECC field is much more mature than the field of network throughput optimization, there are tens of thousands of optimal ECC solutions which can now be easily translated, via a simple recipe given in the paper (pp. 28-29), into optimal-throughput networks.

    If one then interprets the resulting networks (which I call “Long Hop” networks) as state diagrams of Markov processes, then they represent the fastest-mixing Markov processes, i.e. processes with the fastest approach to the max-entropy state. Hence, they are a realization of Bejan’s law in this context (i.e. for processes which have these Cayley graphs as Markov state diagrams).
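
    To illustrate the spectral criterion behind “fastest mixing” (a toy example only, not the Long Hop construction from the paper): for a random walk on a graph, the smaller the second-largest eigenvalue magnitude of the transition matrix, the faster the walk approaches the uniform, max-entropy distribution. The 4-cube below is the Cayley graph of Z_2^4, with edges joining vertices at Hamming distance 1:

    ```python
    # Spectral gap of a lazy random walk on the 4-dimensional hypercube.
    import numpy as np

    n = 4
    size = 2 ** n
    A = np.zeros((size, size))
    for u in range(size):
        for bit in range(n):
            v = u ^ (1 << bit)         # flip one bit: neighbor at Hamming distance 1
            A[u, v] = 1.0

    # Lazy walk (stay put with probability 1/2) avoids the bipartite
    # periodicity of the plain hypercube walk, so it actually mixes.
    P = (np.eye(size) + A / n) / 2

    eigenvalues = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    print(f"second-largest |eigenvalue|: {eigenvalues[1]:.4f}")
    print(f"spectral gap: {1 - eigenvalues[1]:.4f}  (larger gap => faster mixing)")
    ```

    Maximizing throughput in the network corresponds, in this picture, to maximizing the spectral gap of the associated walk, hence the fastest approach to maximum entropy.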

    Good job with NL above; and yup, Mung is probably being tongue-in-cheek

    :)

  61. NickMatzke_UD #5: You can’t just go say ID is not about the immune system, but instead about the origin of life and the Cambrian explosion.

    The ‘irreducible complexity’ examples by Behe, or the CSI examples by Dembski, serve as counterexamples to the neo-Darwinian theory of evolution (ND=RM+NS), pointing to instances where ND’s RM+NS mechanism seems incapable of explaining the particular biological artifacts.

    The existence of direct counterexamples has no bearing on whether the RM+NS mechanism is capable of explaining some other biological artifacts, such as micro-evolution (e.g. bacterial resistance to antibiotics).

    For example, say you offer a theory NM_UD that declares, among other things:

    … x*x > 10 for all integers x.

    To invalidate NM_UD, it suffices to exhibit an integer x, such as x=3, for which this NM_UD statement is false. That’s falsification by counterexample. Whether there are some integers x for which x*x>10 holds, or whether NM_UD has some other statements which are valid, is irrelevant to the established fact that NM_UD is a falsified theory.

    There is also no logical or scientific requirement that a falsification by counterexample must provide an alternative theory explaining the phenomena NM_UD sought to explain in order for NM_UD to be declared falsified.
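
    The whole argument fits in a couple of lines of code, searching a small range for violations of the NM_UD claim:

    ```python
    # Falsification by counterexample: one violating integer suffices to
    # refute the universal claim "x*x > 10 for all integers x".
    counterexamples = [x for x in range(-5, 6) if not x * x > 10]
    print(counterexamples)   # [-3, -2, -1, 0, 1, 2, 3] -- the claim is falsified
    ```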

  62. Ooops, sorry, #61 was posted into a wrong thread. It was meant to go here.
