
ID Foundations: The design inference, warrant and “the” scientific method


It has been said that Intelligent Design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.” This puts the design inference at the heart of intelligent design theory, and raises the questions of its degree of warrant and of its relationship to the — insofar as a “the” is possible — scientific method.

Leading Intelligent Design researcher William Dembski has summarised the actual process of inference:

“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last”  . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” [Cf. Peter Williams’ article, The Design Inference from Specified Complexity Defended by Scholars Outside the Intelligent Design Movement, A Critical Review, here. We should in particular note his observation: “Independent agreement among a diverse range of scholars with different worldviews as to the utility of CSI adds warrant to the premise that CSI is indeed a sound criterion of design detection. And since the question of whether the design hypothesis is true is more important than the question of whether it is scientific, such warrant therefore focuses attention on the disputed question of whether sufficient empirical evidence of CSI within nature exists to justify the design hypothesis.”]

The design inference process as described can be represented in a flow chart:

[Flow chart: the Explanatory Filter]

Fig. A: The Explanatory Filter and the inference to design, as applied to various aspects of an object, process or phenomenon, and in the context of the generic scientific method. (So, we first envision nature acting by low-contingency, law-like mechanical necessity, such as with F = m*a . . . think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 m/s^2 or so. That is the first default. Similarly, we may see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation or process exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. In such a case, we have good reason to infer that the aspect of the object, process or phenomenon reflects design or . . . following the terms used by Plato 2,350 years ago in The Laws, Bk X . . . the ART-ificial, or contrivance, rather than nature acting freely through undirected blind chance and/or mechanical necessity. [NB: This trichotomy across necessity and/or chance and/or the ART-ificial is so well established empirically that it needs little defense. Those who wish to suggest that there may be a fourth possibility are the ones who first need to show us such a possibility before they are to be taken seriously. Where, too, it is obvious that the distinction between “nature” (= “chance and/or necessity”) and the ART-ificial is a reasonable and empirically grounded distinction; just look at a list of ingredients and nutrients on a food package label. The loaded rhetorical tactic of suggesting, implying or accusing that design theory really only puts up a religiously motivated way to inject the supernatural as the real alternative to the natural, fails. (Cf. the UD correctives 16 – 20 here, as well as 1 – 8 here.) And when, say, the averaging out of random molecular collisions with a wall gives rise to a steady average pressure, that is a case of an empirically reliable, lawlike regularity emerging as a strong characteristic of such a process when sufficient numbers are involved, due to the statistics of very large numbers . . . it is easy to have 10^20 molecules or more at work . . . there is then a relatively low fluctuation, unlike what we see with particles undergoing Brownian motion. That is, in effect, low-contingency mechanical necessity in action, in the sense we are interested in. So, for instance, we may derive for ideal gas particles the relationship P*V = n*R*T as a reliable law.])

Explaining (and discussing) in steps:

1 –> As was noted in background remarks 1 and 2, we commonly observe signs and symbols, and infer on best explanation to underlying causes or meanings. In some cases, we assign causes to (a) natural regularities tracing to mechanical necessity [i.e. “law of nature”], in others to (b) chance, and in yet others we routinely assign cause to (c) intentional, intelligent, purposefully directed configuration, or design. Or, in leading ID researcher William Dembski’s words, (c) may be further defined in a way that shows what intentional and intelligent, purposeful agents do, and why it results in functional, specified complex organisation and associated information:

. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)

2 –> As an illustration, we may discuss a falling, tumbling die:

A pair of dice showing how 12 edges and 8 corners contribute to a flat random distribution of outcomes as they first fall under the mechanical necessity of gravity, then tumble and roll influenced by the surface they have fallen on. So, uncontrolled small differences make for maximum uncertainty as to final outcome. (Another way for chance to act is by quantum probability distributions such as tunnelling for alpha particles in a radioactive nucleus.)

Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]

3 –> A key aspect of inference to cause is the significance of observed characteristic signs of causal factors, where we may summarise such observation and inference on sign as:

I observe one or more signs [in a pattern], and infer the signified object, on a warrant:

I: [si] –> O, on W

a –> Here, the connexion is a more or less causal or natural one, e.g. a pattern of deer tracks on the ground is an index, pointing to a deer.

b –> If the sign is not a sufficient condition of the signified, the inference is not certain and is defeatable; though it may be inductively strong. (E.g. someone may imitate deer tracks.)

c –> The warrant for an inference may in key cases require considerable background knowledge or cues from the context.

d –> The act of inference may also be implicit or even intuitive; I may not be able to articulate it, but may still be quite well warranted in trusting it, especially if it traces to senses I have good reason to accept are working well, acting in situations that I have no reason to believe will materially distort the inference.

4 –> Fig. A highlights the significance of contingency in assigning cause. If a given aspect of a phenomenon or object is such that under similar circumstances, substantially the same outcome occurs, the best explanation of the outcome is a natural regularity tracing to mechanical necessity.  The heavy object in 2 above, reliably and observably falls at 9.8 m/s^2 near the earth’s surface. [Thence, via observations and measurements of the shape and size of the earth, and the distance to the moon, the theory of gravitation.]

5 –> When, however, under sufficiently similar circumstances, the outcomes vary considerably on different trials or cases, the phenomenon is highly contingent. If that contingency follows a statistical distribution and is not credibly directed, we assign it to chance. For instance, given eight corners and twelve edges plus highly non-linear behaviour, a standard, fair die that falls and tumbles exhibits sensitive dependence on initial and intervening conditions, and so settles to a reading pretty much by chance. Things that are similar to that — notice the use of “family resemblance” [i.e. analogy] — may confidently be seen as chance outcomes.
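To make the point concrete, a minimal Python sketch, treating the tumbling die as an idealised uniform chance process:

import random
from collections import Counter

random.seed(1)  # arbitrary seed, for repeatable output
tosses = Counter(random.randint(1, 6) for _ in range(60_000))
for face in range(1, 7):
    print(face, tosses[face])   # each count lands near 10,000: a flat distribution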

6 –> However, under some circumstances [e.g. a suspicious die], the highly contingent outcomes are credibly intentionally, intelligently and purposefully directed. Indeed:

a: When I type the text of this post by moving fingers and pressing successive keys on my PC’s keyboard,

b: I [a self, and arguably:  a self-moved designing, intentional, initiating agent and initial cause] successively

c: choose alphanumeric characters (according to the symbols and rules of a linguistic code)  towards the goal [a purpose, telos or “final” cause] of writing this post, giving effect to that choice by

d: using a keyboard etc, as organised mechanisms, ways and means to give a desired and particular functional form to the text string, through

e: a process that uses certain materials, energy sources, resources, facilities and forces of nature and technology  to achieve my goal.

. . . The result is complex, functional towards a goal, specific, information-rich, and beyond the credible reach of chance [the other source of high contingency] on the gamut of our observed cosmos across its credible lifespan.  In such cases, when we observe the result, on common sense, or on statistical hypothesis-testing, or other means, we habitually and reliably assign outcomes to design.

7 –> For further instance, we could look at a modern version of Galileo’s famous cathedral chandelier-as-pendulum experiment.

i: If we were to take several measures of the period for a given length of string and [small] arc of travel, we would see a strong tendency to a specific period. This is by mechanical necessity.

ii: However, we would also notice a scattering of the results, which we assign to chance and usually handle by averaging out [and perhaps by plotting a frequency distribution].

iii: Also, if we were to fix the string length and gradually increase the arc, especially as the arc goes past about six degrees, we would notice that the initial law no longer holds (a numerical sketch follows this list). But Galileo — who should have been able to spot the effect — reported that the period was independent of arc. (This is a case of “cooking.” Similarly, had he dropped a musket ball, a feather and a cannon ball over the side of the tower in Pisa, the cannon ball would have hit the ground just ahead of the musket ball, and of course considerably ahead of the feather.)

iv: So, even in doing, reporting and analysing scientific experiments, we routinely infer to law, chance and design, on observed signs.
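To make point iii concrete, a small Python sketch using the standard first-order amplitude correction for a simple pendulum, T ~ 2*pi*sqrt(L/g) * (1 + theta^2/16); the string length and g values are merely illustrative:

import math

g, L = 9.8, 1.0                            # illustrative values
T0 = 2 * math.pi * math.sqrt(L / g)        # small-angle ("law-like") period
for deg in (2, 6, 20, 45):
    theta = math.radians(deg)              # arc amplitude, in radians
    T = T0 * (1 + theta ** 2 / 16)         # first-order correction term
    print(f"arc {deg:>2} deg: T = {T:.4f} s vs small-angle T0 = {T0:.4f} s")

The correction is negligible at a degree or two, but grows noticeably as the arc widens: the simple “law” holds only in a restricted regime.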

8 –> But, are there empirically reliable signs of design that can be studied scientifically, allowing us to confidently complete the explanatory filter process? Design theorists answer, yes, and one definition of design theory is, the science that studies signs of design. Thus, further following Peter Williams, we may suggest that:

. . . abstracted from the debate about whether or not ID is science, ID can be advanced as a single, logically valid syllogism:

(Premise 1)    Specified complexity reliably points to intelligent design.

(Premise 2)    At least one aspect of nature exhibits specified complexity.

(Conclusion) Therefore, at least one aspect of nature reliably points to intelligent design.

9 –> For instance, in the 1970s Wicken saw that organisation, order and randomness are very distinct, and have characteristic signs:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and note added.)]

10 –> We see here the idea-roots of a term commonly encountered at UD: functionally specific, complex information [FSCI]. (The onward restriction to digitally coded FSCI [dFSCI], as is seen in DNA and as will feature below, should also be obvious. I add [11:01:18], based on b/g note 1: once we see digital code and a processing system, we are dealing with a communication system, and so the whole panoply of the code [a linguistic artifact], the message in the code as sent and as received, and the apparatus for encoding, transmitting, decoding and applying, all speak to a highly complex — indeed, irreducibly so — system of intentionally directed configuration, and to messages that [per habitual and reliable experience and association] reflect intents. From b/g note 2, the functional sequence complexity of such a coded data entity also bespeaks organisation, as distinct from randomness and order; this can in principle and in practice be measured, and shows that beyond a reasonable threshold of complexity the coded message itself is an index-sign pointing to its nature as an artifact of design, thence to its designer as the best explanation for the design.)

11 –> Earlier, in reflecting on the distinctiveness of living cells, Orgel had observed:

In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

12 –> This seems to be the first technical use of the term “specified complexity,” which is now one of the key — and somewhat controversial — terms of design theory.  As the second background note summarises, Dembski and others have quantified the term, and have constructed metrics that allow measurement and decision on whether or not the inference to design is well-warranted.

13 –> However, a much simpler rule of thumb metric can be developed, based on a common observation highlighted in points 11 – 12 of the same background note:

11 –> We can compose a simple metric . . . Where function is f, and takes values 1 or 0 [as in pass/fail], complexity threshold is c [1 if over 1,000 bits, 0 otherwise] and number of bits used is b, we can measure FSCI in functionally specific bits, as the simple product:

FX = f*c*b, in functionally specific bits

12 –> Actually, we commonly see such a measure; e.g. when we see that a document is, say, 197 k bits long, that means it is functional as, say, an Open Office Writer document, is complex, and uses 197 k bits of storage space.
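As one possible coding of this rule of thumb (a sketch, not a canonical implementation), the product FX = f*c*b takes only a few lines of Python:

def fsci_bits(functional, bits_used, threshold=1000):
    f = 1 if functional else 0                # pass/fail on observed function
    c = 1 if bits_used > threshold else 0     # complexity threshold crossed?
    return f * c * bits_used                  # FX, in functionally specific bits

# The 197 k bit word-processor document of point 12 above:
print(fsci_bits(functional=True, bits_used=197_000))    # 197000
# The same storage filled with random noise scores zero:
print(fsci_bits(functional=False, bits_used=197_000))   # 0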

13a –> Or [added Nov 19 2011] we may use logarithms to reduce and simplify the Dembski Chi metric of 2005, thusly:

>> 1 –> 10^120 ~ 2^398

2 –> I = – log2(p), the Hartley-Shannon information measure for an event of probability p . . . eqn n2

3 –> So, we can re-present the Chi-metric [where, from Dembski, Specification 2005, χ = – log2[10^120 · ϕS(T) · P(T|H)] . . . eqn n1]:

Chi = – log2(2^398 * D2 * p), writing D2 for ϕS(T) and p for P(T|H) . . . eqn n3

Chi = Ip – (398 + K2), where Ip = – log2(p) and K2 = log2(D2) . . . eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of information, for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.
5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . .

6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, then it is so deeply isolated that a chance-dominated process is maximally unlikely to find it; but of course intelligent agents routinely produce information beyond such a threshold.

7 –> In addition, the only observed cause of information beyond such a threshold is the now proverbial intelligent semiotic agents.

8 –> Even at 398 bits that makes sense, as the total number of Planck-time quantum states for the atoms of the solar system [most of which are in the Sun] since its formation does not exceed ~ 10^102, as Abel showed in his 2009 Universal Plausibility Metric paper. The search resources in our solar system just are not there.

9 –> So, we now clearly have a simple but fairly sound context to understand the Dembski result, conceptually and mathematically [cf. more details here]; tracing back to Orgel and onward to Shannon and Hartley . . . .

As in (using Chi_500 for VJT’s CSI_lite [UPDATE, July 3: and S for a dummy variable that is 1/0 accordingly as the information in I is empirically or otherwise shown to be specific, i.e. from a narrow target zone T, strongly UNREPRESENTATIVE of the bulk of the distribution of possible configurations, W]):

Chi_500 = Ip*S – 500, bits beyond the [solar system resources] threshold . . . eqn n5

Chi_1000 = Ip*S – 1000, bits beyond the observable cosmos, 125 byte/143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip*S – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a

[UPDATE, July 3: So, if we have a string of 1,000 fair coins, and toss at random, we will by overwhelming probability expect to get a near 50-50 distribution typical of the bulk of the 2^1,000 possibilities W. On the Chi_500 metric, I would be high, 1,000 bits, but S would be 0, so the value for Chi_500 would be – 500, i.e. well within the possibilities of chance. However, if we came to the same string later and saw that the coins somehow now had the bit pattern of the ASCII codes for the first 143 or so characters of this post, we would have excellent reason to infer that an intelligent designer, using choice contingency, had intelligently reconfigured the coins. That is because, using the same I = 1,000 capacity value, S is now 1, and so Chi_500 = 500 bits beyond the solar system threshold. If the 10^57 or so atoms of our solar system, for its lifespan, were to be converted into coins and tables etc., and tossed at an impossibly fast rate, it would be impossible to sample enough of the possibility space W to have confidence that something from so unrepresentative a zone T could reasonably be explained on chance. So, as long as an intelligent agent capable of choice is possible, choice — i.e. design — would be the rational, best explanation on the sign observed: functionally specific, complex information.]

10 –> Similarly, the work of Durston and colleagues, published in 2007, fits this same general framework . . . .

We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space . . . .

11 –> So, Durston et al are targeting the same goal, but have chosen a different path from the start-point of the Shannon-Hartley log-probability metric for information. That is, they use Shannon’s H, the average information per symbol, and address shifts in it from a ground to a functional state on investigation of protein family amino acid sequences. They also do not identify an explicit threshold for degree of complexity. [Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric:

Using Durston’s Fits from his Table 1, in the Dembski-style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7

The two metrics are clearly consistent . . . (Think about the cumulative fits metric for the proteins for a cell . . . )

In short, one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage-unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained].>>
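Pulling 13a together: a minimal Python sketch (my illustration; chi_500 is just a convenient label) of the reduced metric of eqn n5, applied to the 1,000-coin example and to the Durston Fit values above:

def chi_500(ip_bits, s):
    # ip_bits: information-carrying capacity in bits; s: 1/0 specificity dummy variable
    return ip_bits * s - 500   # eqn n5: bits beyond the solar-system threshold

# 1,000 coins tossed at random: high capacity, but not specified (S = 0).
print(chi_500(1000, 0))        # -500, well within the reach of chance
# The same coins found spelling out 143 ASCII characters of text (S = 1).
print(chi_500(1000, 1))        # 500 bits beyond the threshold
# Durston Fit values from Table 1, taken as specified (S = 1):
for name, fits in (("RecA", 832), ("SecY", 688), ("Corona S2", 1285)):
    print(name, chi_500(fits, 1))   # 332, 188, 785 -- matching results n7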

13b –> Some [GB et al] have latterly tried to discredit the idea of a dummy variable in a metric, as a question-begging a priori used to give us the result we “want.”  Accordingly, in correction, let us consider:

1 –> The first thing is: why is S = 0 the default? ANS: Simple; that is the value that says we have no good warrant, no good objective reason, to treat as a serious candidate anything more than chance and necessity acting on matter in space across time.

2 –> In the case of Pinatubo [a well-known volcano], that is tantamount to saying that however complex the volcano edifice may be, its history can be explained on its being a giant-sized, aperiodic relaxation oscillator that tends to cycle from quiescence to explosive eruption depending on charging up, breaking through, erupting, discharging and re-blocking; in turn, driven by underlying plate tectonics. As SA just said: S = 0 means it’s a volcano!

3 –> In short, we are looking at an exercise in doing science, per the issue of scientific warrant on empirically based inference to best explanation . . . .

5 –> But as was repeatedly laid out with examples, there is another class of known causal factors capable of explaining highly contingent outcomes that we do not have a good reason to expect on blind chance and mechanical necessity, thanks to the issue of the needle in the haystack.

6 –> Namely, the cause as familiar as that which best explains the complex, specified information — significant quantities of contextually responsive text in English coded on the ASCII scheme — in this thread. Intelligence, working by knowledge and skill, and leaving characteristic signs of art behind.

7 –> Notice how we come to this: we see complexity, measured by the scope of possible configurations, and we see objectively, independently definable specificity, indicated by descriptors that lock down the set of possible or observed events E to a narrow zone T within the large config space W, such that a blind search process based on chance plus necessity will only sample so small a fraction that it is maximally implausible for it to hit on a zone like T. Indeed, per the needle-in-the-haystack or infinite monkeys type analysis, it is credibly unobservable.

8 –> Under those circumstances, once we see that we are credibly in a zone T, by observing an E that fits in a T, the best explanation is the known, routinely observed cause of such events, intelligence acting by choice contingency, AKA design.

9 –> In terms of the Chi_500 expression . . .

a: I is a measure of the size of config space, e.g. 1 bit corresponds to two possibilities, 2 bits to 4, and n bits to 2^n so that 500 bits corresponds to 3 * 10^150 possibilities and 1,000 to 1.07*10^301.

b: 500 is a threshold, whereby the 10^57 atoms of our solar system could in 10^17 s carry out some 10^102 Planck-time quantum states, giving an upper limit to the scope of search; where the fastest chemical reactions take up about 10^30 Planck-time quantum states each.

c: In familiar terms, 10^102 possibilities from 10^150 is 1 in 10^48, or about a one-straw sample of a cubical haystack about 3 1/2 light days across (these figures are cross-checked numerically just after this list). An entire solar system could lurk in it as “atypical,” but that whole solar system would be so isolated that — per well-known and utterly uncontroversial sampling theory results — it is utterly implausible that any blind sample of that scope would pick up anything but straw; straw being the overwhelming bulk of the distribution.

d: In short not even a solar system in the haystack would be credibly findable on blind chance plus mechanical necessity.

e: But, routinely, we find many things that are like that, e.g. posts in this thread. What explains these is that the “search” in these cases is NOT blind, it is intelligent.

f: S gives the criterion that allows us to see that we are in this needle in the haystack type situation, on whatever reasonable grounds can be presented for a particular case, noting again that the default is that S = 0, i.e. unless we have positive reason to infer needle in haystack challenge, we default to explicable on chance plus necessity.

g: What gives us the objective ability to set S = 1? ANS: Several possibilities, but the most relevant one is that we see a case of functional specificity as a means of giving an independent, narrowing description of the set T of possible E’s.

h: Functional specificity is particularly easy to see, as when something is specific in order to function, it is similar to the key that fits the lock and opens it. That is, specific function is contextual, integrative and tightly restricted. Not any key would do to open a given lock, and if fairly small perturbations happen, the key will be useless.

i: The same obtains for parts for, say, a car, or even strings of characters in a post in this thread, or, notoriously, computer code. (There is an infamous case where NASA had to destroy a rocket on launch because of a coding error; I think it was a comma where a semicolon should have been.)

j: In short, the sort of reason why S = 1 in a given case is not hard to see, save if you have an a priori commitment that makes it hard for you to accept this obvious, easily observed and quite testable — just see what perturbing the functional state enough to overwhelm error correcting redundancies or tolerances would do — fact of life. This is actually a commonplace.

k: So, we can now pull together the force of the Chi_500 expression:

i] If we find ourselves in a practical cosmos of 10^57 atoms — our solar system . . . check,

ii] where also, we see that something has an index of being highly contingent I, a measure of information-storing or carrying capacity,

iii] where we may provide a reasonable value for this in bits,

iv] and as well, we can identify that the observed outcome is from a narrow, independently describable scope T within a much larger configuration space set by I, i.e. W.

v] then we may infer that E is, or is not, best explained on design, according as Ip*S is greater or less than the 500-bit threshold.
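A quick numerical cross-check of the figures used in 9 –> above (a sketch; the ~1 cm^3 straw volume is my assumption):

from math import log10

print(log10(2 ** 500))     # ~150.5, i.e. 2^500 ~ 3.27*10^150 configurations
print(log10(2 ** 1000))    # ~301.0, i.e. 2^1000 ~ 1.07*10^301
# Sample fraction: 10^102 states against ~10^150 cells is 10^(102 - 150) = 1 in 10^48.
side_m = (1e48 * 1e-6) ** (1 / 3)   # 10^48 straws of ~1 cm^3 each, packed as a cube
light_day_m = 2.59e13               # metres light travels in one day
print(side_m / light_day_m)         # ~3.9, of the order of the 3 1/2 light days cited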

10 –> So, we have a metric that is reasonable and is rooted in the same sort of considerations that ground the statistical form of the second law of thermodynamics.

11 –> Accordingly, we have good reason to see that claimed violations will consistently have the fate of perpetual motion machines: they may be plausible to the uninstructed, but predictably will consistently fail to deliver the claimed goods.

12 –> For instance, Genetic Algorithms consistently START from within a zone (“island”) of function T, where the so-called fitness function then allows for incremental improvements along a nice trend to some peak.

13 –> Similarly, something like the canali on Mars, had they been accurately portrayed, would indeed have been a demonstration of design. However, these were not actual pictures of the surface of Mars but drawings of what observers THOUGHT they saw. They were indeed designed, but they were an artifact of erroneous observations.

14 –> Latterly, the so-called Mars face, from the outset, was suspected to be best explained as an artifact of a low-resolution imaging system, and so a high resolution test was carried out, several times. The consistent result, is that the image is indeed an artifact. [Notice, since it was explicable on chance plus necessity, S was defaulted to 0.]

15 –> Mt Pinatubo is indeed complex, and one could do a sophisticated lidar mapping and radar sounding and seismic sounding to produce the sort of 3-D models routinely used here with our volcano, but the structured model of a mesh of nodes and arcs, is entirely within the reach of chance plus necessity, the default. There is no good reason to infer that we should move away from the default.

16 –> If there were good evidence on observation that chance and necessity on the gamut of our solar system could explain origin of the 10 – 100 million bits of info required to account for major body plans, dozens of times over, there would be no design theory movement. (Creationism would still exist, but that is because it works on different grounds.)

17 –> If there were good evidence on observation that chance and necessity on the gamut of our observed cosmos could account for the functionally specific complex organisation and associated information  for the origin of cell based life, there would be no design theory movement. (Creationism would still exist, but that is because it works on different grounds.)

18 –> But instead, we have excellent, empirically based reason to infer that the best explanation for the FSCO/I in body plans, including the first, is design.

13c –> So, we have a more sophisticated metric derived from Dembski’s Chi metric, that does much the same as the simple product metric, and is readily applied to actual biological cases.

14 –> The 1,000-bit information storage capacity threshold can be rationalised:

The number of possible configurations specified by 1,000 yes/no decisions, or 1,000 bits, is ~ 1.07 * 10^301; i.e. “roughly” 1 followed by 301 zeros. While, the ~ 10^80 atoms of the observed universe, changing state as fast as is reasonable [the Planck time, i.e. every 5.39 *10^-44 s], for its estimated lifespan — about fifty million times as long as the 13.7 billion years that are said to have elapsed since the big bang — would only come up to about 10^150 states. Since 10^301 is ten times the square of this number, if the whole universe were to be viewed as a search engine, working for its entire lifetime, it could not scan through as much as 1 in 10^150 of the possible configurations for just 1,000 bits. That is, astonishingly, our “search” rounds down very nicely to zero: effectively no “search.” [NB: 1,000 bits is routinely exceeded by the functionally specific information in relevant objects or features, but even so low a threshold is beyond the credible random search capacity of our cosmos, if it is not intelligently directed or constrained. That is, the pivotal issue is not incremental hill-climbing to optimal performance by natural selection among competing populations with already functional body forms. Such already begs the question of the need to first get to the shorelines of an island of specific function in the midst of an astronomically large sea of non-functional configurations; on forces of random chance plus blind mechanical necessity only. Cf. Abel on the Universal Plausibility Bound, here.]
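The arithmetic in 14 can be cross-checked in a few lines (a sketch; the seconds-per-year and Planck-time figures are standard values):

atoms = 1e80                              # atoms of the observed universe
lifespan_s = 50e6 * 13.7e9 * 3.156e7      # ~50 million x 13.7 BY, in seconds
planck_time = 5.39e-44                    # seconds per state change
states = atoms * (lifespan_s / planck_time)
print(f"{states:.1e}")                    # ~4.0e+148, of order 10^150
print(2 ** 1000 > 10 * 10 ** 300)         # True: 1.07*10^301 exceeds 10 * (10^150)^2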

15 –> So far, all of this will probably seem to be glorified common sense, and quite reasonable. So, why is the inference to design so controversial, and especially the explanatory filter?

[ Continued, here ]

Comments
Chirp, chirp, chirp, chirp . . . [HT: BA]
kairosfocus
January 25, 2011, 05:58 AM PDT
Update: I have added a key cite from Dr Dembski on the design process that shows what intelligence means, how designers use it, and why the result often reflects functionally specific complex organisation and information. HT: ENV. GEM of TKI
kairosfocus
January 21, 2011, 01:00 PM PDT
TM: Nope. Strictly, it is the number of baryons. (Counting as atoms, at the scale involved, is being generous and conservative.) Dark matter is several times the scale, but of mysterious composition, as it interacts gravitationally [how it was and is detected] but apparently not electromagnetically. The Bullet Cluster case looks like a galactic cluster collision, with the atomic matter interacting -- X-ray source [high energy interactions!] -- but the dark matter halos have evidently acted almost like ghosts, and so are displaced from the centre of X-ray emissions. GEM of TKI
kairosfocus
January 20, 2011, 11:07 PM PDT
I suppose it probably doesn't, since it's an estimate of hydrogen atoms anyway and most dark matter is supposed to be non-atomic.
tragic mishap
January 20, 2011, 05:44 PM PDT
Quick question: Does the commonly cited number of 10^80 atoms in the universe include dark matter?
tragic mishap
January 20, 2011, 05:08 PM PDT
Breaking: Astronomer Gaskell was awarded US$ 125,000 in a settlement of his discrimination suit against U Kentucky. [HT: UD Thread, follow developments there.] ENV's money shot comment:
What this case shows is that if you express any form of doubt about Darwin--even if you are totally open to a theistic evolution position--you might be labeled a "creationist" and face discrimination in the academy. What you actually believe doesn't matter. And whether your views are scientifically defensible doesn't matter. What matters are the perceptions and fears of your colleagues and conforming to a climate of intolerance towards Darwin-skeptics. Sadly, this culture of intolerance cost a highly qualified astronomer an excellent job at UK.
And that climate of hostility is being stirred up by the NCSE and its ilk. For shame! GEM of TKI
kairosfocus
January 19, 2011, 02:53 PM PDT
BA: Let's just say that when a journal with pretensions to sober scholarship hands over its introductory essay to the deputy director of an agit-prop agency demonstrably pursuing an atmosphere-poisoning false-accusation smear, i.e. NCSE, then devotes the whole journal more or less to the partyline talking points that duck the main issues, the state of scholarship is soberingly low; as in, it is not clear that the patient will make it out of the ICU. If the NCSE propagandists were really confident of their case at the level of a phil journal, what they would have done is invite a panel of ID and Creationism supporters to present their cases, in a context where there would be critiques from a Darwinist panel and responses to critiques, and likewise on the other side. Then, a panel of philosophers of science or, better yet, experienced jurists with knowledge of scientific matters would render their verdicts, with explanation. Instead, we saw a clear shoot-'em-in-the-back bushwhacking. Shameless. But, ENV has caught a very interesting slip-up by Kelly C. Smith:
"what we need to do is develop a single example of macroevolution which presents a representative sample of the evidence behind the construction of the series in a very simple, user-friendly fashion."
This, ten years after Wells' book Icons of Evolution blew up the ten leading icons of evolution over the past 150 years. What an eloquent inadvertent admission on the true state of the evidence on the claimed "fact" of evolution! (Cf. here and here on NSTA, NAS and NCSE on that claimed "fact." Also cf. the critical review here on OOL and here on origin of biodiversity.) Maybe several leading ID scholars should check out whether the journal has a circulation of any size in the UK -- in one case, 23 copies sold was enough -- and sue for libel there. But, on the merits, the evident failure to speak cogently to the substantial matters at stake tells us that science is rapidly losing its integrity, and huge swathes of philosophy -- the vaunted meta-discipline -- are happy to go along. Telling. GEM of TKI
kairosfocus
January 19, 2011, 10:32 AM PDT
kf, this hot-off-the-press article from ENV should really get your dander up: Condescension, Sneers, and Outright Misrepresentations of Intelligent Design Pass For Scholarship in Synthese -- http://www.evolutionnews.org/2011/01/condescension_sneers_and_outri042641.html
bornagain77
January 19, 2011, 09:15 AM PDT
Onlookers: You might find an interesting comparison at Climate Audit, on the balance of issues and rhetorical strategies. Especially, in light of my earlier remarks on the NCSE's endorsement and hosting of the ID = Creationism smear. Wagon-circling, distractive atmosphere-poisoning and posing on one's magisterial power do not address the issue on the merits. So, let us wait . . . GEM of TKI
kairosfocus
January 19, 2011, 09:06 AM PDT
BA: As in "chirp, chirp, chirp . . . " little cricket? Let's see if they can take time from the NCSE talking points about creationism in cheap tuxedos -- as already addressed -- to answer on the merits. Waiting . . . (If no cogent answer is forthcoming on the merits in any reasonable time [it's 2+ days on this post already . . . ], that strongly suggests that -- atmosphere-poisoning rhetorical distractors aside [cf. comment no. 9 above] -- the issue of the basic legitimacy of the inference to design as a properly scientific inference is over.) GEM of TKI
kairosfocus
January 19, 2011, 06:36 AM PDT
But kairos, we have already received our reply from 'the champions of Darwinism and evolutionary materialism' on the 'information problem'. It is best stated as thus: http://www.youtube.com/watch?v=CQFEY9RIRJA
bornagain77
January 19, 2011, 06:10 AM PDT
BA: The point that however you calculate them, the odds against spontaneous origin of life by chance and necessity triggering favourable chemistry in some still warm pond [or whatever scenario is being favoured today] is well made. Simply on DNA, the odds of getting to OBSERVED life by chance and necessity only are staggering. Then, look at how DNA is a functional component in a metabolic system that embeds a von Neumann self-replicating facility. Such a vNSR as an additional facility requires:
(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility; (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling: (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by (v) either: (1) a pre-existing reservoir of required parts and energy sources, or (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment. Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).] This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources. Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations. In short, outside such functionally specific -- thus, isolated -- information-rich hot (or, "target") zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
This is found in your friendly local "simple" -- what a misnomer -- living cell. That is why the explanatory filter so strongly points to the cell as a product of design. Then, to move up to accounting for major body plans, starting with say the Cambrian fossil-life revolution, we have to account for tens of millions of additional bits of bio-information and systems for embryological development. Dozens of times over. Again, the explanatory filter strongly implicates design. And, in reply, we meet only a priori materialism. That is why Prof. Philip Johnson's reply to Lewontin is so cutting:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Now, let us hear the response on the merits from the champions of Darwinism and evolutionary materialism. GEM of TKI
kairosfocus
January 19, 2011, 04:50 AM PDT
F/N: I have updated the OP point 6 to bring out the issues of the self-moved agent-designer implicit in e.g. my acting to compose and transmit a textual post in English. I particularly must draw attention to the following remarks by Plato on the self-moved agent, as he speaks in the voice of the Athenian Stranger in The Laws, Bk X:
____________________
>> Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second. [[ . . . .]
Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound -- how should we describe it?
Cle. You mean to ask whether we should call such a self-moving power life?
Ath. I do.
Cle. Certainly we should.
Ath. And when we see soul in anything, must we not do the same -- must we not admit that this is life? [[ . . . . ]
Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?
Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things? [he here moves to a form of cosmological argument]
Cle. Certainly not; the soul, as being the source of motion, has been most satisfactorily shown to be the oldest of all things. >>
____________________
This raises the point that to act, we need to be able to freely choose, then to move say our fingers to type; in this case, to thence compose that which has in it dFSCI. And the issue of freedom of action, to be self-moved or free enough in will to do so, comes to the fore. Thus, the issue of design, an empirical reality, raises serious questions about the source of designs, and onward -- on the worldviews plane [not the scientific one addressed in the OP] -- the source of the design and configuration of the world. GEM of TKI
kairosfocus
January 19, 2011, 04:36 AM PDT
kf, it seems some of these probabilities just can't even be properly fathomed by mere mortal minds:
"The probability for the chance of formation of the smallest, simplest form of living organism known is 1 in 10^340,000,000. This number is 10 to the 340 millionth power! The size of this figure is truly staggering since there is only supposed to be approximately 10^80 (10 to the 80th power) electrons in the whole universe!" (Professor Harold Morowitz, Energy Flow In Biology, pg. 99, biophysicist of George Mason University)
Probabilities Of Life - Don Johnson PhD. - 38 minute mark of video:
a typical functional protein - 1 part in 10^175
the required enzymes for life - 1 part in 10^40,000
a living self-replicating cell - 1 part in 10^340,000,000
http://www.vimeo.com/11706014
Dr. Morowitz did another probability calculation, working from the thermodynamic perspective with an already existing cell, and came up with this number:
DID LIFE START BY CHANCE? Excerpt: Molecular biophysicist Harold Morowitz (Yale University) calculated the odds of life beginning under natural conditions (spontaneous generation). He calculated that, if one were to take the simplest living cell and break every chemical bond within it, the odds that the cell would reassemble under ideal natural conditions (the best possible chemical environment) would be one chance in 10^100,000,000,000. You will probably have trouble imagining a number so large, so Hugh Ross provides us with the following example. If all the matter in the Universe was converted into building blocks of life, and if assembly of these building blocks were attempted once a microsecond for the entire age of the universe, then instead of the odds being 1 in 10^100,000,000,000, they would be 1 in 10^99,999,999,916. (Also of note: 1 with 100 billion zeros following would fill approx. 20,000 encyclopedias.)
http://members.tripod.com/~Black_J/chance.html
bornagain77
January 19, 2011, 04:23 AM PDT
BA: That is one way to try to imagine the size and significance of a stupendously large number. By comparison, there are credibly some 10^80 atoms in the observable universe, about 10^60 times the number in a grain of sand. The configuration space of just 1,000 bits [125 bytes, or about 20 words worth] is 1.07*10^301, or about 10^150 times the number of Planck-time quantum states of the observed cosmos across its thermodynamic lifespan, in turn about 50 million times the time often held to have elapsed since the big bang, some 13.7 BYA. That is why 1,000 bits worth of linguistically or algorithmically functional text is well beyond the credible reach of our observed cosmos, on undirected chance plus blind mechanical necessity. A search in the cosmic haystack on the scope of our cosmos would not even begin to be significant as a sample of the config space of just 1,000 bits. So, if you see 1,000 bits worth of digitally coded textual information, that is a message or is algorithmically and specifically functional, you can be highly confident that it is the product of intentional and intelligent configuration. That is, of design. And so, for instance, we can be confident that the DNA of living systems is designed, as the DNA starts at over 100,000 bits and goes up into billions, and is indisputably functional based on codes. And if you doubt that analysis, produce a case where dFSCI, of 1,000 or more bits -- remember, about 20 words of typical English will do -- has been credibly observed to have resulted from blind chance and mechanical necessity. If you do so, the design inference will collapse [and probably statistical thermodynamics with it]. Of course, no such case is presented, and we can be quite confident on the above analysis that none will be forthcoming. No wonder we see only silence or dismissive, distractive red herring and strawman tactics from the ever-present ID critics, once the above original post was put up. Silence can speak loudly indeed . . . The root reason this is disputed is that many are in the grips of a priori evolutionary materialism, as Lewontin so plainly documents:
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997.]
GEM of TKI
kairosfocus
January 19, 2011, 04:06 AM PDT
kairosfocus, since you are very good at math, and deal with extremely low probabilities all the time, I thought you might really appreciate this article trying to put 1 in 10^157 in context:
The Case for Jesus the Messiah — Incredible Prophecies that Prove God Exists, by Dr. John Ankerberg, Dr. John Weldon, and Dr. Walter Kaiser, Jr. Excerpt: But, of course, there are many more than eight prophecies. In another calculation Stoner used 48 prophecies (even though he could have used 456) and arrived at the extremely conservative estimate that the probability of 48 prophecies being fulfilled in one person is one in 10^157. How large is the number 10^157? 10^157 contains 157 zeros! Let us try to illustrate this number using electrons. Electrons are very small objects. They are smaller than atoms. It would take 2.5 times 10^15 of them, laid side by side, to make one inch. Even if we counted four electrons every second and counted day and night, it would still take us 19 million years just to count a line of electrons one inch long. But how many electrons would it take if we were dealing with 10^157 electrons? Imagine building a solid ball of electrons that would extend in all directions from the earth a length of 6 billion light years. The distance in miles of just one light year is 6.4 trillion miles. That would be a big ball! But not big enough to measure 10^157 electrons. In order to do that, you must take that big ball of electrons reaching the length of 6 billion light years long in all directions and multiply it by 6 x 10^28! How big is that? It’s the length of the space required to store trillions and trillions and trillions of the same gigantic balls and more. In fact, the space required to store all of these balls combined together would just start to “scratch the surface” of the number of electrons we would need to really accurately speak about 10^157. But assuming you have some idea of the number of electrons we are talking about, now imagine marking just one of those electrons in that huge number. Stir them all up. Then appoint one person to travel in a rocket for as long as he wants, anywhere he wants to go. Tell him to stop and segment a part of space, then take a high-powered microscope and find that one marked electron in that segment. What do you think his chances of being successful would be? It would be one in 10^157. Remember, this number represents the chance of only 48 prophecies coming true in one person (there are 456 total prophecies concerning Jesus).
http://www.johnankerberg.org/Articles/ATRJ/proof/ATRJ1103PDF/ATRJ1103-3.pdf
bornagain77
January 19, 2011, 03:28 AM PDT
F/N: This, from my always linked online note App 8 [HT: Frosty], may also be stimulating:
_______________________
>> 7 --> Further, as UD commenter Frosty pointed out in the linked UD thread, Leibnitz long ago highlighted one of the key challenges to an emergentist, property- and/or emanation-of-matter view of perception [and thence consciousness etc.], in The Monadology, 16 - 17. So, giving a little context to see what Leibnitz means by monads etc., and without endorsing, let us simply reflect on what is now probably a very unfamiliar way to look at things; noting his astonishing remarks on the analogy of the mill in no. 17:
1. The monad, of which we will speak here, is nothing else than a simple substance, which goes to make up compounds; by simple, we mean without parts. 2. There must be simple substances because there are compound substances; for the compound is nothing else than a collection or aggregatum of simple substances. 3. Now, where there are no constituent parts there is possible neither extension, nor form, nor divisibility. These monads are the true atoms [i.e. "indivisibles," the original meaning of a-tomos] of nature, and, in a word, the elements of things . . . . 6. We may say then, that the existence of monads can begin or end only all at once, that is to say, the monad can begin only through creation and end only through annihilation. Compounds, however, begin or end by parts . . . . 14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . . 16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . . 17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.
8 --> We may bring this up to date by making reference to more modern views of elements and atoms, through an example from chemistry. For instance, once we understand that ions may form and can pack themselves into a crystal, we can see how salts with their distinct physical and chemical properties emerge from atoms like Na and Cl, etc. per natural regularities (and, of course, how the compounds so formed may be destroyed by breaking apart their constituents!). However, the real issue evolutionary materialists face is how to get to mental properties that accurately and intelligibly address and bridge the external world and the inner world of ideas. This must be done relative to a worldview that accepts only physical components and must therefore arrive at everything else by composition of elementary material components and their interactions per the natural regularities and chance processes of our observed cosmos. Now, obviously, if the view is true, it will be possible; but if it is false, then it may overlook other possible elementary constituents of reality and their inner properties. Which is precisely what Leibniz was getting at. >> _______________________ Worth at least a thought or two.kairosfocus
January 18, 2011, 06:24 AM PDT
Dr [I, Ro]Bot: [At least, I assume that is what you are hinting at, pardon. :) .] I am thinking that a key part of the fear-factor is the meme that a DESIGN CENTRIC VIEW of science is a progress stopper. An examination of fig. A in the OP will show that it should not be. Once a design is identified in nature, that opens up reverse-engineering and forward-engineering our own way. And so, science becomes an exercise in reverse-engineering the world: identifying the principles used to build it and make it work, with the confidence that if 'twerdun once we can do it too. And in fact, an honest survey of the rise of modern science will show that this is the basic view of the pioneers over the past 350 - 450 or so years. (I note that even the much-despised "fundy" pretrib premil eschatology has a variant by Bloomfield, where our planet's story is phase I of, in effect, a cosmos development project. The redeemed humanity becomes a -- why not a network of such sites for in effect a federation of races, including the mysterious Angels? -- site for infinite expansion across the cosmos through endless ages. In that view, BTW, the New Jerusalem envisioned by John looks astonishingly like a large artificial satellite-port (probably of pyramidal design) as a gateway to the cosmos for our planet! In short, I am suggesting that we call a truce in the culture war and rethink a lot of hostile assumptions.) Okay, let us now look at a fascinating set of issues, step by step: 1: I answered my own question last night re does/did god create engines of creation. The answer is clearly yes because we were created (designed) but we are also capable of design (creating) – we are one of those engines. Yes, and that is pregnant with import. It is possible to create embodied, creative designing agents. And in our case, we are also procreative, so the possibility of self-replicating agents a la von Neumann's self-replicator arises. It may even be sensible to base such a self replicator -- we are now at the Drexler self-replicating automaton -- on a small modular, adaptable unit, the analogue of the living cell. And since carbon is a very handy element, why not do it with C-tech nanomachines in an artificial cell with a built-in storage bank? Cyborgs, in short, not just robots. But, robots would be interesting too. Just, I think the need for governance controls a la conscience will become vital. Asimov's 3 laws are relevant. [Think of what a robot suicide bomber with a built-in nuke weapon could do. Maybe, that it is hard to do such, is a safeguard to keep us from blowing ourselves up until we sort ourselves out on our dilemma of being finite, fallible, fallen and too often destructively ill-willed.] 2: from the perspective of computers and design the question is then – can we create designers? and more specifically, can we create designers that can create designers? (etc, etc) Providing we can crack the imaginative, self-directing supervisory controller problem. It is plainly doable, for we are like that, and to a limited extent so are higher animals. Notice, I am here explicitly putting us on a spectrum as autonomous, carbon technology robots that are self-replicating, through a sexual cycle that allows for genetic mix-match. This, explicitly, also includes the ability to observe, to infer, and respond actively to the world, taking in feedback on what works and what does not. The observer model in B/G note 1 is not confined to us:
I: [si] --> O, on W
Once we are able to observe and infer, we can construct world models and act on them, adjusting to increase success. Thus, we see how entities like that, on an internal education program, can become learning systems. (By contrast, we can speculate, the necessary being cosmos Architect would be already deeply knowledgeable, and would probably be able to access all space-time points through some sort of hyper-net. But, that is speculative, as already said.) 3: our advanced mental abilities necessitates us being separate, rather than just more advanced than, other animals – remember they were created as well and could have the same ‘ingredients’, just to a lesser degree or not fully enabled. As you will see from my remarks this morning to GP, we agree here. My point was that the analogy used by AIG was fundamentally misdirected. Using Tigerton, Dolphinstein and Chimpck physics would not make a difference to the point that the locus of capability in imaginative, powerfully abstract conceptual thought is mental, not bodily. And among humans -- with the same basic biological equipment and capacities -- only the knowledgeable and skilled need apply for computer engineering jobs. Further to this, we know that we know very little about the cosmos as a whole: the dark matter conundrum is decisive. Notice, we have observational evidence from the Bullet Cluster [and the train-wreck cluster] that dark matter acts gravitationally, but apparently NOT electromagnetically. Even the atomic nucleus is as much an electrical as a strong force system: the neutrons dilute down electrostatic repulsions and contribute to the short-range gluing action of the strong force. And, dark matter dwarfs atomic, electrically acting matter on the cosmic scale. So, why should it be suddenly so strange and derided to think that there is what we could call a mental substance capable of feeding into the brain-body system and interacting with it? Time for materialists to wheel and tun, and come again . . . 4: This takes us back to classic issues of philosophy and the problems of introspection, how do we tell if something else has a mind (consciousness) when we have no empirical measure as yet? (e.g. John Searle and the Chinese Room problem in AI) So, we should keep an open mind, and accept the testimony of the first facts of our experience and observation: we are minded, conscious, enconscienced creatures with FSCI-rich, intricately designed bodies, in a world that also seems to be -- SB would say: screams that it is -- designed. It is only the pall cast by a priori materialism that holds back the force of that fairly obvious and common-sense view. 5: Animals do solve problems and some even create objects. I don’t think we are able to say with any certainty yet that they don’t use some form of reasoning, or even employ symbolic abstractions in some primitive way. I agree. I am only pointing to the spectrum to emphasise that it is mindedness, not embodiment, that is the locus of designing ability. 6: your argument, that our mental abilities imply something extra because of separation, isn’t warranted. That is not my argument. My point is that embracing the higher animals as manifesting similar but more primitive forms of mental abilities and consciousness, and observing the diversity among human beings, we can see that it is not embodiment but mindedness that is the true locus of comparison for design. 
So, looking back to the Derek Smith model, we have a way in which we can see a lower order input-output MIMO [multiple input, multiple output] control loop -- with internal state, and orientation in the world fed back through a proprioception cybernetic loop -- with a supervising higher order controller. That cybernetic model is rich with possibilities. Once we have the loop, we can then integrate the higher order subsystem that senses and directs, without being locked up in the loop. Bring to bear the now more or less observational fact that we know there is at least one more class of substance in our cosmos, dark matter. Just for fun, put in the Penrose-Hameroff hypothesis of gravitonic, influencing and informational interaction at neural microtubule level. (Maybe it works another way, but this allows us to at least think and discuss in terms of what we can observe to date. Remember, there is more dark matter around than atomic matter.) And, voila, we have a viable crude model for a minded, embodied entity that has a mind that is not merely emergent from the body, supervening on it without causal efficacy. And, what if mind is another substance entirely, that still has the capabilities for informational interaction with the brain-body MIMO cybernetic entity? Just to be provocative, let us call that substance: SPIRIT or SOUL. Do we not see that it might be possible to integrate such with a brain-body loop through quantum level interfaces, along which qubits travel back and forth happily? Giving us massively parallel processing power. And, in the context of somehow being self-conscious and self-directing [taken as plausible facts of introspection of conscious being . . . on the Feyerabend principle that if it looks fruitful, add it to the scientific toolbox, without locking into any hard and fast set of tools, techniques and principles that define all and only scientific methods], do we not now see that agent cause is a reasonable thing? ______________ At this point, we are deep into gedankenexperiment type speculations, but the SCIENTIFIC point is: if we do not re-open our imaginative space to think about possibilities and embrace credible and relevant facts, we cannot confidently infer to a truly reasonable best [albeit provisional] explanation. So, let us re-open our minds. GEM of TKI PS: I forgot, the 5th Imperium sci-fi series has also another class of less than virtuous conscious computing engines that captivate an entire race into a high tech Plato's Cave world that turns them into cosmic scale destructive monsters . . . in short, once self-directing machines are in our imaginative prospect, ethics is dead centre as a serious issue.kairosfocus
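For readers who think best in code, here is a minimal, purely illustrative Python sketch of the two-tier idea just described: an inner MIMO loop that servoes the state toward a setpoint, and a supervisory controller that watches from outside the loop and re-sets the goal. All names and numbers are invented for illustration; this is a toy, not Smith's own formulation.

class InnerLoop:
    # Lower-order loop: nudges each state variable toward the current setpoint.
    def __init__(self, state, gain=0.5):
        self.state, self.gain = state, gain

    def step(self, setpoint):
        self.state = [s + self.gain * (t - s) for s, t in zip(self.state, setpoint)]

class Supervisor:
    # Higher-order controller: observes performance and revises the goal,
    # without itself being locked up in the moment-to-moment loop.
    def __init__(self, goals):
        self.goals, self.i = goals, 0

    def oversee(self, state):
        goal = self.goals[self.i]
        if max(abs(t - s) for s, t in zip(state, goal)) < 0.01 and self.i < len(self.goals) - 1:
            self.i += 1          # current goal achieved: set a new one
        return self.goals[self.i]

loop, boss = InnerLoop([0.0, 0.0]), Supervisor([[1.0, 0.0], [1.0, 1.0]])
for _ in range(40):
    loop.step(boss.oversee(loop.state))
print(loop.state)                # ends near the final goal [1.0, 1.0]

The point of the sketch is architectural: the supervisor's job is to select and revise goals; the inner loop's job is to pursue them.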
January 18, 2011, 05:42 AM PDT
Onlookers (and Dr Bot): It will take a while to properly respond on points to Dr [I, Ro]Bot -- I have already set up my Safari panel in a parallel window for step by step reference. But in the meanwhile, my remarks to GP are a foretaste of where I will be going. So, please, enjoy the onward links as a window on a fascinating area of intellectual and technological history. Pardon the time to respond in a way that does justice to DrBot's thoughtful and positive contributions. One wishes that more UD threads would develop the way this one is developing. G PS: Dr Bot, do you have an early prototype of R Daneel hiding in your lab basement? If you have that, or anything of significant interest on the design and development of intelligent automata [and any notions of how consciousness can arise -- beyond that, if we are a matter-energy world and consciousness arose spontaneously once, it can do so again; cf. here for a sci-fi world that premises off that, the Dahak world and the notion of a galaxy-spanning imperium -- the trilogy by Weber shows a moon-size ship with a computer core that over 50,000 years becomes spontaneously conscious and becomes a pivotal character in the story; available in print and as ebook], why not tell us a bit of the story?kairosfocus
January 18, 2011, 04:09 AM PDT
Hi GP: Quite good thoughts as usual. FSCI, especially when it is digitally coded -- notice my addition overnight to point 10 of the original post [and the remarks in points i to k of b/g note 1 on the implications of the communication network as a system, once we have received, recognised and decoded a message: this is an inference to design in the face of the abstract possibility of "lucky noise"] -- is a pretty direct index of mind at work, at some level in the chain of causes. That is, sufficiently complex and meaningful clusters of symbols of language [and phonemes are as much symbols as are letters or ideograms and numerals], whether used to communicate or to provide data and give instructions for an algorithmic process, bespeak intentional, choosing, acting mind at work. And yes, the computer shows how we may automate the process, as the numerically controlled machine tool did before it, or, going back to C18, the Jacquard loom. Similarly, the cam-bar driven mechanical device -- going back to C18 [and beyond to antiquity] and used to make automatons -- is also a programmed entity, but the information there is analogue and non-verbal. (NB: That is a part motivation for my discussion here on how 2-d and 3-d networks of nodes, arcs and interfaces can be reduced to digitally coded FSCI. Indeed, Babbage's analytical engine, 1837 - 1871, could be seen as a digitalisation of the cam-bar type automaton, as a general, programmable calculating device, i.e. a computer. Unfortunately, even though the attempt drastically advanced machining technology and was apparently at least marginally feasible, wiki very properly laments that "funding and political support" on adequate scale were not there. The time for big, gov't funded science was not yet.) We might even profitably discuss how an algorithm could be set up to establish the physics of a life-habitable cosmos. Or even to set up a multiverse that scans the domain of possibilities in the neighbourhood of our sub-cosmos, in such a way that we get life-viable sub-cosmi. (I make the detour through the multiverse in anticipation of a rhetorical counter; I hold that on Occam's razor, in the absence of direct evidence of such a multiverse, we have no good reason to infer to a multiverse. A quasi-infinite multiplication without necessity is cut away by Occam on steroids. That is as opposed to the possibility that the designer/architect and builder of our cosmos might have good reasons to build other cosmi. In that sense, the biblical view traditional in our culture is a multiverse view, as, e.g., heaven is obviously seen therein as another world that seems to be able to be present to and intersect with space-time in our own.) I agree that consciousness is a very distinct part of our experience (as well as that of some higher animals, it seems), and that it is certainly not true of present computers and those -- digital or analogue [a cam-bar is a program! and, an analogue computer is a computer!] -- programmable automata we call robots. I add, that in our case, it is joined to a superlative degree of verbal-linguistic, logical-analytical and imaginative ability. We can literally create model-worlds in our heads, and envision what it would be to live in them -- BTW, a gateway to both the gedankenexperiment so beloved of Einstein, and today's scientific visualisation simulations. 
[The potential problem being when we lose the ability to distinguish such an imaginative world from the one we live in; hence also that collective, manipulated madness that Plato described in his Parable of the Cave, on false vs true enlightenment and the implications for not only epistemology and metaphysics but the socio-political sphere. Was that what the AZ shooter of sad recent events was thinking about with his conscious dreaming metaphor?] Since it is so mysterious, and in light of the Derek Smith model, I am not so sure that we will not be able to eventually find a way to trigger this. Our own existence -- including here the eloquent testimony of the dFSCI in our DNA -- shows that it is POSSIBLE to create a conscious, physically instantiated entity. Biological reproduction shows that it is possible for such entities to reproduce themselves, yielding future generations of such conscious creatures. How twerdun, I know not, but that would be a wonderful discovery for AI, if it ever can attain to that. The ultimate design, I would say: R Daneel Olivaw. (Though I rather doubt the need for positrons!) Certainly the Derek Smith model provides a general architecture for such an entity. GEM of TKIkairosfocus
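On the nodes-arcs-and-interfaces reduction mentioned above, a crude but concrete way to see how a wiring diagram becomes digitally coded information is to serialise it canonically and count the bits of the description. A minimal Python sketch; the netlist and the 7-bits-per-character encoding are invented for illustration, and a raw character count measures carrying capacity only, not a full specificity metric:

import json

# Hypothetical netlist: typed nodes, arcs as ordered pairs.
netlist = {
    "nodes": {"N1": "source", "N2": "amplifier", "N3": "filter", "N4": "load"},
    "arcs":  [["N1", "N2"], ["N2", "N3"], ["N3", "N4"]],
}

description = json.dumps(netlist, sort_keys=True)    # one canonical string
bits = len(description) * 7                          # 7 bits per ASCII character

print(f"{len(description)} characters -> {bits} bits of description")

Even this four-node toy takes several hundred bits to describe; real functional wiring diagrams run far past the 500 - 1,000 bit threshold discussed in the original post.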
January 18, 2011, 03:26 AM PDT
vjtorley: As usual, you raise very interesting points. I would like to add some personal comments about them to the very good work already done by kf. "So my question is: how would you explicate the difference between the way in which the laws of nature are designed, and the way in which FCSI is the product of design?" I would say that we have to distinguish between a designed algorithm which generates regular outputs, and its outputs. While the outputs, which exhibit some form of regularity, and are therefore compressible and explainable by a necessity mechanism (the algorithm), do not exhibit FSCI, the algorithm which generates them, if complex enough, does. IOWs, a computer, including its software, is certainly an example of a designed object, exhibiting a lot of FSCI. A computer's output, even if very complex, can anyway be explained by the computer which generates it (including all the input information). Even if the output were more complex than the computer itself, it could anyway be explained by the system which has, by necessity, generated it, and therefore its K complexity would at most be equal to the initial complexity of the system. That's, IMO, the fundamental limit of necessity: it cannot create new, truly original complex information. Obviously, a follower of strong AI would object that humans are computers too, and that therefore their outputs are the result of necessity. But that is simply false. Strong AI is simply the most stupid theory ever conceived. Because the difference is in a simple word: consciousness. Consciousness cannot be explained in terms of mechanisms and necessity. It is a completely different level of reality. And the amazing ability of us humans to generate FSCI is absolutely related to our being conscious intelligent free beings. That is obvious in our direct perceptions, but is also supported by the fundamental observation that true FSCI is never found in any non conscious system. So, just to go back to an old example: could a computer write Hamlet, or something similar? The answer is: no. Not without first having Hamlet as input, or as an oracle in its software. The reason is simple: Hamlet is a very complex bundle of meanings, feelings, purposes, and beauty. That is its true structure. Only a conscious intelligent being can have representations of meanings, feelings, purposes and beauty. Those concepts cannot even be defined without a reference to conscious representations. Therefore, a complex output whose intrinsic structure is fully dedicated to expressing those concepts in a rich and satisfying and unique form can never, never come out of any system which does not include a conscious, intelligent agent who can represent those states and then is able to express them through matter. So, my point is very clear: a computer will never become conscious, will never represent meanings and feelings and purposes, and therefore will never write Hamlet. IOWs, a computer will never generate new, truly original FSCI. So, to answer your original question: to speak of the laws of nature is a difficult task, because anyway it implies a regress to a "pre-observed universe" condition. It can be done, but it inevitably implies strong philosophical choices. That said, I could imagine the laws of nature as some algorithm which rules the manifested universe as its output (or at least the necessity part of it). 
I believe they are designed, but to affirm that in terms of the concept of FSCI is not a simple task, because we have really no definite idea of what those laws are, of how they work, and least of all of their complexity. The cosmological argument, in terms of the search space of the fundamental constants, is a very good argument, but IMO it still leaves many open problems. It is good, but not so good and purely empirical as the argument for design in biological beings.gpuccio
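gpuccio's remark that a necessity mechanism's output has K complexity bounded by the generating system can be given a hands-on illustration. Kolmogorov complexity itself is uncomputable, so compressed size is only an upper-bound proxy, but the contrast is instructive:

import random
import zlib

# A lawlike, algorithmic output: long, but generated by a tiny rule.
regular = ("AB" * 50_000).encode()

# A high-contingency string of the same length.
random.seed(0)
contingent = bytes(random.randrange(256) for _ in range(100_000))

for name, s in [("algorithmic output", regular), ("random string", contingent)]:
    print(f"{name}: {len(s):,} bytes -> {len(zlib.compress(s, 9)):,} bytes compressed")

The algorithmic output collapses to roughly the size of its generating rule; the contingent string barely compresses at all.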
January 18, 2011, 01:42 AM PDT
KF, briefly because I have a busy day and probably won't get a chance to participate again for a few ... I realised I answered my own question last night re does/did god create engines of creation. The answer is clearly yes because we were created (designed) but we are also capable of design (creating) - we are one of those engines. I guess from the perspective of computers and design the question is then - can we create designers? and more specifically, can we create designers that can create designers? (etc, etc) To re-phrase - can we 'put in' to our creations this ingredient that allows us to function in the way we do? Relating this back to animals for a moment - I still don't think your answer regarding our advanced mental abilities necessitates us being separate, rather than just more advanced than, other animals - remember they were created as well and could have the same 'ingredients', just to a lesser degree or not fully enabled. Let's go back a moment:
the spectrum’s extremes between typical animals and people shows that mere embodiment and having brains does not explain the ability to do technically sophisticated reasoning
This implies that something else is required for us to reason (beyond just a physical brain and body) - this something may also be present in other animals! This takes us back to classic issues of philosophy and the problems of introspection: how do we tell if something else has a mind (consciousness) when we have no empirical measure as yet? (e.g. John Searle and the Chinese Room problem in AI) All we can do at the moment is talk to other people and infer that they have minds like us; we can't talk to animals in any meaningful way (yet) or, critically, experience their world - what it is truly like to be them. Animals do solve problems and some even create objects. I don't think we are able to say with any certainty yet that they don't use some form of reasoning, or even employ symbolic abstractions in some primitive way (given our tendency to anthropomorphise things in our world, it is also hard to study!). It is for this reason that your argument, that our mental abilities imply something extra because of separation, isn't warranted - indeed I don't think it is necessary for your wider argument!DrBot
January 18, 2011, 12:33 AM PDT
BA: useful points to ponder, as usual. Robert Marks in particular is one real bright boy. G PS: For those who were put off by the objection elsewhere that to excerpt and post significant materials is not an argument: it plainly is. I have learned some very useful things from BA's video scoops and quotes.kairosfocus
January 17, 2011, 05:29 PM PDT
F/N: Intelligent design, by itself as a scientific endeavour, has no commitment on the nature of intelligence. It is an inference to design as artifact, not to the intelligence behind the design. As a matter of philosophy, the cosmological and teleological issues on our contingent, fine tuned cosmos point to a necessary being who is architect of the cosmos. Since matter as we know it is contingent, that necessary being cannot be of material substance like that of our world.kairosfocus
January 17, 2011, 05:25 PM PDT
DrBot: I will briefly follow up before going off to help a son with his math HW, having had to help with the astronomy part of geography just a bit earlier. [Turns out he had an impossible triangle construction to do.] We are saying the same thing on spectrums, just with different emphases: my point hinges on the fact of the spectrum and that, whether we are within humans or going across species lines, it is not embodiment as such that is the crucial point of comparison but mental function. That -- as I pointed out too -- would hold in a world where the Newton analogue was a Tigeroid, the Einstein analogue a Delphinoid, and the Planck analogue a Chimpoid. When it comes to the designs, speaking of the PCs as designers is comparable to the people who talk to their PCs, pleading with Word to give them back their document. There ain't no smarts dere dat wazn't put in. PCs, as we both know, are passive, dynamically inert machines that are organised into complex combinations of parts that, under certain initial and onward intervening conditions, will carry out algorithms that we find useful. Smarts in, smarts out, and GIGO, too. As to the MPU designs, my understanding -- haven't been keeping in close touch recently, once we started going well beyond 1 mn transistors on a chip -- is that basically we have a hierarchy of modules, from gates up to subsystems, and we have algorithms for making sure the interconnexions are right. Yup, the masks used to be made by hand, and are so complex now they cannot be made by hand, and we probably have to interface as users at very high level [BTW, how are the micro stripline techniques keeping up with the RF wave effects . . . or are there heuristics that allow us to set rules of thumb to minimise the issue?], but there is nowt there that is not in principle already there in any automation. The PC is carrying out a detailed programme, but it has no common sense. We have to set it up right, and make sure it keeps right every step of the way. On engines of creation, some would view the fine tuning of the cosmos as an engine of creation. Chance variation is simply not capable of generating FSCI for the reasons laid out above: too much config space, too fast. Smart heuristics or beacons or maps would have to be built in. And that is why most evolutionists are theistic evolutionists when pressed hard enough. In education circles, the issue of embodiment is that the abstraction is based on the concrete. But, a PC should tell us that unless there is an abstraction engine there with capacity to do it in the first place, no go, sir. Napoleon once took a complaining officer to some mules and told him: these two have been with me on every campaign, but are still mules; no ability for reflective observation, inference and warrant -- as the b/g note discussed -- and no capacity. This capability is distinctly mental. I suspect this may lend some validity to Plato's idea of forms and the world of forms. I repeat, we simply do not know enough about the cosmos to be materialists yet; and evolutionary materialism is inherently self referentially absurd, on multiple grounds. Materialism is a quasi-religion living off promissory notes that it simply cannot redeem. GEM of TKIkairosfocus
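To put the "too much config space" point in numbers (the round figures here -- 10^80 atoms, Planck-time state changes, and a generous 10^25 s -- are the standard ones used in the design literature and the original post):

import math

for bits in (500, 1000):
    print(f"{bits} bits -> ~10^{bits * math.log10(2):.0f} configurations")

# Generous upper bound on states the observed cosmos could ever sample:
# ~10^80 atoms x ~10^45 Planck-time states/s x ~10^25 s ~ 10^150.
# A 500-bit space (~10^151) already exceeds that bound; a 1,000-bit
# space (~10^301) exceeds it by over 150 orders of magnitude.

In short, blind sampling cannot credibly traverse any appreciable fraction of such spaces.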
January 17, 2011, 04:58 PM PDT
Please excuse kf and Dr. Bot, but I have a few points that may be of interest: The following is an excellent recent interview with Dr. Marks beginning about 5 minutes into the podcast. Robert Marks explains exactly why Artificial Intelligence for computers has been a failure, as it was originally conceived, since it has been found that computers cannot create 'information' as 'minds' can... Robert J. Marks II interview with Tom Woodward, on "Darwin or Design?" http://podcast.den.liquidcompass.net/mgt/podcast/podcast.php?podcast_id=15595&encoder_id=153&event_i Here are a few of the papers Marks-Dembski have published: LIFE’S CONSERVATION LAW - William Dembski - Robert Marks - Pg. 13 Excerpt: Simulations such as Dawkins’s WEASEL, Adami’s AVIDA, Ray’s Tierra, and Schneider’s ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case. http://evoinfo.org/publications/lifes-conservation-law/ Conservation of Information in Computer Search (COI) - William A. Dembski - Robert J. Marks II - Dec. 2009 Excerpt: COI puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev. http://evoinfo.org/publications/bernoullis-principle-of-insufficient-reason/ Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism - Dembski - Marks - Dec. 2009 Excerpt: The effectiveness of a given algorithm can be measured by the active information introduced to the search. We illustrate this by identifying sources of active information in Avida, a software program designed to search for logic functions using nand gates. Avida uses stair step active information by rewarding logic functions using a smaller number of nands to construct functions requiring more. Removing stair steps deteriorates Avida’s performance while removing deleterious instructions improves it. http://evoinfo.org/publications/evolutionary-synthesis-of-nand-logic-avida/ Evolutionary Informatics - William Dembski & Robert Marks Excerpt: The principal theme of the lab’s research is teasing apart the respective roles of internally generated and externally applied information in the performance of evolutionary systems.,,, Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality's ability to produce the required information. Evolutionary informatics, while falling squarely within the information sciences, thus points to the need for an ultimate information source qua intelligent designer. http://evoinfo.org/ “Computers are no more able to create information than iPods are capable of creating music.” Robert Marks further note: The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. 
The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html further note: Though the authors of the 'Evolution of the Genus Homo' paper appear to be thoroughly mystified by the fossil record, they never seem to give up their blind faith in evolution despite the disparity they see first hand in the fossil record. In spite of their philosophical bias, I have to hand it to them for being fairly honest with the evidence though. I especially like how the authors draw out this following 'what it means to be human' distinction in their paper: "although Homo neanderthalensis had a large brain, it left no unequivocal evidence of the symbolic consciousness that makes our species unique." -- "Unusual though Homo sapiens may be morphologically, it is undoubtedly our remarkable cognitive qualities that most strikingly demarcate us from all other extant species. They are certainly what give us our strong subjective sense of being qualitatively different. And they are all ultimately traceable to our symbolic capacity. Human beings alone, it seems, mentally dissect the world into a multitude of discrete symbols, and combine and recombine those symbols in their minds to produce hypotheses of alternative possibilities. When exactly Homo sapiens acquired this unusual ability is the subject of debate." The authors of the paper try to find some evolutionary/materialistic reason for the extremely unique 'information capacity' of humans, but of course they never find a coherent reason. Indeed why should we ever consider a process, which is utterly incapable of ever generating any complex functional information at even the most foundational levels of molecular biology, to suddenly, magically, have the ability to generate our brain which can readily understand and generate functional information? A brain which has been repeatedly referred to as 'the Most Complex Structure in the Universe'? The authors never seem to consider the 'spiritual angle' for why we would have such a unique capacity for such abundant information processing. Genesis 3:8 And they (Adam and Eve) heard the voice of the LORD God walking in the garden in the cool of the day... John 1:1 In the beginning, the Word existed. The Word was with God, and the Word was God. The following video is far more direct in establishing the 'spiritual' link to man's ability to learn new information, in that it shows that the SAT (Scholastic Aptitude Test) scores for students showed a steady decline for seventeen years, from the top spot or near the top spot in the world, after the removal of prayer from the public classroom by the Supreme Court in 1963. Whereas the SAT scores for private Christian schools have consistently remained at the top, or near the top, spot in the world: The Real Reason American Education Has Slipped – David Barton – video http://www.metacafe.com/watch/4318930 The following video, which I've listed before, is very suggestive of a 'spiritual' link in man's ability to learn new information in that the video shows that almost every, if not every, founder of each discipline of modern science was a devout Christian: Christianity Gave Birth To Science - Dr. Henry Fritz Schaefer - video http://vimeo.com/16523153bornagain77
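For readers who want the quantity at stake in the Marks-Dembski papers above pinned down: with p the probability that a single blind query hits the target and q the success probability of the assisted search, they define endogenous information as -log2(p) and active information as I+ = log2(q/p), the problem-specific information the search's construction smuggles in. A minimal Python sketch with illustrative WEASEL-style numbers (the 28-character target and the assumed q are inventions for the example):

import math

def active_information(p_blind: float, q_assisted: float) -> float:
    # I+ = log2(q/p): bits credited to the design of the search itself.
    return math.log2(q_assisted / p_blind)

p = (1 / 27) ** 28      # blind query hitting a 28-char target over a 27-letter alphabet
q = 0.5                 # assumed success rate of the assisted search

print(f"endogenous information: {-math.log2(p):.1f} bits")
print(f"active information    : {active_information(p, q):.1f} bits")

Nearly all of the roughly 133 bits of endogenous information shows up as active information: on this accounting it was put in by the search's designer, not generated from scratch.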
January 17, 2011, 04:39 PM PDT
KF, thanks for the considered response; if you will forgive me I'll respond briefly on a few points (I'm up late, just finished writing a lecture on AI, now I need to sleep!) My phrasing 'designed by computers' was not implying that humans weren't the ultimate source of design, I was highlighting how we use computers (and other tech) to perform design tasks for us, in particular nowadays design tasks that are seemingly intractable when approached with a pen and paper (i.e. created by a person from the bottom up). In the case of computers they, and their design synthesis and optimisation algorithms, can generate vastly complex systems from much simpler specifications (provided by us) using mechanistic rules (designed by us). I agree that we are ultimately the designer but we can (or can we?) usefully use the word designer to refer to the automated system - we create the rules, the computer generates the design - and from analysis of this design we can infer that at some point in the causal chain an intelligence was involved. Going back to computer cores for the moment - people don't 'design' masks for etching microprocessors any more, we design systems that perform this task for us. The person who gave the lecture that highlighted this was the eminent computer scientist Professor Stephen Furber - he used the words (paraphrasing) 'back when we created the ARM processor we designed these by hand but now they are just so complex that they have to be designed by computers - it is too hard for us'. Is it valid or useful to use this language? This leads to a couple of interesting questions. The first, already asked in a way, is: can we conceive of an ultimate creator that, rather than designing everything from the bottom up, creates mechanisms (they don't have to be material in the sense of functioning in our material universe) to generate designs for them - can or does God use engines of creation? The second question from this is: is it possible, when we infer design, to start to examine if these hypothetical engines of creation were involved - is some of the complexity we see (and consist of) a result of, forgive the phrase, a 'sub designer'? Quickly, on this bit (because I'm rambling on more than I intended ;) )
the spectrum’s extremes between typical animals and people shows that mere embodiment and having brains does not explain the ability to do technically sophisticated reasoning
I'm just not convinced that this reasoning holds up on its own - why can't we just be further along a path - why does the extreme distance necessitate some extra (new and unique) stuff and not just some orders of magnitude more of the existing stuff? One thing I've learned from studying AI is that embodiment is (in some camps) regarded as critical for intelligence - it is needed to ground abstract symbols in the real world (but I'm not sure I agree yet!)DrBot
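DrBot's "we create the rules, the computer generates the design" can be made concrete with a toy hill-climber: the specification and the scoring rule are supplied by the programmer, and the machine mechanically grinds out a configuration that meets them. Everything here (the spec, the encoding, the mutation scheme) is invented for illustration:

import random

random.seed(1)

# Human-supplied part: a target behaviour, expressed as a scoring rule.
SPEC = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def score(design):
    # How many positions of the candidate match the specification.
    return sum(d == s for d, s in zip(design, SPEC))

# Mechanical part: blind single-bit mutation plus keep-if-not-worse.
design = [random.randint(0, 1) for _ in SPEC]
while score(design) < len(SPEC):
    trial = design[:]
    trial[random.randrange(len(trial))] ^= 1     # flip one bit
    if score(trial) >= score(design):
        design = trial

print(design == SPEC)    # True: the machine 'found' what the rules specified

Note how this cuts both ways in the thread: the loop itself is mindless, but on the active-information accounting discussed above, the design work lives in the supplied scoring rule.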
January 17, 2011, 04:15 PM PDT
Folks: Did a quick Google search. Found this criticism as the first to show up under the post title:
ID Foundations: The design inference, warrant and “the” scientific method Try using the explanatory filter on the Old Testament….in fact, you can’t because the data is obviously fiction, epic poetry. The irony is that a design sense (but not necessarily ‘intelligent’ design) is present in the Axial Age as a whole.
Sounds to me like the old red herring, led away to a Creationist strawman to be soaked in oil of ad hominems and ignited. On the technical point, the OT has in it hundreds and hundreds of pages of material, replete with dFSCI, leading to the inference that it is designed. Its text, whether in Hebrew and Aramaic, or Septuagint, or translations including English, is also directly known to be intelligently and intentionally configured to express a particular message. So, it is intelligently designed on inference and on observation; i.e. yet another case supporting the correctness of the design inference on signs such as FSCI. Of course the focus of the comment was a bit of mockery based on twisting the meaning of "intelligent," showing the objector's contempt. If that is what the objector has had to resort to, then the point in the original post above is well made. And, methinks there are many eminently qualified scholars who would beg to differ with the broad-brush dismissive evaluation of the Bible being given above. ________________ ADDED: Cf Hugenberger on the historicity of the OT esp. here Also, Gaskell's notes on Modern Astronomy, the Bible and Creation, here (for which he has been subjected to disgraceful "expulsion") For the NT and gospel, cf here, noting here on the general question of building a sound worldview _________________ But that is not a focus for this blog thread. GEM of TKI _____________ (F/N: Sir Darwiniana, do kindly look here to the take-down of the NCSE for its endorsing the ID = Creationism smear. Onlookers, see why first we had to do some rubble clearing?)kairosfocus
January 17, 2011, 03:54 PM PDT
BA 77: Thanks. Also, I found out the bug: I had manipulated the Adobe flash settings manager control page parameters a bit too aggressively, and set the caches on my PC to zero. When I thought on how I saw the problem in my no 2 browser, Safari, it dawned: it has to be in-common software for vids . . . Flash. And yes there is an Adobe page that will have in it your flash video downloads and look-ats etc. (What, you didn't know that? Until not so long ago, neither did I -- thought Flash lived completely on my machine, like a good little download. Better take a look and see what they have on you!) Anybody got a 3rd party Flash viewer that does not play games like that? GEM of TKIkairosfocus
January 17, 2011, 03:32 PM PDT
DrBot Thanks for a usefully stimulating comment, even if you had to snatch a few minutes from a busy day for it. Pardon a few notes: 1: If you take a look at the silicon microarchitecture of modern processors they are, apart from the orderly memory, a mess . . . . they are designed by computers. I disagree, a bit. Algorithms are developed and programs are written by programmers; validated, then run, creating a constellation of interacting modules to allow speculative, out of order instruction execution, pipelines of astonishing depth, parallel processing etc. The computers running these programs have no goals, they have no intentions, they simply execute instructions, closing and opening electrical circuits. The modern equivalent of Leibniz's mill wheels grinding away at one another mindlessly. And so, programs have no common sense: GIGO still obtains, unless someone was clever enough to write an error trap that catches the problem before it wreaks havoc -- like that reversed solidus in comment no 1. All of the intelligent, functional organisation came in from without. And though the rumours that Uncle Billy was seen buying up banana plantations over in central America are not true, some would suggest that that is not too far from the truth. (And nope, I am actually allergic to raw bananas: they have to be boiled, baked or the like before I can eat them.) But even so, very imperfect design -- including rather clumsy or convoluted text of posts -- is still design, and it is still detectable by the inference filter. 2: We use our intelligence to specify target behaviours and create processes to generate systems that meet those requirements but the resulting systems can be difficult for us to understand. 30+ years back, so was a hand-drawn complex circuit diagram for a storage tube cathode ray oscilloscope. 3: I was wondering (rather vaguely :) ) how the limited abilities of human designers link into the chain of reasoning that allows us to infer that we were designed, and if it has any implications at all? We are designers, and that is what is relevant. That we are finite and fallible does not change that fact -- just, it means that we have to spend a fairly long time debugging and troubleshooting in a multiple, interacting fault environment to get the complex system right. (The echo of remembered frustration and long hours tracking down yet another subtle bug is real.) 4: is it reasonable to infer that the creator might have created mechanisms to aid further creation – I realise I’m skimming dangerously close to the idea of theistic evolution here but the question is independent of evolutionary arguments – there are plenty of other mechanisms we can conceive of that can aid a designer! Actually, modern Young Earth Creationists often believe that the ability to vary to fit niches within more or less taxonomic families [cats, dogs, etc] is a part of the original design. Much of that, not so much by injection of additional info by mutations, but by isolation and extraction of specialised sub-pops from an original blended pop, like a good part of how dogs seem to have come from the original dog-wolf. The immune system seems to use targeted random search strategies. Robustness due to adaptability is a reasonable design goal, if you can get it. Hard for us to do just yet. 5: How do we know that other higher animals can’t reason and use abstract symbols in the same way, just not to our level. 
In other words, could it be that we are just (much) further along a continuum of cognitive abilities (rooted in embodied brains), rather than on the other side of a wall necessitated by something extra. You will note that my point was that there is a relative difference, and that the spectrum's extremes between typical animals and people show that mere embodiment and having brains do not explain the ability to do technically sophisticated reasoning, analyses and designs that rely on proficiency with abstract symbols and concepts. Fundamentally, if we were using Tigerton's laws of motion, or Dolphinstein's theory of relativity, or Chimpck's Quantum theory, it would not affect the basic point. It is not the mere brain, but the quality of mind and knowledge that count. Notice how, when I turned to computer engineering, I pointed out how most people who use the machines don't understand them in detail, nor are they able to design or develop them. That takes deep knowledge, high skill in analysis and synthesis, and years of experience in the disciplines. In short, the issue is not having a brain or a body, but having a mind. And, the Derek Smith model -- looks like I am going to have to do a mind-body issues and design theory foundation post at some point, DV -- points out ways in which a two-tier controller model allows for the brain-body subsystem to serve as an input-output, multiple input, multiple output smart control loop that is then supervised by a higher order controller. I suspect that higher order controller can in some cases be done in silicon and software, in others may be in different aspects of brains, and in yet others may be open to a mind of fundamentally different substance from atomic matter, that influences matter through some sort of quantum gateway. If you are inclined to doubt me on this, consider how dark matter swamps out the palpable atomic matter on the cosmic scale and seems to be non-electromagnetic, if the bullet galaxy cluster collision, with the dark halo separate from the X-ray emitting collision, is to be believed. Dark energy is similarly mysterious, and between the two, we are looking at about 4% of the detected cosmos that we know anything of serious substance about. Here is wiki on the Bullet cluster:
The most direct observational evidence to date for dark matter is in a system known as the Bullet Cluster. In most regions of the universe, dark matter and visible material are found together,[29] as expected because of their mutual gravitational attraction. In the Bullet Cluster, a collision between two galaxy clusters appears to have caused a separation of dark matter and baryonic matter. X-ray observations show that much of the baryonic matter (in the form of 10^7 – 10^8 Kelvin[30] gas, or plasma) in the system is concentrated in the center of the system. Electromagnetic interactions between passing gas particles caused them to slow down and settle near the point of impact. However, weak gravitational lensing observations of the same system show that much of the mass resides outside of the central region of baryonic gas. Because dark matter does not interact by electromagnetic forces, it would not have been slowed in the same way as the X-ray visible gas, so the dark matter components of the two clusters passed through each other without slowing down substantially. This accounts for the separation. Unlike the galactic rotation curves, this evidence for dark matter is independent of the details of Newtonian gravity, so it is held as direct evidence of the existence of dark matter.[30] Another galaxy cluster, known as the Train Wreck Cluster/Abell 520, seems to have its dark matter completely separated from both the galaxies and the gas in that cluster, which presents some problems for theoretical models.[31]
Frankly, we do not begin to know enough about the cosmos to be materialists with any confidence. The exotic stuff we are beginning to know about is already 25 times the familiar stuff we know! So, there is a lot of room for a real mind that has real interactions with a brain-body system. And, with our seeing -- ever since Ein-/Dolphin-stein -- that matter is interconvertible with energy, we know that matter is inherently contingent. Indeed, that is a part of our big bang model of origins of atomic matter. That, in the end, calls for the root cause of a material cosmos (even through a multiverse) being a necessary and non-matter being. One who, on the local isolation and precision fine tuning of our cosmos, is capable of specific, complex, subtle intelligent design. A mind before all matter, in short, and the ground of all matter. Sure, such a being is mysterious; but in a cosmos riddled with dark matter and dark energy, we should be getting used to that by now. ________________ But, any way, this stuff is all based on the basic inference to design. The prime question is, does the original post help us understand that inference and its warrant in a scientific context? If there are gaps or obscurities, where and what do you think should be done about them? GEM of TKIkairosfocus
January 17, 2011, 03:21 PM PDT