Uncommon Descent Serving The Intelligent Design Community

Functionally Specific, Complex Organisation and Associated Information (FSCO/I) is real and relevant


Over the past few months, I noticed objectors to design theory dismissing or studiously ignoring a simple — much simpler than a clock — macroscopic example of Functionally Specific, Complex Organisation and/or associated Information (FSCO/I) and its empirically observed source, the ABU-Garcia Ambassadeur 6500 C3 fishing reel:

[Figure: exploded-view diagram of the Abu Garcia Ambassadeur 6500 C3 fishing reel]

Yes, FSCO/I is real, and has a known cause.

{Added, Feb 6} It seems a few other clearly paradigmatic cases will help rivet the point, such as the organisation of a petroleum refinery:

[Figure: petroleum refinery block diagram illustrating FSCO/I in a process-flow system]

. . . or the wireframe view of a rifle ‘scope (which itself has many carefully arranged components):

[Figure: wireframe view of a rifle 'scope]

. . . or a calculator circuit:

[Figure: a calculator circuit]

. . . or the wireframe for a gear tooth (showing how complex and exactingly precise a gear is):

[Figure: wireframe of a spiral gear tooth]

And if you doubt its relevance to the world of cell-based life, I draw your attention to the code-based, ribosome-using protein synthesis process that is a commonplace of life forms:

[Figure: protein synthesis (HT: Wiki Media)]

Video:

[vimeo 31830891]

[U/D Mar 11:] Let's add, as a parallel to the oil refinery, an outline of the cellular metabolism network: a case of integrated complex chemical systems instantiated using molecular nanotech that leaves the refinery in the dust for elegance and sophistication . . . noting how protein synthesis as outlined above is just the tiny corner at top left below, showing DNA, mRNA and protein assembly using tRNA in the little ribosome dots:

[Figure: the cellular metabolism network]

Now, the peculiar thing is, the demonstration of the reality and relevance of FSCO/I was routinely, studiously ignored by objectors, and there were condescending or even apparently annoyed dismissals of my having made repeated reference to a fishing reel as a demonstrative example.

But, in a current thread Andre has brought the issue back into focus, as we can note from an exchange of comments:

Andre, #3: I have to ask our materialist friends…..

We have recently discovered a 3rd rotary motor [ –> after the Flagellum and the ATP Synthase Enzyme] that is used by cells for propulsion.

http://www.cell.com/current-bi…..%2901506-1

Please give me an honest answer: how on earth can you even believe or hang on to the hope that this system not only designed itself but built itself? This view is not in accordance with what we observe in the universe. I want to believe you that it can build and design itself but please show me how! I'm an engineer and I can promise you in my whole working life I have NEVER seen such a system come into existence on its own. If you have proof of this please share it with me so that I can also start believing in what you do!

Andre, 22: I see no attempt by anyone to answer my question…

How do molecular machines design and build themselves?

Anyone?

KF, 23: providing you mean the heavily endothermic information rich molecules and key-lock fitting components in the nanotech machines required for the living cell, they don’t, and especially, not in our observation. Nor do codes (languages) and algorithms (step by step procedures) assemble themselves out of molecular noise in warm salty ponds etc. In general, the notion that functionally specific complex organisation and associated information comes about by blind chance and mechanical necessity is without empirical warrant. But, institutionalised commitment to Lewontinian a priori evolutionary materialism has created the fixed notion in a great many minds that this “must” have happened and that to stop and question this is to abandon “Science.” So much the worse for the vera causa principle that in explaining a remote unobserved past of origins, there must be a requirement that we first observe the actual causes seen to produce such effects and use them in explanation. If that were done, the debates and contentions would be over as there is but one empirically grounded cause of FSCO/I; intelligently directed configuration, aka design

Andre, 24: On the money.

Piotr is an expert on linguistics, I wonder if he can tell us how the system of speech transmission, encoding and decoding could have evolved in a stepwise fashion.

Here is a simple example…..

http://4.bp.blogspot.com/_1VPL…..+Model.gif

[I insert:]

[Figure: transactional model of communication]

[And, to elaborate a bit on technical requisites:]

[Figure: a communication system]

I really want to know how or am I just being unreasonable again?

We need to go back to the fishing reel, with its story:

[youtube bpzh3faJkXk]

The closest we got to a reasonable response on the design-indicating implications of FSCO/I in fishing reels as a positive demonstration (with implications for other cases) is this, from AR:

It requires no effort at all to accept that the Abu Ambassadeur reel was designed and built by Swedes. My father had several examples. He worked for a rival company and was tasked with reverse-engineering the design with a view to developing a similar product. His company gave up on it. And I would be the first to suggest there are limits to our knowledge. We cannot see beyond the past light-cone of the Earth.

I think a better word that would lead to less confusion would be “purposeful” rather than “intelligent”. It better describes people, tool-using primates, beavers, bees and termites. The more important distinction should be made between material purposeful agents about which I cannot imagine we could disagree (aforesaid humans, other primates, etc) and immaterial agents for which we have no evidence or indicia (LOL) . . .

Now, it should be readily apparent . . . let's expand in step-by-step points of thought [u/d Feb 8] . . . that:

a –> intelligence is inherently purposeful, and

b –> the fishing reel is an example of how the purposeful, intelligent creativity involved in intelligently directed configuration — aka, design —

c –> leads to the productive working together of multiple, correct parts properly arranged to achieve function through their effective interaction, and

d –> leaves behind it certain empirically evident and in principle quantifiable signs. In particular,

e –> the specific arrangement of particular parts or facets in the sort of nodes-arcs pattern in the exploded view diagram above is chock full of quantifiable, function-constrained information. That is,

f –> we may identify a structured framework and list of yes/no questions required to bring us to the cluster of effective configurations in the abstract space of possible configurations of relevant parts.

g –> This involves specifying the parts, specifying their orientation, their location relative to other parts, coupling, and possibly an assembly process. Where,

h –> such a string of structured questions and answers is a specification in a description language, and yields a value of functionally specific information in binary digits, bits.

If this sounds strange, reflect on how AutoCAD and similar drawing programs represent designs.
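To make the "structured chain of yes/no questions" idea concrete, here is a minimal Python sketch. The part names and option counts are invented for illustration (they are not drawn from the actual 6500 C3 parts list); the point is simply that each answered question about part identity, orientation, location and coupling contributes log2(options) bits to a functionally constrained description.

```python
# Minimal sketch: a "wiring diagram" reduced to a structured description,
# where each field answers "which option?" and so costs log2(options) bits.
# Part names and option counts below are hypothetical, for illustration only.
import math

def field_bits(num_options: int) -> float:
    """Bits needed to answer 'which of num_options?' via yes/no questions."""
    return math.log2(num_options)

# Each part: (type options, orientation options, location options, coupling options)
parts = {
    "spool":       (200, 24, 50, 8),
    "main_gear":   (200, 24, 50, 8),
    "pinion":      (200, 24, 50, 8),
    "drag_washer": (200, 24, 50, 8),
}

total_bits = sum(field_bits(n) for fields in parts.values() for n in fields)
print(f"Description length for {len(parts)} parts: {total_bits:.1f} bits")
```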

This is directly linked to a well-known index of complexity, from Kolmogorov and Chaitin. As Wikipedia aptly summarises:

In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computability resources needed to specify the object . . . .  the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string’s size are not considered to be complex.

A useful way to picture this is to recognise from the above that the three-dimensional complexity and functionally specific organisation of something like the 6500 C3 reel may be reduced to a descriptive string. In the worst case (a random string), we can give some header contextual information and then simply reproduce the string. In other cases, we may be able to spot a pattern and do much better than that; e.g. with an orderly string like abab . . . n times, we can compress to a very short message that describes the order involved. In intermediate cases, such as all the codes we practically observe, there is some redundancy that yields a degree of compressibility.
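As a rough, hands-on illustration of the compressibility point, the sketch below uses zlib as a crude stand-in for the (uncomputable) Kolmogorov measure: an orderly abab . . . string of 10,000 characters collapses to a few dozen bytes, while a random string of the same length barely compresses at all. The numbers are indicative only, not a formal complexity measurement.

```python
# Compare compressed sizes of an orderly vs. a pseudo-random string.
# zlib output length is only a rough proxy for descriptive (Kolmogorov) complexity.
import random
import string
import zlib

random.seed(0)
n = 10_000
ordered = ("ab" * (n // 2)).encode()
scrambled = "".join(random.choice(string.ascii_lowercase) for _ in range(n)).encode()

for label, s in (("ordered", ordered), ("random", scrambled)):
    print(f"{label:8s}: {len(s)} chars -> {len(zlib.compress(s, 9))} bytes compressed")
```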

So, as Trevors and Abel were able to visualise a decade ago in one of the sleeping classic, peer-reviewed and published papers of design theory, we may distinguish random, ordered and functionally specific descriptive strings:

[Figure: ordered, random and functionally specific sequence complexity (OSC, RSC, FSC), per Trevors and Abel]

That is, we may see how islands of function emerge in an abstract space of possible sequences: compressibility trades off against order, and specific function in an algorithmic (or more broadly informational) context emerges. Where of course, functionality is readily observed in relevant cases: it works, or it fails, as any software debugger or hardware troubleshooter can tell you. Such islands may also be visualised in another way that allows us to see how this effect of sharp constraint on configurations in order to achieve interactive function enables us to detect the presence of design as best explanation of FSCO/I:

[Figure: defining CSI/FSCO/I via islands of function in a larger configuration space]

Obviously, as the just above infographic shows, beyond a certain level of complexity, the atomic and temporal resources of our solar system or the observed cosmos would be fruitlessly overwhelmed by the scope of the space of possibilities for descriptive strings, if the search for islands of function were to be carried out on the approach of blind chance and/or mechanical necessity. We therefore now arrive at a practical process for operationally detecting design on its empirical signs — one that is independent of debates over visibility or otherwise of designers (but requires us to be willing to accept that we exemplify capabilities and characteristics of designers without exhausting the list of in principle possible designers):

[Figure: the explanatory filter]

Further, we may introduce relevant cases and a quantification:

[Figure: FSCO/I facts and quantification]

That is, we may now introduce a metric model that summarises the above flowchart:

Chi_500 = I*S – 500, bits beyond the solar system search threshold . . . Eqn 1

What this tells us is that if we recognise a case of FSCO/I beyond 500 bits (or, if the observed cosmos is the more relevant scope, 1,000 bits), then the config space search challenge above becomes insurmountable for blind chance and mechanical necessity. The only empirically warranted, causally adequate explanation for such cases is design — intelligently directed configuration. And, as shown, this extends to specific cases in the world of life, extending a 2007 listing of cases of FSCO/I by Durston et al in the literature.
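For readers who want Eqn 1 worked through numerically, here is a small illustrative calculation. The 4.32 bits per amino acid figure is just log2(20), the raw per-position capacity mentioned later in this thread; the example cases and their S values are hypothetical stand-ins rather than measured, Durston-style functional fits.

```python
# Back-of-envelope rendering of Eqn 1: Chi_500 = I*S - 500 (bits beyond the
# solar-system search threshold). S is the functional-specificity dummy
# variable (1 if observed functionally specific, else 0). Example values are
# illustrative only.
import math

def chi_500(I_bits: float, S: int, threshold: float = 500.0) -> float:
    return I_bits * S - threshold

aa_bits = math.log2(20)  # ~4.32 bits raw capacity per amino acid position

cases = {
    "300-AA protein, functionally specific (S=1)": (300 * aa_bits, 1),
    "1,000-bit random string (S=0)":               (1000.0, 0),
    "short ordered string (S=1)":                  (16.0, 1),
}

for name, (I, S) in cases.items():
    print(f"{name}: Chi_500 = {chi_500(I, S):+.0f} bits")
```

A positive result flags a case beyond the solar-system threshold; a negative result simply means the filter defaults to chance and/or necessity, not that design is ruled out.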

To see how this works, we may try the thought exercise of turning our observed solar system into a set of 10^57 atoms regarded as observers, assigning to each a tray of 500 coins. Flip every 10^-14 s or so, and observe, doing so for 10^17 s, a reasonable lifespan for the observed cosmos:

[Figure: the solar system as 10^57 atoms, each observing a tray of 500 coins]

The resulting needle-in-haystack blind search challenge is comparable to a search that samples a one-straw-sized zone in a cubical haystack comparable in thickness to our galaxy. That is, we here apply a blind chance and mechanical necessity driven dynamic-stochastic search to a case of a general system model,

[Figure: general system process model]

. . . and find it to be practically insuperable.
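The arithmetic behind that verdict is easy to check. The sketch below multiplies out the OP's stipulated round numbers (10^57 atoms, one observation per 10^-14 s, 10^17 s of observing) and compares the total with the 2^500 configurations of a 500-coin tray; the sampled fraction comes out near one part in 10^62.5, which is the needle-in-haystack point in numerical form.

```python
# Order-of-magnitude check on the 500-coin thought exercise (stipulated round
# figures from the OP, not measurements).
from math import log10

atoms       = 1e57   # atoms in the solar system, taken as observers
flips_per_s = 1e14   # one observation per 10^-14 s
seconds     = 1e17   # duration of the exercise

observations = atoms * flips_per_s * seconds   # ~1e88 samples in total
space_log10  = 500 * log10(2)                  # log10 of 2^500, ~150.5

print(f"samples taken    ~ 10^{log10(observations):.1f}")
print(f"configurations   ~ 10^{space_log10:.1f}")
print(f"fraction sampled ~ 10^{log10(observations) - space_log10:.1f}")
```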

By contrast, intelligent designers routinely produce text strings of 72 ASCII characters in recognisable, context-responsive English and the like.

[U/D Feb 5th:] I forgot to add a note on the integration of a von Neumann self-replication facility, which requires a significant increment in FSCO/I and may be represented:

[Figure: a von Neumann kinematic self-replicator]

Following von Neumann generally, such a machine uses . . .

(i) an underlying storable code to record the required information to create not only
(a) the primary functional machine [[here, for a “clanking replicator” as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also
(b) the self-replicating facility; and, that
(c) can express step by step finite procedures for using the facility; 
 
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with   
 
(iii) a tape reader [called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:   
 
(iv) position-arm implementing machines with "tool tips" controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
 
(v) either:   
 
(1) a pre-existing reservoir of required parts and energy sources, or
   
(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
Here, Mignea's 2012 discussion [cf. slide show here and presentation here] of a minimal self-replicating cellular form will also be relevant, involving duplication and arrangement, then separation into daughter automata. This requires stored algorithmic procedures, descriptions sufficient to construct components, means to execute instructions, materials handling, controlled energy flows, waste disposal and more:

[Figure: Mignea's scheme for minimal self-replication]

This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.
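As a purely schematic restatement of that "each necessary, jointly sufficient" claim (it encodes the claim rather than demonstrating it), one can model the core facilities as a required set and confirm that dropping any single member falsifies the replication predicate. The component names below are shorthand labels for items (i) to (v) above, not terms from von Neumann or Mignea.

```python
# Toy model: replication requires every core facility to be present at once.
CORE = {
    "stored_code",       # (i) storable code / description
    "blueprint_tape",    # (ii) coded blueprint / tape record
    "tape_reader",       # (iii) constructor that reads and interprets the tape
    "effector_arms",     # (iv) position-arm implementing machines with tool tips
    "parts_and_energy",  # (v) parts reservoir or metabolic supply
}

def can_replicate(components: set) -> bool:
    return CORE.issubset(components)

full = set(CORE)
print("all parts present:", can_replicate(full))            # True
for part in sorted(CORE):
    print(f"missing {part}:", can_replicate(full - {part}))  # False in every case
```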

Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations. In short, outside such functionally specific — thus, isolated — information-rich hot (or, "target") zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.

And ever since Paley spoke of the thought exercise of a watch that replicated itself in the course of its movement, it has been pointed out that such a jump in FSCO/I points to yet higher, more perfect art as credible cause.

It bears noting, then, that the only actually observed source of FSCO/I is design.

That is, we see here the vera causa test in action, that when we set out to explain observed traces from the unobservable deep past of origins, we should apply in our explanations only such factors as we have observed to be causally adequate to such effects. The simple application of this principle to the FSCO/I in life forms immediately raises the question of design as causal explanation.

A good step to help us see why is to consult Leslie Orgel in a pivotal 1973 observation:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.

These vague idea can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).]  One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes.

[The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course,

a –> that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one that has led to the mutual ruin, documented by Shapiro and Orgel, of the metabolism-first and genes-first schools of thought, cf here.

b –> Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 – 5 in the just linked. Finally,

c –> Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W; for biological systems, functional islands. That raises serious questions for the origin of dozens of body plans, reasonably requiring some 10 – 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks a few years later, as already cited, now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]

. . . and J S Wicken in a 1979 remark:

Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’[[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

. . . then also this from Sir Fred Hoyle:

 Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure or order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ –> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

 Why then, the resistance to such an inference?

AR gives us a clue:

The more important distinction should be made between material purposeful agents about which I cannot imagine we could disagree (aforesaid humans, other primates, etc) and immaterial agents for which we have no evidence or indicia (LOL) . . .

That is, there is a perception that to make a design inference on the origin of life or of body plans, based on the observed cause of FSCO/I, is to abandon science for religious superstition. This is regardless of the strong insistence of design thinkers, from the inception of the school of thought as a movement, that inference to design in the world of life is inference to ART as causal process (in contrast to blind chance and mechanical necessity), as opposed to inference to the supernatural. And underneath lurks the problem of a priori imposed Lewontinian evolutionary materialism, as was notoriously stated in a review of Sagan's The Demon-Haunted World:

[Image: cover of Sagan's The Demon-Haunted World]

. . . the problem is to get them [hoi polloi] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . .

[From: "Billions and Billions of Demons," NYRB, January 9, 1997. In case you imagine this is "quote-mined" I suggest you read the fuller annotated cite here.]

A priori Evolutionary Materialism has been dressed up in the lab coat and many have thus been led to imagine that to draw an inference that just might open the door a crack to that barbaric Bronze Age sky-god myth — as they have been indoctrinated to think about God (in gross error, start here) — is to abandon science for chaos.

Philip Johnson’s reply, rebuttal and rebuke was well merited:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
[Figure: Tree of Life model, per Smithsonian Museum; note the root, OOL]

And so, our answer to AR must first reflect BA's: Craig Venter et al. positively demonstrate that intelligent design and/or modification of cell-based life forms is feasible, effective and an actual cause of observable information in life forms. To date, by contrast — after 150 years of trying — the observational base for bio-functional, complex, specific information beyond 500 – 1,000 bits originating by blind chance and mechanical necessity is ZERO.

So, straight induction trumps ideological speculation, per the vera causa test.

That is, at minimum, design sits at the explanatory table regarding origin of life and origin of body plans, as of inductive right.

And, we may add that by highlighting the case for the origin of the living cell, this applies from the root on up and should shift our evaluation of the reasonableness of design as an alternative for major, information-rich features of life-forms, including our own. Particularly as regards our being equipped for language.

Going beyond, we note that we observe intelligence in action, but have no good reason to confine it to embodied forms. Not least, because blindly mechanical, GIGO-limited computation such as in a ball and disk integrator:

[Figure: a Thomson ball-and-disk integrator]

. . . or a digital circuit based computer:

[Figure: a model of a digital microprocessor (MPU)]

. . . or even a neural network:

[Figure: a neural network is essentially a weighted-sum interconnected gate array; it is not an exception to the GIGO principle]

. . . is dynamic-stochastic system-based signal processing; it simply is not equal to insightful, self-aware, responsibly free rational contemplation, reasoning, warranting, knowing and linked imaginative creativity. Indeed, it is the gap between these two things that is responsible for the intractability of the so-called Hard Problem of Consciousness, as can be seen from, say, Carter's formulation, which insists on the reduction:

The term . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. It is contrasted with the “easy problems” of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they “persist even when the performance of all the relevant functions is explained.”

Notice, the embedded a priori materialism.

2350 years past, Plato spotlighted the fatal foundational flaw in his The Laws, Bk X, drawing an inference to cosmological design:

Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second.

[[ . . . .]

Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound – how should we describe it?

Cle. You mean to ask whether we should call such a self-moving power life?

Ath. I do.

Cle. Certainly we should. 

Ath. And when we see soul in anything, must we not do the same – must we not admit that this is life? [[ . . . . ]

Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?

Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things? 

Cle. Certainly not; the soul as being the source of motion, has been most satisfactorily shown to be the oldest of all things.

Ath. And is not that motion which is produced in another, by reason of another, but never has any self-moving power at all, being in truth the change of an inanimate body, to be reckoned second, or by any lower number which you may prefer?

Cle. Exactly.
Ath. Then we are right, and speak the most perfect and absolute truth, when we say that the soul is prior to the body, and that the body is second and comes afterwards, and is born to obey the soul, which is the ruler?
[ . . . . ]
Ath. If, my friend, we say that the whole path and movement of heaven, and of all that is therein, is by nature akin to the movement and revolution and calculation of mind, and proceeds by kindred laws, then, as is plain, we must say that the best soul takes care of the world and guides it along the good path. [[Plato here explicitly sets up an inference to design (by a good soul) from the intelligible order of the cosmos.]

In effect, the key problem is that in our time, many have become wedded to an ideology that attempts to get North by insistently heading due West.

Mission impossible.

Instead, let us let the chips lie where they fly as we carry out an inductive analysis.

Patently, FSCO/I is only known to come about by intelligently directed — thus purposeful — configuration. The islands of function in config spaces and needle in haystack search challenge easily explain why, on grounds remarkably similar to those that give the statistical underpinnings of the second law of thermodynamics.

Further, while we exemplify design and know that in our case intelligence is normally coupled to brain operation, we have no good reason to infer that it is merely a result of the blindly mechanical computation of the neural network substrates in our heads. Indeed, we have reason to believe that blind, GIGO-limited mechanisms driven by forces of chance and necessity are categorically different from our familiar responsible freedom. (And it is noteworthy that those who champion the materialist view often seek to undermine responsible freedom to think, reason, warrant, decide and act.)

To all such, we must contrast the frank declaration of evolutionary theorist J B S Haldane:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [[“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. (Highlight and emphases added.)]

And so, when we come to something like the origin of a fine-tuned cosmos fitted for C-chemistry, aqueous medium, code-and-algorithm-using, cell-based life, we should at least be willing to seriously consider Sir Fred Hoyle's point:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.  Emphasis added.]

As he also noted:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [[“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

That is, we should at minimum be willing to ponder seriously the possibility of creative mind beyond the cosmos, beyond matter, as root cause of what we see. If we are willing to allow FSCO/I to speak for itself as a reliable index of design. Even through a multiverse speculation.

For, as John Leslie classically noted:

One striking thing about the fine tuning is that a force strength or a particle mass often appears to require accurate tuning for several reasons at once. Look at electromagnetism. Electromagnetism seems to require tuning for there to be any clear-cut distinction between matter and radiation; for stars to burn neither too fast nor too slowly for life’s requirements; for protons to be stable; for complex chemistry to be possible; for chemical changes not to be extremely sluggish; and for carbon synthesis inside stars (carbon being quite probably crucial to life). Universes all obeying the same fundamental laws could still differ in the strengths of their physical forces, as was explained earlier, and random variations in electromagnetism from universe to universe might then ensure that it took on any particular strength sooner or later. Yet how could they possibly account for the fact that the same one strength satisfied many potentially conflicting requirements, each of them a requirement for impressively accurate tuning?

. . .  [.]  . . . the need for such explanations does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes. Claiming that our universe is ‘fine tuned for observers’, we base our claim on how life’s evolution would apparently have been rendered utterly impossible by comparatively minor alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned. Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly. [Our Place in the Cosmos, 1998 (courtesy Wayback Machine) Emphases added.]

In short, our observed cosmos sits at a locally deeply isolated, functionally specific, complex configuration of underlying physics and cosmology that enables the sort of life forms we see. That needs to be explained adequately, even as for a lone fly on a patch of wall swatted by a bullet.

And, if we are willing to consider it, that strongly points to a marksman with the right equipment.

Even, if that may be a mind beyond the material, inherently contingent cosmos we observe.

Even, if . . . END

Comments
Origenes, precisely: correct folding and fitting are required, leading to tight constraints on acceptable proteins in AA sequence space; indeed we should not overlook the fold domains that have just one or a few members, which are deeply isolated in the space of possible AA sequences. Nor should we forget the implications of prions, namely that more stable mis-folds seem to be possible, i.e. some proteins are metastable structures. KF

PS: I am puzzled why there was a comment in a two-year-old thread.

kairosfocus
March 7, 2017 at 06:26 PM PDT
Swami: Here, we use simulations to demonstrate that sequence alignments are a poor estimate of functional information. The mutual information of sequence alignments fantastically underestimates the true number of functional proteins.

Even if this were true, a new functional protein needs to fit perfectly in order to be truly functional — right amount, right location. Not any old function is 'functional' for the organism. There has to be a need for it; otherwise it is most likely to be detrimental.

Origenes
March 7, 2017 at 01:22 PM PDT
You will probably be interested in the scientific paper I published looking at Durston's FI argument: http://www.biorxiv.org/content/early/2017/03/06/114132 One important point to make is that Durston is a great guy. I appreciate his contributions, and this is not an attack on him personally. Feel free to find me on the BioLogos forums if you want to discuss more. Interesting stuff!

Prof. S. Joshua Swamidass
March 7, 2017 at 12:24 PM PDT
D, simple vs complex. G

kairosfocus
February 10, 2015 at 04:31 AM PDT
KF, FYI Posts #194 & #199 in this discussion thread that GP started as per your suggestion: https://uncommondescent.com/intelligent-design/antibody-affinity-maturation-as-an-engineering-process-and-other-things/#comment-547368 https://uncommondescent.com/intelligent-design/antibody-affinity-maturation-as-an-engineering-process-and-other-things/#comment-547393

Dionisio
February 10, 2015 at 04:20 AM PDT
5th, schizophrenic numbers are really unusual oddball irrationals that in early digits act like they are rational, then the other side swamps out. Sounds familiar! KF

kairosfocus
February 9, 2015 at 02:25 AM PDT
F/N: Of course, as a matter of overlooked science -- read here, the gap between an apple falling from a tree and the Moon swinging by in orbit* -- spotlighting the significance of FSCO/I (and especially digitally coded functionally specific complex information, dFSCI as GP stresses) is a crucial bridge between the world of technology and that of the nanotech of cell based life. That is, FSCO/I is the decisive scientific point in the whole controversy over inferring design on empirical signs. Which gives the above pattern of reactions by objectors quite a telling colour. KF *PS: That connexion between two everyday phenomena was the pivot on which Newton conceived his universal law of gravitation, cf discussion here with context on doing science: http://iose-gen.blogspot.com/2010/06/appendix-methods-and-tips-for-research.html PPS: Let's add a vid of protein synthesis to the OP . . .kairosfocus
February 9, 2015 at 01:11 AM PDT
F/N: Let me clip for convenient reference from the OP, citing Orgel, Wicken and Hoyle . . . the OP has highlights, onward links etc:
[The Orgel (1973), Wicken (1979) and Hoyle (1982) citations clipped here are reproduced verbatim in the OP above, ending with: "Why then, the resistance to such an inference?"]
That, of course, is the pivotal question. KF

kairosfocus
February 8, 2015 at 03:17 AM PDT
AS, kindly see the above OP -- which you need to actually address on the merits and specifics. Note, FSCO/I is a description (in effect, first put on the table by Orgel and Wicken in the 1970's to address defining characteristics observed with life forms that strongly parallel patterns familiar from the world of technology, cf W's "wiring diagram" in the OP) based on our participation in a world of systems where function is dependent on particular interaction of correct, correctly oriented and coupled parts that will work effectively only in a very few of possible configs of said parts; with trillions of cases all around you. Indeed, to object you just composed a string of glyphs that to function as text in English had to conform to many specific rules and conventions, with a very strict node-arc-node pattern: *-*-*- . . . -*. Your rhetorical dismissal that actually exemplifies FSCO/I in attempting to brush it aside, in the teeth of addressing identification by concrete example, description of same, application to the world of life and quantification i/l/o say Orgel's approach laid out in 1973, is an inadvertent example of the point just underscored. KFkairosfocus
February 8, 2015 at 03:08 AM PDT
Folks, notice the distinct lack of interest by objectors across the board (with one or two exceptions as seen above) in addressing the core issue of functionally specific, complex organisation and associated information, FSCO/I? And, its implications per empirically reliable, tested induction and the needle in haystack search challenge? That may be telling us something on the actual reality and relevance of FSCO/I, and where it points. KF

PS: Rest assured, if there were obvious, major holes in recognising the reality and relevance of FSCO/I, objectors would be piling on to pound away.

kairosfocus
February 8, 2015 at 02:28 AM PDT
Below is a link to an interesting article from the online Catholic Exchange by Dr Donald DeMarco, entitled The Half-Truths of Materialist Design. I particularly like the neat formulation at the end of the piece, pointing to the fundamental truth of the harmony of religion and science: 'The notion of intelligent design is the logical complement of scientific research.' Read more: http://www.ncregister.com/daily-news/the-half-truths-of-materialist-evolution/#ixzz3R4Bjvuks

Axel
February 7, 2015 at 05:48 AM PDT
What's really odd is that even while science has found at both ends of the spectrum (cosmic & microscopic/subatomic) exponentially increasing gaps in the materialist explanatory account, those materialists still insist that science has been on a trajectory of closing those gaps. Cosmology, biology and quantum physics have long since shredded the materialist explanation. Materialists are the true Victorian-age, anti-science cult today. One might as well be a flat-earther as to be a materialist in the light of evidence that's been around for decades and is growing every day.

William J Murray
February 7, 2015 at 05:06 AM PDT
5th, Thanks for thoughts. I feel a bit like someone trying to draw focussed attention to Newton's three laws of motion in a world where somehow, an empirically based ABC level inductive generalisation has suddenly become politically utterly incorrect. If you want to look more broadly at specification, try Dembski's paper here: http://designinference.com/documents/2005.06.Specification.pdf If you look here, you will see how I used a log reduction and a heuristic in the context of an exchange with May's MathGrrrl persona that had significant inputs from VJT and Dr Giem. I note that many objectors utterly refuse to acknowledge that a log probability is an information metric so that in an empirically oriented context one may transfer to the dual form and then look out there at more direct empirical metrics. Where, the underlying point is that had the world of life proceeded by blind chance and mechanical necessity, the patterns sampled across the history and diversity of life constitute a valid sampling of what is empirically possible. Where, just from the clustering of groups of functional proteins in AA sequence space it is evident that deeply isolated islands of function are very real. But however I got there (and earlier I used other metrics that do the job but do not tie in with Dembski's work), the expression above stands on its own merits. And it is glorified common sense. Information may be measured in several ways, starting with the premise that to store significant info one needs to have high contingency in an entity, leading to a configuration space of possibilities. So, one basic metric is the chain length of y/n q's to specify particular state. And yes this ties to Kolmogorov, but is even older than that cropping up in Shannon's famous 1948 paper. In a communicative context, that can lead to giving info values to essentially random states, i.e. noise can appear in a comms system, think snow and frying noise on the old fashioned analogue TV. So there is a premise of distinguishing signal and noise on characteristics that are empirically observable enough to define signal to noise ratio; a key quality metric. And as section A of my always linked note from comments I make at UD (click on my handle) points out, yes, there is thus a design inference in the heart of communications theory. It is recognising that and thinking about linked thermodynamics issues -- I am an applied physicist who worked with electronics and education before ending up in sustainability oriented policy matters . . . -- that led me to see the value of the design inference in the first place. So, in a context where many configs of parts are possible but only a relative few will carry out a specific function depending on particular configuration of parts per a wiring diagram, it makes sense to use an old modelling trick from economics to define being in/out of observable circumstances. (One use of the binary state dummy variable is to tell if you are in/out of a war in an Economic series.) So, We take the info metric value I and multiply by a dummy variable S tied to observable functionality based on wiring diagram organisation. Hence the 6500 C3 reel as example . . . BTW there is a whole family of 6500's out there, a case of tech evolution by design. And of course 3-d wiring diagrams reduce to descriptive strings. Discussion on strings is WLOG. Using the Trevors Abel sequence categories, a: a random sequence may have a high I value but will have S at default, 0. 
b: An orderly sequence driven by say crystallisation or simple polymerisation or the equivalent will have high specificity but with little contingency its I value will be low. c: A sequence that is functionally specific will have both S = 1 per observation, and I potentially quite high. The question is, at what threshold is one unlikely to achieve state c by blind chance and mechanical necessity. The answer is, to use the island of function in a large config space implication of wiring diagram interactive function to identify when needles will be too deeply isolated in large haystacks. 500 - 1,000 bits of complexity works for sol system to cosmos scale resources. And these are quite conservative. The first pivots on the idea that the atomic resources of the sol system searching at a fast chem rxn rate for a generous lifespan estimate will only be able to do the equivalent of plucking one straw from a cubical haystack comparable in thickness to our galaxy. And for 1,000 bits the comparable haystack would swallow up the observable cosmos. The 6500 C3 is a useful start point, giving a familiar relatively simple case that is accessible. It shows that FSCO/I is real, that wiring diagram organisation is real, and that he equivalent of trying to assemble a fishing reel by shaking up its parts in a bucket is not a reasonable approach. The question is whether such extends to the world of life. Orgel and Wicken answer yes, as the OP cites . . . notice not one objector above has tried to challenge that. A look at protein synthesis already gives several direct cases with emphasis on D/RNA and proteins. Where, as these are strings, we have direct applicability of sequence complexity and information metrics. RNA is a control tape for protein assembling ribosomes (much more complex cases in point!) and proteins must fold stably and do some work in or for the cell. A typical 300 AA protein has raw info capacity 4.32 bits per AA, and the study of patterns of variability in the world of life per Durston et al 2007, gives the sort of values reported in the OP. If you want a much cruder metric at OOL, try hydro-phil vs phob at 1 bit per AA and try 100 proteins as a simplistic number. That's 300 bits per protein x 100 proteins, or 30,000 bits. The message is clear: FSCO/I as a reasonable, empirically grounded needle in haystack search backed criterion for reasonably inferring design implies the living cell is a strong candidate for design in nature. Similar analyses of the physics of a cosmos suited for C-chemistry, aqueous medium cell based life strongly point to our cosmos sitting at a locally deeply isolated operating point; even through multiverse speculations -- cf. discussion and links in the OP. That is, we see cell based life in a cosmos evidently fine tuned for it. That points strongly to design of a cosmos in order to facilitate such life. Design sits at the table of scientific discussion from origin of cosmos to origin of life and body plans to our own origin as of right, not sufferance. But, that is extremely politically incorrect in our day of a priori imposed evolutionary materialist scientism. No wonder there are ever so many attempts to expel design from that table. I stand confident that in the end common sense rationality, inductive logic and the needle inn haystack challenge will prevail. Design just makes sense. If you doubt me, go take a searching look at the Abu-Garcia 6500 C3 Ambassadeur reel. KFkairosfocus
February 7, 2015 at 04:42 AM PDT
KF says, Of course, it is easy to miss that one is seeing digits of pi, as pi is transcendental and successive digits have no neat correlation to the value so that we get what looks pretty random. I say, I agree,.... a little digression if I may There seem to me to be 3 kinds of number sequences 1)Rational numbers that can be ascribed to normal halting algorithms 2)Irrational numbers that appear random until you know the specification they represent. 3) schizophrenic numbers that split the difference between the other two. http://en.wikipedia.org/wiki/Schizophrenic_number I think that sequences that represent schizophrenic numbers are the ones most likely cause us to infer design. The presence of patterns with no algorithmic explanation seems to draw us to that conclusion You say, If we are seeing the sort of resistance to a patent case of FSCO/I as is shown by a 6500 reel, that speaks volumes on what more abstract concepts of specification would face. I say, I have long since given up hope that the other side can be fair here. They just have too much at stake. I think we should explore this stuff with out them if necessary. I am happy if they understand what I saying. Agreeing with my conclusions is probably too much to ask. peace PS excellent thread Thank youfifthmonarchyman
February 6, 2015 at 04:28 PM PDT
Jerad, I have given concrete cases, stated that one first identifies function and that it is dependent on configurations of parts, and then noted that we look at perturbing the pattern to find the limits of clusters of functional configs. Case after case has been given to show that we are dealing with something based on empirical investigation, to make the matter directly relevant to the real-world key cases. You tossed out material on strings of numbers and I set it in context; you talked about radio signals and I put them in context.

At this point it looks uncommonly like you do not see because you are imposing something that blocks addressing what is patently and plainly there, starting with the paradigm case that you repeatedly dismiss: a fishing reel. That reel, and its dependence on specific configurations of correct parts in order to work, is a demonstration of the reality of FSCO/I. The specificity that you are making a mountain out of a molehill over is patent from what happens if you put it together badly or get sand in it: it won't work. The function is obvious. The informational complexity comes from the nodes-arcs and structured-string-of-y/n-questions approach, and just for the main gear we are well past 125 bytes.

Try to understand the simple and obvious case, then let that guide you on more complicated ones. If you cannot figure this out from a diagram, go pay a tackle shop a visit. I suspect a lot more is riding on this sort of approach than you realise, but at minimum, you have been a significant objector presence in and around UD for years and years. At the least, try to understand a simple case of what we have been talking about. KF

kairosfocus
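A crude way to see how a structured description gets turned into a bit count like the 125-byte (1,000 bit) figure for the main gear is sketched below; the 7-bits-per-character bookkeeping and the sample spec text are assumptions for illustration only, not the OP's exact nodes-arcs metric.

    def description_bits(description, bits_per_char=7):
        """Raw capacity of a structured parts/wiring description, treating each
        character as bits_per_char chained yes/no answers (assumed encoding;
        a crude stand-in for the structured string of y/n questions)."""
        return bits_per_char * len(description)

    # An illustrative (made-up) main-gear spec of roughly 150+ characters is
    # already past 1,000 bits, i.e. past 125 bytes:
    spec = ("main gear: brass, 38 teeth, module 0.6, bore 4.00 mm +/- 0.01, "
            "hub keyed to drive shaft, meshes with pinion at 5.2:1, "
            "anti-reverse pawl seat on rear face")
    print(description_bits(spec), "bits")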
February 6, 2015 at 11:01 AM PDT
5th, if we had a black box capable of outputting a stream of bits that at first seem random, and suddenly we see the string of bits for successive binary digits of pi keeping up beyond 500 - 1,000 digits in succession, we would have excellent reason to infer design and the transmission of an intelligent signal. Of course, it is easy to miss that one is seeing digits of pi, as pi is transcendental and successive digits have no neat correlation to the value, so that we get what looks pretty random. KF

PS: If we are seeing the sort of resistance to a patent case of FSCO/I as is shown by a 6500 reel, that speaks volumes on what more abstract concepts of specification would face.

kairosfocus
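A minimal sketch of the black-box test described here, assuming the mpmath library for arbitrary-precision pi; the detection rule (look for the first N binary digits of pi's fractional part as one contiguous run in the received stream) and the names are purely illustrative.

    from mpmath import mp

    def pi_fraction_bits(n_bits):
        """First n_bits binary digits of pi's fractional part, as a '0'/'1' string."""
        mp.prec = n_bits + 16              # working precision plus guard bits
        frac = mp.pi - 3                   # fractional part of pi (0.14159...)
        out = []
        for _ in range(n_bits):
            frac *= 2
            bit = 1 if frac >= 1 else 0
            out.append(str(bit))
            frac -= bit
        return "".join(out)

    def looks_like_pi_signal(bit_stream, threshold=500):
        """True if the stream contains the first `threshold` binary digits of pi
        as one contiguous run -- the black-box situation described above."""
        return pi_fraction_bits(threshold) in bit_stream

    # e.g. looks_like_pi_signal("0110" * 50 + pi_fraction_bits(512)) -> True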
February 6, 2015 at 10:48 AM PDT
Jerad, for a sequence of numbers in general, S defaults to 0. Where there is a context for the numbers that locks them to a cluster of functionally defined possible values -- e.g. the numbers are a bit string giving a system and configuration description -- then, on good reason and evidence, S becomes 1. The structured string of numbers then provides an information metric, and if this exceeds the relevant limit of 500 - 1,000 bits, the conclusion will be that the best explanation of said thing, or at least of the relevant aspect of it, is design.

In the case of RF reception, if we have something like SSB AM, phase modulation or the like, then we have a basis for inferring design; but absent the patterns that give us functional specificity, we default to 0. As one result, absent a relevant key to detect, say, a spread-spectrum signal, the default would be what it appears to be: noise.

First get your function, then see specificity based on particular configurations of parts, then note whether such complexity is a reasonable result of blind chance and mechanical necessity [--> what the threshold part does]; if it is beyond the threshold, we have FSCO/I, best explained on design. KF

kairosfocus
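The default-to-0 rule and the 500 - 1,000 bit limit described in this comment can be summarised in a small decision helper; this is a sketch under the stated assumptions, not the OP's equation itself.

    def design_inference(info_bits, S=0, threshold=500):
        """S defaults to 0 (benefit of the doubt: no functional specificity shown).
        Only when S is set to 1 on good observational grounds AND the information
        measure exceeds the threshold is design returned as best explanation."""
        if S == 1 and info_bits > threshold:
            return "design is the best explanation of this aspect"
        return "default: not attributed to design (chance/necessity not excluded)"

    # Broadband bursts with no identified carrier: S stays 0, so no inference,
    # regardless of apparent capacity.
    print(design_inference(info_bits=10_000, S=0))
    # A decoded SSB AM transmission carrying over 1 kbit of message: S = 1.
    print(design_inference(info_bits=1_200, S=1))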
February 6, 2015 at 10:44 AM PDT
Joe, of course S is a dummy variable that takes the value 1 for specification [with particular emphasis on the relevant type: functional specificity of interactive configurations to achieve said observable function], as is shown in the OP just above the equation, for quick reference. KF

kairosfocus
February 6, 2015 at 10:36 AM PDT
So S isn't Specification? Really? GEM of TKI, is your S specification?

Joe
February 6, 2015 at 07:20 AM PDT
Joe #146
OK so Jerad couldn’t understand Dembski’s paper. Par for the course, that
Show me where in Dr Dembski's paper he uses KF's S.

Jerad
February 6, 2015 at 07:09 AM PDT
OK, so Jerad couldn't understand Dembski's paper. Par for the course, that.

Joe
February 6, 2015 at 07:04 AM PDT
Joe #141
Jerad, Read Dembski’s 2005 paper on Specification.
I have read it. Dr Dembski does not have S in his formulation. I'm asking KF about his metric.

KF #142
the fishing reel is a clear paradigm example of FSCO/I, and the exploded view diagram above shows how function arises from specific arrangement and coupling of parts. Similarly, a text string *-*-*- . . . -*, that expresses a description or algorithm in coded form depends on placement and interaction of components.
Fine but I didn't ask about a fishing reel.
High specificity of function is seen from sensitivity to variability of parts and/or arrangements, whether natural to the situation or via injected perturbations. Fishing reels lose interchangeability if precision of parts slips a bit, and are very sensitive to orientation, placement, coupling and presence/absence of key parts. program code, apart from in places like comments, tends to be very sensitive to perturbation.
I asked how you would go about determining S for a sequence of numbers.
Where of course informationally a 3-d nodes-arcs pattern is reducible to a structured descriptive string. So, discussion on strings is WLOG. Extend by reasonably close family resemblance.
Is this pertinent to my question?
The question of borderline cases is very simple to address: if there is a reasonable doubt that the function under observation is configuration sensitive, the default is, regard it as not sensitive.
What is reasonable doubt? For example, biologists say there is quite reasonable doubt that DNA was designed, whereas you disagree, which is why I'm asking for your criteria when determining S. I'd like to know what kinds of analysis you would bring to bear.
In the expression, S = 0 is default and holds benefit of the doubt. An erroneous ruling due to doubt, not specific, is acceptable once there are responsible grounds. And, function is observable as is sensitivity to perturbation etc. So, not an issue.
Except that I still don't know what kind of analytic tools you would use in a given situation.
If say a SSB AM signal is detected from remote space and is from an unknown source, where we have the equivalent of over 1 kbit of information — a cosmic source — we may reasonably infer design. Which is exactly what would be headlined.
That's it, you'd just set S = 1 in that case? Why would that kind of signal be so indicative?
But if we have fairly broadband bursts with no observed definitive signal or carrier, there is no basis to infer functional specificity.
So, you're saying the narrowness of the frequencies observed is part of your criteria?

Jerad
February 6, 2015 at 06:30 AM PDT
F/N: Added a few exemplars of FSCO/I-rich systems, to help rivet the sheer empirical reality. KF

kairosfocus
February 6, 2015 at 04:09 AM PDT
KF said: High specificity of function is seen from sensitivity to variability of parts and/or arrangements.

I say: I think sensitivity to variability is the key to all specification, whether we are talking about function or not. I would say that specification is deeply related to lossless data compression and Irreducible Complexity.

Returning to number strings for just a second, look at this string: 3.1415.... Pi would be the specification/lossless compression. If even one digit were to change, we would have no lossless way to compress the string and S would default to zero.

Now look again at KF's fishing reel. If only a few parts were to change it would no longer function, so it could not be losslessly compressed as a functioning "mechanism" and S would default to zero. A specification that captured more of the essence of the reel would be something like "Ambassadeur 6500". This compression does not contradict the first one, "functioning mechanism", but only moves up a step in descriptive knowledge of the artifact. That is the Y axis I sometimes talk about. As we know more about an object, our specification becomes more stringent and sensitivity to variability increases, so that less complexity is required to infer design.

That is the way I see it anyway. Peace

fifthmonarchyman
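As a very rough proxy for the compression intuition in this comment, one could compare compressed and raw sizes with zlib; note that this only gauges statistical redundancy, so it cannot "see" a short specification such as "the digits of pi", and is offered as an illustration rather than a measure.

    import os
    import zlib

    def compression_ratio(data):
        """Compressed size / original size; lower means more statistically
        redundant. zlib is only a crude stand-in for the lossless-description
        idea above: non-repeating but specified strings (e.g. digits of pi)
        still look incompressible to it."""
        return len(zlib.compress(data, 9)) / max(len(data), 1)

    print(compression_ratio(b"ab" * 600))        # highly ordered: small ratio
    print(compression_ratio(os.urandom(1200)))   # random bytes: ratio near or above 1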
February 6, 2015 at 04:08 AM PDT
Jerad, the fishing reel is a clear paradigm example of FSCO/I, and the exploded view diagram above shows how function arises from specific arrangement and coupling of parts. Similarly, a text string *-*-*- . . . -* that expresses a description or algorithm in coded form depends on placement and interaction of components.

High specificity of function is seen from sensitivity to variability of parts and/or arrangements, whether natural to the situation or via injected perturbations. Fishing reels lose interchangeability if precision of parts slips a bit, and are very sensitive to orientation, placement, coupling and presence/absence of key parts. Program code, apart from in places like comments, tends to be very sensitive to perturbation.

Where of course, informationally, a 3-d nodes-arcs pattern is reducible to a structured descriptive string. So, discussion on strings is WLOG. Extend by reasonably close family resemblance.

The question of borderline cases is very simple to address: if there is a reasonable doubt that the function under observation is configuration sensitive, the default is to regard it as not sensitive. In the expression, S = 0 is the default and holds the benefit of the doubt. An erroneous ruling due to doubt, "not specific", is acceptable once there are responsible grounds. And function is observable, as is sensitivity to perturbation etc. So, not an issue.

If, say, an SSB AM signal is detected from remote space and is from an unknown source, where we have the equivalent of over 1 kbit of information -- a cosmic source -- we may reasonably infer design. Which is exactly what would be headlined. But if we have fairly broadband bursts with no observed definitive signal or carrier, there is no basis to infer functional specificity. KF

kairosfocus
February 6, 2015 at 03:33 AM PDT
Jerad, read Dembski's 2005 paper on Specification.

Joe
February 6, 2015 at 03:14 AM PDT
KF #135
it has already been noted — for years — that unless there is positive reason on good observational warrant regarding functional specificity, S retains its default i.e. 0. A false negative ruling is acceptable as the price for very high confidence in holding functionally specific.
Yes, I understand that. What I don't completely understand is what the criteria are for deciding that S can be changed to 1. What count as good, observationally warranted reasons? So I'm asking about some hard-to-decide situations. I'm not arguing about fishing reels. But I don't think I'm going to get any better answer than you've already given, so feel free to drop the topic.

Jerad
February 6, 2015 at 02:57 AM PDT
F/N: I find the non-engagement with the central facts, issues and concerns in the OP, especially by objectors, inadvertently highly revealing. In particular, there is need to acknowledge the simple fact that FSCO/I is real and may readily be both demonstrated empirically and reduced to a metric of information by use of the structured string of y/n descriptive questions and extensions, in one form or another. Likewise, there is need to face the implications of configuration-based interactive function with high specificity (which is observable by noting the effects of perturbation or variability of components and configs) -- islands of function in config spaces. Thence, beyond a reasonable complexity threshold, comes the intractability of the approach of blind chance and necessity driven needle-in-haystack search. KF

kairosfocus
February 6, 2015 at 01:18 AM PDT
F/N: Astute readers will observe that there has been no clear admission by objectors above that even so clear a case as a fishing reel exhibits functionally specific complex organisation and associated information. That is revealing as to what is at stake for them. KF

kairosfocus
February 5, 2015 at 03:04 PM PDT
Hey Piotr, thanks for the link; I had not read of this find before. I think the similarity to a flute would provide the S in KF's schema. The controversy seems to be over the probability of a carnivore producing a similar pattern.

Suppose we found a similar bone with 8 perfectly evenly spaced, in-line holes that was not consistent with the diatonic scale. I think I would still infer design despite not knowing the artifact's function. Doubling the holes would eliminate any question of carnivore gnawing, and the in-line, evenly spaced holes would provide a specification in my view.

Just my opinion. peace

fifthmonarchyman
February 5, 2015 at 02:55 PM PDT