
Reinstating the Explanatory Filter


In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection.

P.S. Congrats to Denyse O’Leary, whose Post-Darwinist blog tied for third in the science and technology category at the Canadian Blog Awards.

Comments
After reading some responses on other forums, it appears that the anti-IDists (1) do not understand the meaning of INFERENCE as it concerns science, and (2) do not understand that the science of today does not and cannot wait for what the future may or may not reveal.

Joseph
December 17, 2008 at 07:01 AM PDT
Hi Mike, I see your question. I first note that it is to some extent misdirected. For we are not interested in whether the onion's cells [including DNA, enzymes, etc.] show more evidence of FSCI than the carrot's, or the converse. Instead, the material point is that BOTH are well beyond the reasonable threshold for being reached by chance forces on the gamut of the observed universe across any reasonable estimate of its lifespan. What do I mean by that?
1 --> One can often easily enough estimate configuration spaces, and
2 --> can also reasonably identify that a function based on particular states within that space of possible configurations is prone to breakdown on perturbation of the relevant information.
3 --> These are the keys to identifying the search space and the relative size of the island of relevant function in that space.
4 --> FSCI is in the first instance based on finding a reasonable threshold of complexity [i.e. number of configs] that would exhaust the universe's search resources to get to islands of function of reasonable size.
5 --> For practical purposes, when . . .
6 --> config spaces require more than about 500 - 1,000 bits [the latter to take in generous islands of function that leave a lot of room for "climbing" up hills of performance from minimal to optimal by your favourite search algorithm . . .] and
7 --> function is vulnerable to perturbation of the information, THEN . . .
8 --> we are dealing with FSCI.
So, we have a reasonable lower bound for reliably inferring to directed rather than undirected contingency as being responsible for an observed configuration that functions in some context or other. (This is entirely similar to standard hypothesis testing techniques, which work off the principle that predominant clusters mean that small target zones are sufficiently unlikely to show up in reasonably sized samples that, if we see these results, we are entitled to infer to intent, not happenstance, as the most reasonable cause.)

In the case of living systems, the current lower bound on an independent life-form plausible as first life is a genome of about 300,000 - 500,000 G/C/A/T elements (or possibly the RNA equivalent). That is a config space based on 4-state elements, and at the lower end, 4^300,000 ~ 9.94 * 10^180,617. Both carrots and onions would be well beyond that threshold, and it is reasonable to deduce that the basic genome is explained by intelligence, not chance.

If you then want to factor in the elaborations to get to the body-plans and peculiarities of the carrot or the onion, you are simply getting into overkill. On evidence, basic body-plans will require 1's to 10's or even 100's of millions of additional DNA G/C/A/T elements. Even the difference between a carrot and an onion would be well beyond the 500 - 1,000 bit threshold. We would reasonably infer that that difference is due to directed contingency, by whatever mechanisms such a designer would use.

As to metrics of FSCI that give numerical values as opposed to threshold judgements, we note that FSCI is a sub-set of CSI, so the Dembski models and metrics for CSI would apply. For instance, in 2005 he modelled a metric [here using X for chi and p for phi]: X = -log2[10^120 · pS(T) · P(T|H)]. Thus we have a framework for supplying the table of CSI values, but to go beyond the threshold-type estimate to that is a far harder exercise, and it would not make any material difference.

For instance, post no. 100 is an apparent message that is responsive to the context of this thread, and has in it 403 ASCII characters. 128^403 ~ 1.61 * 10^849 is the number of cells in the config space for that length of text. I comfortably infer that this is message, not lucky noise, per FSCI, as 1,000 bits specifies about 10^301 states. Are you willing to challenge that design inference? On what grounds?

GEM of TKI

kairosfocus
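A rough Python sketch of the arithmetic above, for anyone who wants to check the figures; the helper names are invented for this note, and the inputs fed to the 2005-style metric at the end are illustrative placeholders rather than measured values:

    # Back-of-envelope check of the config-space figures cited above.
    # Assumes 4-state DNA elements, a 128-character ASCII alphabet, and the
    # 500 - 1,000 bit FSCI thresholds discussed in this thread.
    import math

    def log10_configs(states_per_element, length):
        """log10 of the number of configurations (avoids huge integers)."""
        return length * math.log10(states_per_element)

    print(log10_configs(4, 300_000))   # ~180,617.99 -> ~9.9 * 10^180,617
    print(log10_configs(128, 403))     # ~849.2      -> ~1.6 * 10^849
    print(1_000 * math.log10(2))       # ~301.0      -> 1,000 bits ~ 10^301 states

    # Dembski's 2005 metric, X = -log2[10^120 * pS(T) * P(T|H)], as a function.
    # The phi_s and p_t_given_h values below are placeholders; estimating them
    # for a real case is the hard part, as noted above.
    def chi_metric(phi_s, p_t_given_h, resources=10**120):
        return -math.log2(resources * phi_s * p_t_given_h)

    # A positive result (the 2005 paper uses chi > 1 as the cut-off) flags
    # specified complexity on this model.
    print(chi_metric(phi_s=10**20, p_t_given_h=2.0**-500))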
December 16, 2008 at 04:15 AM PDT
kairosfocus
That, we exemplify intelligent agents and demonstrate on a routine basis that we leave FSCI as characteristic traces of our intelligent designs
Apologies if this has been asked before, but do you have a list of objects and the FSCI contained within them? I'd be interested to see how the figures work out. Do onions have a lot of FSCI due to their unusual genome, for example? More than carrots?

MikeKratch
December 16, 2008 at 01:14 AM PDT
PS: The whole TMLO book by Thaxton et al is available here as a PDF, about 70 MB if memory serves.

kairosfocus
December 16, 2008 at 01:02 AM PDT
Gentlemen, following up on a few points:

1] Patrick at 95: Links. Excellent links! Thanks. I particularly like the remarks in Mere Creation that explored the contrast between crystals and biopolymer-based systems, with sidelights on Prigogine's work. [BTW, Thaxton et al's TMLO has a very good discussion of Prigogine's work in the online chapters 7 - 9.]

2] Ratzsch example: Event & aspect: tumbleweed tumbles through small hole in fence. EF look:
Contingent? Yes. (Also, various mechanical forces are at work: wind, interaction with ground, gravity, but that is not relevant to this aspect.)
Specified? Yes.
Complex in the info-storing sense? No. In the very low probability sense? No.
Verdict: Chance (+ necessity).
3] 97: People build their little logical boxes based upon preconceptions and attempt to forcefit/mangle everything into it. I don’t want to “build” such a box and call it reality, I want to know what our box called reality really is. Sadly apt. Science, at its best is an unfettered (but ethically and intellectually responsible) search for the truth about our world, in light of empirical evidence and logical/mathematical analysis. Too often, today, that is being censored in pursuit of the sort of politically correct materialistic agendas I cited from Lewontin at 86 above. In case some may be tempted to think that Lewontin is unrepresentative, I here excerpt from the US NAS's latest [2008] version of their pamphlet against "Creationism":
Definition of Science The use of evidence to construct testable explanations and predictions of natural phenomena, as well as the knowledge generated through this process. [US NAS, 2008]
that sounds fairly innocuous, until you see the immediately preceding context:
In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations . . .
Cue: red flashing lights . .. SOUND Effects: ERRMRR! EERMRR! ERRMRR! . . . SCREECH! Black-suited, lab - coated jackbooted (actually, penny loafers are more likely . . . ) "Polizei": "We're the thought police and we're here to help you!" On a more serious note did it ever occur to the NAS . . .
a --> that we do not only contrast natural/supernatural, but also natural/artificial (i.e. intelligent)?
b --> That, we exemplify intelligent agents and demonstrate on a routine basis that we leave FSCI as characteristic traces of our intelligent designs?
c --> That such empirical signs of design allow us to reasonably infer that where we see further instances, we can, on the same confident grounds on which we provisionally accept explanatory laws and chance models, accept that intelligent action is being detected?
d --> That inferring from the sign to the signified, then discussing who the possible candidates are, is a legitimate and empirically anchored, testable process?
e --> That a supernatural, intelligent, cosmos-generating agent is logically possible, and that such an agent might just leave behind signs of his action in the structures and operations of the cosmos? [And, in fact, many scientists of the founding era and up to today think, and have done their science in the context of accepting, that this is so; including classically Newton in the greatest scientific work of all time, Principia.]
f --> That when we join the fine-tuning of the observed cosmos for life as we observe it to the evident FSCI that pervades the structures of the cell up to the major body plans of life forms, it is not unreasonable to infer that a credible candidate for the author of life is the same author of the cosmos? (Indeed, at least as reasonable as any materialist system of thought.)
g --> That many scientists, past and present (including Nobel Prize winners), have successfully practised science in such a "thinking God's thoughts after him" [Boyle, if memory serves] paradigm, and have obviously not been ill equipped to so practise science?
h --> That re-opening the vista of scientific explanations to include and accept chance, necessity and intelligence is just that, an opening up to permit unfettered, uncensored, empirically controlled pursuit of the truth, not a closing down?
4] JT, 93: why isn’t an ordinary snowflake in nature complex and specified on the same basis. It seems clearly it is. This has already been answered, more than once. The issue is that we need specification and complexity in the same aspect of the object, event or process. That is what sets up the large config space and the narrow island of functionality. In the naturally occurring snowflake [not my suggested Langley mod for steganographic coding purposes]:
a --> the simple, tight, elegant specification of hexagonal crystalline structure is set by forces of polarisation and the geometry of the H2O molecule. [There are considerations that suggest this molecule looks like an elegant cosmological-level design in itself, as a key to life -- but that's another story.]
b --> this exhibits, by virtue of the dominant forces, low contingency, so it will not store information. [One could in principle store information in artfully placed defects, e.g. similar to a hologram, but then that is going to be a high contingency that could in future be directed, though naturally it is undirected.]
c --> in the case of e.g. dendritic star flakes, the dendrites show high contingency based on the peculiar circumstances of their formation, giving rise to the story that no two snowflakes are exactly alike. Plainly, high contingency, and high information storage potential. But we see no informational patterns, and so infer that the dendritic growths reflect chance acting.
d --> I proposed a technique for storing a bit-string around the star's perimeter using snowflakes, or more realistically computer-manipulated images thereof. The idea was that, like the prongs on a Yale-type lock's key, the dendrites would serve as a long-string coded pattern.
e --> Such would be directed contingency, and would function as a pass-code, i.e. an electronic or software-based key. [We could do a physical form of it, a six-prong update to the Yale lock . . .]
f --> Were that to be done, we would at once see that we specify function through a tight island of functionality in a very large configuration space. [BTW, I think that the typical Yale lock has about six pins, with three possible positions each. The number of configs is 3^6 = 729, multiplied by the number of slot and angle arrangements on the key's body. That's enough to be pretty secure in your house or car, but it would be a lot fewer than the hypothetical snowflake key or the more relevant DNA and protein cases! [Well, a car has a two-sided Yale key, or 531,441 basic tumbler positions; though typically they just do a symmetrical key (thus tumbling from 1/2 million to an island of less than 1,000). Lock picks allow thieves or locksmiths to "feel" and trigger the pins.]]
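As a side note, the lock arithmetic in point f above is easy to check; a short Python sketch, using the figures as stated in the comment:

    # Key configurations for the Yale-lock comparison: positions ** pins.
    def key_configs(pins, positions_per_pin):
        return positions_per_pin ** pins

    print(key_configs(6, 3))    # 729 for a six-pin, three-position house key
    print(key_configs(12, 3))   # 531,441 for the two-sided car-key case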
5] . . . some completely different ontological category of causation called "Intelligent Design" which some say doesn't even exist

Those who say so thereby fall immediately into self-referential absurdity and selective hyperskepticism, for they themselves are intelligent, are designers and have conscious minds. So, to then turn around and object to the implications of such empirically established phenomena reflects very sadly indeed on the current state of intellectual life in our civilisation at the hands of the evolutionary materialists. As has already been pointed out. Details here.
______________
At this stage, the ball is plainly in JT's court. G'day,

GEM of TKI

kairosfocus
December 16, 2008 at 01:00 AM PDT
Agreed. People build their little logical boxes based upon preconceptions and attempt to force-fit/mangle everything into them. I don't want to "build" such a box and call it reality; I want to know what our box called reality really is. Many of the recent arguments seem to be along these lines: "I do not like the results, so I am going to redefine the variables to get the results I desire."

Patrick
December 15, 2008 at 08:59 AM PDT
Patrick, what is occurring is an attempt to rationalize away reality.

tribune7
December 15, 2008 at 08:51 AM PDT
I was curious whether Dembski had ever commented on the snowflake argument. This is all I could find: Mere Creation. Page 12 of No Free Lunch also has a reference to crystals, but it's not on Google, although snowflake examples are. "Refutations" by Darwinists seem to typically consist of mangling the concepts to be whatever they want (aka strawmen). For example:
Ratzsch, Nature, Design, and Science. The example of a false positive produced by the EF given in this book (pp. 166-167) is a case of driving on a desert road whose left side was flanked by a long fence with a single small hole in it. A tumbleweed driven by wind happened to cross the road in front of Ratzsch's car and rolled precisely through the sole tiny hole. The event had an exceedingly small probability and was "specified" in Dembski's sense (exactly as a hit of a bull's-eye by an arrow in Dembski's favorite example). Dembski's EF leads to the conclusion that the event in question (tumbleweed rolling through the hole in the fence) was designed while it obviously was due to chance; this is a false positive.
Where's my "rolleyes" button?
Of course, ID would indicate the drawing to be.
And any digital string encoding the drawing of the log, as well, presuming the encoding method can be found.

Patrick
December 15, 2008 at 08:16 AM PDT
JT -- Say on a sheet of paper the bit 1 corresponds to black and 0 represents white. Say someone draws an ordinary snowflake on that paper. Now take each row of the paper and lay them out end to end so you have a million-bit-long string of digits.

The same thing would be true of a drawing of a rotted log. Are you saying that ID would indicate the log was designed? Of course, ID would indicate the drawing to be.

tribune7
December 15, 2008 at 07:54 AM PDT
OK KF, why isn't an ordinary snowflake in nature complex and specified on the same basis? It seems clear that it is. So if it's CSI, all we can conclude from that is that it's not the result of metaphysical randomness - that's all that the Design Inference can establish. The design inference cannot determine whether the snowflake was caused by A) laws or B) some completely different ontological category of causation called "Intelligent Design" which some say doesn't even exist. You have to decide that on your own. OK, I'll quit monopolizing this thread and see if I can figure out from KF's post and Jerry's what FCSI is all about.

JT
December 15, 2008 at 07:43 AM PDT
And for the record, I generally put "mind" in quotes when referring to the ID concept of it and don't use the term much at all, because of the potential for confusion.

JT
December 15, 2008 at 07:31 AM PDT
JT: Before locking off after doing some major downloads, I decided to come back by UD. Saw your 88.

1] Say someone draws an ordinary snowflake on that paper.

H'mm, seems a bit obvious, but we can look at it as a case of known origin, per gedankenexperiment.

2] Take each row of the paper and lay them out end to end so you have a million-bit long string of digits.

This gives us a 1 Mbit string, bearing a code based on the algorithm: snip at every so many bits, then align. Sort of like what I think was called the Caesar code -- wound up on a stick, as I recall. A 1 Mbit string is complex. Assuming we can "spot" a pattern, and thence see that there is a specifying algorithm, it will be recognisably specified. (Sort of like SETI.) Once we see that there is a functional pattern here -- a picture of a snowflake -- that will give us a basis for inferring that we have a complex string fulfilling a narrow target. Designed. If we cannot spot the pattern, we will infer complex but no evident functional pattern, so default to chance. [Though with so simple a case, there will be strong correlations from row to row, so the pattern will be easy enough to spot.] BTW, this is a simplified version of what is alleged to be going on in a recent twist on codes and ciphers: steganography. If one fails to spot the pattern, the EF will default to chance, per its deliberate bias, and will make in this case a false negative. (It is designed to be reliable on a positive ruling [by using so extreme a degree of threshold for ruling complexity], but will cheerfully accept being wrong on the negative ruling.)

3] Just say that a random snowflake lands on the paper

Maybe this requires either a giant snowflake or a very good CCD imaging element, so that the flake will block light on some pixels but not others. Then we convert row by row and use the resulting string as a transmitted string. This is rather like how, in C17 - 18, I gather colonial authorities in what would become the US sometimes would use a leaf as a design on paper money so that counterfeiting would be impossible. Again, what happens is that the correlations along the bit string will suggest that this is slices of a pic, like a raster scan. (My students in Jamaica loved to hear that term!) The ruling will, on that outcome, be: designed; and it would relate to the composition of the string, not the features of the snowflake - i.e. is a digital or old-fashioned chemical photograph designed, or a mere product of chance and necessity?

GEM of TKI

kairosfocus
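To make the raster-scan idea above concrete, here is a small Python sketch; the toy 8 x 8 grid is invented purely for illustration, and the row-similarity check is only a crude stand-in for the kind of correlation-spotting described:

    # Flatten a black/white grid (1 = black, 0 = white) row by row into one
    # long bit string, then look for the row-to-row correlations that would
    # suggest the string is a raster scan of an image rather than noise.
    image = [
        "00011000",
        "00111100",
        "01111110",
        "11111111",
        "11111111",
        "01111110",
        "00111100",
        "00011000",
    ]

    bit_string = "".join(image)          # rows laid end to end
    print(len(bit_string), bit_string)   # 64 bits in this toy case

    def adjacent_row_similarity(rows):
        """Average fraction of matching bits between each pair of adjacent rows."""
        scores = [sum(a == b for a, b in zip(r1, r2)) / len(r1)
                  for r1, r2 in zip(rows, rows[1:])]
        return sum(scores) / len(scores)

    print(adjacent_row_similarity(image))  # well above the ~0.5 expected from coin flips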
December 15, 2008 at 07:30 AM PDT
7] If an agent’s actions are not predicatable then his actions equates to RANDOMNESS. Onlookers, this is of course the precise problem that evo mat thinking lands in as it fruitlessly tries to account for the mind on the basis of chance + necessity acting on matter. It ends up in assigning messages to lucky noise acting on essentially mechanistic systems. Thus, JT needs to address the challenge of why he would take seriously the apparent message resulting from a pile of rocks sliding down a hillside and by astonishing luck coming up: “Welcome to Wales,” all on the border of Wales. But we are not locked up to such a view. For the first fact of our existence as intelligent creatures is that we are conscious and know that we reason, think decide and act in ways led by our minds, not random forces. Indeed, that is the assumption that underlies the exchange in this blog — i.e self-referential incoherence yet again lurks here for the materialist. For the record - you're entirely missing my point. I don't think the actions of a human being are random. I say that they are not random because they are potentially predictable, and they are predictable because humans operate according to laws, albeit the very very complex laws embodied in our phyiscal make up - our brains and so forth. My point was to say that a mind not determined by laws equates to randomness and so therefore such a view of mind is incoherent. I do not personally think a mind equates to randomness. I say this is what the ID view of mind equates to. Yes, its obvious that mind is not random, so that must mean the ID view is wrong So hopefully, that clears up that point.JT
December 15, 2008 at 07:20 AM PDT
I am not sure when the term FCSI first arose on this site, but about a year and a half ago, or maybe it was 2 1/2 years ago, we were going through a bi-monthly examination of just what CSI means, and having little success. You have poker or bridge hands, coin tosses, voting patterns, sculptures, writing and language, computer programs and DNA. What is the commonality between each one of these things? No one was able to provide a definition that would encompass all these things. It seems we could not get specific about specificity.

Then in one of the comments, bfast made the observation that specificity is relevant because the data specifies something. Bfast is a computer programmer, I believe, so the use of code to specify basic instructions to the hardware of a computer is a natural association, and Meyer had frequently made the association of language with CSI. So the distinction of FCSI from CSI became part of the thinking here, and kairosfocus was one of the people making it. However, I don't know if Dembski ever made the same distinction. If he did, then maybe someone has a reference.

FCSI is easy to understand and it makes the ID case very readily, while CSI, a broader and more vague concept, leads to the meaningless sniping against ID that we all witness. As I just said, the sniping is meaningless. Somehow they think that if they can discredit CSI as a scientific concept, they have won the day or won a major battle. This inane thinking is more of an indictment of them than they realize. They constantly need to win small battles to think their world view is correct when they are overwhelmed by the other data, which they dismiss by hand waving.

So we get the anti-ID crowd coming here and picking away at CSI while they rarely ever go after FCSI. I personally suggest we steer any assault against CSI, a vague concept, towards FCSI, a very concrete concept that describes what happens in life. Defending CSI, whatever it is, has not been fruitful and will not be till a clear definition of just what it is becomes available. No one should have to be conversant in obscure mathematics to understand it. If anyone disagrees, then I would be interested in just how they define specificity.

jerry
December 15, 2008 at 07:12 AM PDT
kairosfocus: At this point, I am not in debating mode, just trying to understand your point of view. So while still going over your most recent post, let me throw a scenario at you. Say on a sheet of paper the bit 1 corresponds to black and 0 represents white. Say someone draws an ordinary snowflake on that paper. Now take each row of the paper and lay them out end to end so you have a million-bit-long string of digits. It seems that string is complex and specified according to the design inference and could not have happened by chance. Right? ('Yes' or 'No' is OK.) Or just say that a random snowflake lands on the paper, and answer the same question.

JT
December 15, 2008 at 06:39 AM PDT
KF, good points about FSCI. I wish Dembski would make more use of the concept. With Patrick jogging my memory: patterns are specified (sorry Mark), but repetitive patterns are not complex, and crystals (snowflakes, stalagmites) are repetitive patterns, hence not complex; so I guess that is what I should have remembered about crystals being specifically addressed by Dembski. Something to consider: would ID be able to determine if 010101010101010101 repeated to 10^whatever was designed? No, although it very well could be. OTOH, if that code was found to have a function -- something unexpected and useful occurred when we ran it -- then I think all of us would infer design.

tribune7
December 15, 2008 at 06:30 AM PDT
Okay, a few remarks on points:

1] JT: when people bring up the snowflake example, it's not to imply that snowflakes are pretty much the same as life and that proves that the forces that created a snowflake can create life as well.

Mischaracterisation of the rebuttal. The evolutionary materialist claim, FIRST, is that the snowflake is produced by chance plus necessity, and that this pattern is also able to account for the origin and body-plan level biodiversity of life. In that context, evo mat advocates then raise the onward, even more specious objection:

2] "Here is something that everyone would agree is caused by natural laws. And yet the EF (or the design inference or whatever) would seem to imply that a snowflake is designed by what they call an 'intelligent agent'. This proves that the Design Inference is not reliable."

First, the EF has always focussed on objects, situations and aspects thereof that SIMULTANEOUSLY exhibit [a] complexity and [b] (often, functional) specification. The simultaneous side is important, as the point is that complexity implies a large information storage capacity. "Simple" and/or independent specifiability implies that the observed result is in a small target zone that is very hard to reach [practically, impossible; per search resource exhaustion] by chance. But we know that intelligent agents routinely achieve such outcomes, so when we see CSI (or its easier-to-understand subset FSCI) we infer to agents. Indeed, just by inferring that this post is not lucky noise, you are making such an inference. In the case of the snowflake, as I pointed out above at 73, the specification relates to the crystalline structure, and the complexity to the dendrites. Thus they do not constitute a case that has a single aspect that exhibits BOTH storage capacity and tight specification to a target zone in the resulting large config space:
HEX STRUCTURE: Law, so specificity but no room for information-storing high contingency; so, not complex.
DENDRITES: Complex but produced by effectively random circumstances -- undirected high contingency, i.e. complex but not simply specific or functional. Chance.
In short, the objection is based on a misunderstanding, or a misconstruing of what is being discussed.

3] DNA by contrast: In my appendix, I observe, for this very reason: DNA exhibits both information storage capacity AND is tightly functionally specified. Thus there is a strong contrast to the snowflake. Observe what happens just after the section you excerpted:
. . . . The tendency to wish to use the snowflake as a claimed counter-example alleged to undermine the coherence of the CSI concept thus plainly reflects a basic confusion between two associated but quite distinct features of this phenomenon: (a) external shape -- driven by random forces and yielding complexity [BTW, this is in theory possibly useful for encoding information, but it is probably impractical!]; and, (b) underlying hexagonal crystalline structure -- driven by mechanical forces and yielding simple, repetitive, predictable order. [This is not useful for encoding at all . . .] Of course, other kinds of naturally formed crystals reflect the same balance of forces and tend to have a simple basic structure with a potentially complex external shape, especially if we have an agglomeration of in effect "sub-crystals" in the overall observed structure. In short, a snowflake is fundamentally a crystal, not an aperiodic and functionally specified information-bearing structure serving as an integral component of an organised, complex information-processing system, such as DNA or protein macromolecules manifestly are.
4] you are saying that a snowflake may indeed be complex and specified

In fact, as the very snippet you excerpted shows, I am pointing out that in the dendritic case [the other types of snowflake have simple plate or columnar gross structures, thus my "may" . . .] there is a difference between the aspect that the complexity relates to and the one that the specification relates to. I think there is a concept gap at work, as is further brought out by your . . .

5] it seems clear to me that by FSCI you just mean anything associated with life, and are not able to get any more detailed than that.

Not at all. First, the term I have used is not original to me or to Dembski or to the design movement. It is a summary description of the substance of what was highlighted by OOL researchers by the 1970's. As Thaxton et al summarised in 1984 [a decade before Dembski]:
Yockey [7] and Wickens [5] develop the same distinction, that "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.
Save in the service of debate! More seriously, it should be plain that I have simply clustered the terms into a phrase: functionally specified complex information, FSCI. Plainly, this term relates to the sort of integrated multi-part function that bio-systems exemplify, and which is based on complex, specified information. Bio-systems "exemplify" -- they do not "exhaust." Indeed, FSCI is a characteristic feature of engineered systems, and even written text that functions as messages under a given code or language. And that is exactly the sort of illustrative example that was used in the 1980's, and which I excerpted in the immediate context of the cite, right after the famous 1973 Orgel quote:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . .
2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides).
3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.
This example, from its polymer context, comes from Walter Bradley, author no. 2 of The Mystery of Life's Origin, an acknowledged polymer expert. And he highlights that DNA is a message-bearing functional digital string of monomers, by contrast with the tangled "mess" that happens when we, say, polymerise amino acids at random, and again as opposed to the more orderly pattern shown by nylon. A crystal is of course a 3-d "polymer" structure that exhibits fantastic order. I trust that this will help clarify.

6] 83: A process determined entire by law can have EXTREMELY complex behavior and extremely difficult to predict behavior.

No "process" is "determined entire[ly] by law." As you will note from the above, I stated a general framework for system dynamics in no. 80 just above:
. . . if a given aspect of a situation or object is produced by law, it is inherently of low contingency: reliably, given Forces F conforming to laws L, and under boundary and intervening conditions C, then result R [up to some noise, N] will result. Cf. case of a falling heavy object.
I cited a simple case to illustrate the point, but the context for this was my background in systems modelling, whereby sets of differential equations show how change happens relative to initial and intervening conditions. Once conditions are the same, onward unfolding will be the same. [The problem with sensitive dependence on initial conditions is precisely that in these cases, through amplification of small differences, we cannot keep the conditions the same from one case to another. (In these cases, we see a higher law showing itself in a pattern: the strange attractor in phase space. That is, this underscores the point.)] And it is precisely the reliability of similarity of outcomes from case to case under similar conditions that is the signature of law. And it is precisely this point that leads to low contingency and to lack of information-storing power. In the case of proposed info systems that use chaotic systems to lead to divergent or convergent outcomes as required to create and detect messages, it is the contingency in the conditions that makes for the information storage capacity.

7] If an agent's actions are not predictable then his actions equate to RANDOMNESS.

Onlookers, this is of course the precise problem that evo mat thinking lands in as it fruitlessly tries to account for the mind on the basis of chance + necessity acting on matter. It ends up in assigning messages to lucky noise acting on essentially mechanistic systems. Thus, JT needs to address the challenge of why he would take seriously the apparent message resulting from a pile of rocks sliding down a hillside and by astonishing luck coming up: "Welcome to Wales," all on the border of Wales. But we are not locked up to such a view. For the first fact of our existence as intelligent creatures is that we are conscious and know that we reason, think, decide and act in ways led by our minds, not random forces. Indeed, that is the assumption that underlies the exchange in this blog -- i.e. self-referential incoherence yet again lurks here for the materialist. On the contrary, JT: [a] we explain regularities by mechanical forces that are expressible in laws, [b] we explain UNDIRECTED CONTINGENCY by chance, and [c] we explain DIRECTED CONTINGENCY by design.

8] If you don't know what caused something you can't encode it, and thus can't gauge its probability.

First, the case in question is one where we do know the sort of forces that lead to the pattern of dendrites on a snowflake, so this is tangential and maybe distractive. Second, we can observe directly that a given aspect of a situation exhibits high and evidently undirected contingency, up to some distribution. So we can characterise chance without knowing the dynamics that give rise to it, apart from inferring from the distribution to the sort of model that gives rise to it. We do that all the time, even in comparative case studies and in control-treatment experiment designs. Third, we have no commitment to needing to know the universal decoder of information in any and all situations. Once we do recognise that something exhibits high contingency and is functional in a system, we have identified FSCI. From massive experience of the source of FSCI, we can then induce that an agent is at work in this case too, with high confidence.

9] a string's probability is proportional, not to its own length, but the length of the smallest program-input that could generate it.

Yes, and that is precisely an example of a potentially simple specification.
What happens is that WD is saying that MOST long strings are resistant to that sort of reduction, i.e. they are K-incompressible. In effect, to describe and regenerate them, you have to have prior knowledge of the actual string and in effect copy it directly or indirectly. That is, active information on a specified target. Now, such an algorithm -- even at the "hello world" level -- needs to be expressed in a coding system, to be stored in a storage medium, and to be physically instantiated through executing machinery. Factor these parts in, and the complexity goes right back up. And that is what we are dealing with when it comes to the origin of life or the body-plan level innovations to get to major forms of life. It is also what we are dealing with when it comes to our Collins universe-baking breadmaker.

10] in the ID conception, intelligence is not a mechanism, not something that can be deconstructed or explained, and there is no consensus that such a thing conceived like that is an explanation for anything.

Now, WHO is saying that, again? Is it not a self-aware, self-conscious, intelligent creature who knows that he acts into the empirical world based on intelligence; even to type what was just cited? In short, we do not need to understand what intelligence is or how it arises or how it acts, to KNOW that:
--> it is, that
--> it acts, and that
--> it is a key causal factor in many relevant situations. Indeed, we know that
--> it leaves behind certain reliable signs of its passage, such as FSCI, CSI and IC!
In short, this last is self-referentially incoherent and selectively hyper-skeptical to the point of absurdity, compounded by dismissive contempt. JT, you can do better than that, a lot better. As to the final point, I note simply by citing Lewontin in his 1997 review of Sagan's last book, on the role of evo mat in modern science; observing also that the NAS etc. now insist on precisely this same imposition of materialism in their attempted re-definition of science, in our day. Here is Lewontin:
. . . to put a correct view of the universe into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.
The materialist agenda could not be plainer than that, regardless of what theistic evolutionism may wish to say.

GEM of TKI

kairosfocus
December 15, 2008 at 04:17 AM PDT
KF, I understand now that the appendix was intended primarily to discuss FCSI, not CSI, so sorry about the mischaracterization.

JT
December 15, 2008 at 02:28 AM PDT
predicatable = predictable

JT
December 15, 2008 at 02:10 AM PDT
kairosfocus [80]: Just some quick responses I'll fire off to you at this point without a lot of planning:

First, if a given aspect of a situation or object is produced by law, it is inherently of low contingency...

I could not disagree with that assertion more vehemently. This idea you express stems from two common-sense type assumptions: 1) that laws must necessarily be trivial and simple, because the natural laws we happen to know about are simple (at least in comparison to the complexity of the genome). The second misguided idea is that because laws are deterministic there is no contingency. But there is contingency if you do not know what complex deterministic process is causing something. The contingency only disappears when you know what that process is. A process determined entirely by law can have EXTREMELY complex behavior and extremely difficult to predict behavior. I can't do better than that at the moment for a rebuttal, but I am telling you the idea you've expressed above is among the most misguided in all of ID thought.

As another rejoinder to you, I would submit to you that the ID concept of agency equates to randomness. If an agent's actions are determined by laws, even extremely complex laws, then his actions are potentially predictable. Reciprocally [we can turn anything into an adverb in English, right?], to the extent an agent's actions are potentially predictable we can derive a program, a set of laws characterizing how he is known to operate. OTOH, if an agent's actions are not predictable then his actions equate to RANDOMNESS.

I am sure you will understand that the longer the required specific bit string on our model snowflake, the harder it is to accidentally duplicate [or crack by brute force].

The length of that bitstring, if not based on something trivial like just the number of atoms, would be based on our knowledge of processes that could have caused it. So any such encoding scheme requires knowledge. If you don't know what caused something you can't encode it, and thus can't gauge its probability. This could be stated much better as well - but that's the gist of it.

Also, yes, I do understand, or provisionally accept, that (to use WmAD's reasonings) the percentage of compressible strings is incredibly minute, so observing one of a certain length means you could definitely rule out its occurring by a series of coin flips, for example. OTOH, there's another way of looking at probability wherein a string's probability is proportional, not to its own length, but to the length of the smallest program-input that could generate it. So in that scenario a string of 100 1's would be highly probable. And it does seem you do see that type of regularity in the natural world (but maybe not in coin flips). Repeating myself here, admittedly.

If not, why then do so many evo mat advocates foam at the mouth when the obvious point is put: there is a well-known, even routine, source of such complexity: intelligence.

Because in the ID conception, intelligence is not a mechanism, not something that can be deconstructed or explained, and there is no consensus that such a thing conceived like that is an explanation for anything.

I conclude (provisionally but confidently, as is characteristic of scientific investigations) that the evidence of a programmed observed cosmos points to: design of the cosmos. 1 --> The first point of our awareness of the world is that we are conscious, intelligent, designing creatures who act into the world to cause directed contingency.
I don't think even you or any other ID advocate knows specifically what you mean by "directed contingency".

--> We know that intelligence produces directed contingency, and that resulting FSCI is beyond the random search resources of our observed cosmos. In the case of life systems, VASTLY so; e.g. 300,000 DNA base pairs (lower end of estimates for credible 1st life) has a config space of about 9.94 * 10^180,617. 7 --> Now, given the raw power required to make a cosmos on the scale of the one we see, that sounds a lot like a Supreme Architect of the cosmos etc. That makes a lot of people very uncomfortable, and I detect that in the loaded terms you used above.

I didn't really imagine that the ID - evolution debate really had to do with the existence of God - after all, there are theistic evolutionists. The whole point of science is to explicate the laws, the process that caused something. Your idea that there are some things that laws just cannot do is ill-founded. Admittedly, whatever processes could account for us would equate to us and be extremely complex. But also there has to be a lot of randomness in there, or why is the universe as large as it is? I'm verging into an area that I've already discussed in other threads before, so will not rehash that whole discussion at this point.

JT
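A rough illustration of the compressibility point in this exchange, using zlib output length as a crude, computable stand-in for the "smallest program" idea (true Kolmogorov complexity is not computable, and the sample strings are invented for the comparison):

    # Ordered or repetitive strings have short descriptions; random ones do not.
    import random
    import zlib

    random.seed(0)
    samples = {
        "ordered (1000 ones)":     b"1" * 1000,
        "repeated (THE END x125)": b"THE END " * 125,
        "random bytes":            bytes(random.getrandbits(8) for _ in range(1000)),
    }

    for label, data in samples.items():
        print(f"{label:26s} raw={len(data):4d}  compressed={len(zlib.compress(data)):4d}")
    # The ordered and repeated samples compress to a handful of bytes;
    # the random sample does not compress at all.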
December 15, 2008 at 01:20 AM PDT
KF: We were out of sync there -- just saw your new post. It could be a while before I respond.

JT
December 15, 2008 at 12:11 AM PDT
[Have no idea why my entire post is blockquoted.] KF wrote [73]:
You may find my discussion and the onward links here helpful. Some nice snowflake pics, too
As a general introduction let me say that, when people bring up the snowflake example, it's not to imply that snowflakes are pretty much the same as life and that this proves that the forces that created a snowflake can create life as well. Rather, the implicit argument is, "Here is something that everyone would agree is caused by natural laws. And yet the EF (or the design inference or whatever) would seem to imply that a snowflake is designed by what they call an 'intelligent agent'. This proves that the Design Inference is not reliable." So the focus is on the specific arguments made by Dembski. Therefore, dwelling on the difference between a snowflake and DNA in a detailed and laborious way, which you do at times in this section, however enlightening it is in a general sense, is not pertinent to the actual debate, because everyone understands that a snowflake and life are not the same thing. The discussion is enlightening no doubt - just of questionable immediate relevance. If you can distill all that discussion down to a formula that compactly distinguishes life from nonlife and utilize such a formula in conclusively showing that natural laws (known or unknown) cannot account for life, that's another matter. But I don't see any evidence you've done that.
A snowflake may indeed be (a) complex in external shape [reflecting random conditions along its path of formation] and (b) orderly in underlying hexagonal symmetrical structure [reflecting the close-packing molecular forces at work], but it simply does not encode functionally specific information. Its form simply results from the point-by-point particular conditions in the atmosphere along its path as it takes shape under the impact of chance [micro-atmospheric conditions] + necessity [molecular packing forces].
In the above you are saying that a snowflake may indeed be complex and specified, which is what it is in Dembski's scheme as well. I know in response to me previously you said that a snowflake was not complex. (Note also that the order in a snowflake you allude to would most definitely qualify as a pattern for the purposes of specification in the Dembskian scheme.) The crucial factor for you however (or maybe you're quoting a source here) is that the snowflake does not encode "functionally specific" information. But if your stated goal in this section is to clarify what Dembski was talking about, he has nothing to say about functional specificity. No such terminology appears in his "Specification" paper. Maybe he engages the reader in some speculative discussion to this effect in some other book of his, but not in what he's presented of late as his definitive monograph on the subject of CSI. You alluded to "functionally specified complex information" and "FSCI" at the very beginning of this appendix, and if I understood your remarks correctly, you said that although your primary purpose in this appendix was to clarify the concept of CSI, FSCI was "a more easily identified and explained subset of the CSI concept". So I went looking for a definition for FSCI in your paper and found the following:
Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to practically solve problems faced by intelligent agents.
(In fact there is a hyperlink to the above from Defining "Functionally Specific, Complex Information" [FSCI].) This seemed a little vague to me, so I went looking for more specific references to the concept in your paper:
But also, in so solving their problems, intelligent agents may leave behind empirically evident signs of their activity; and -- as say archaeologists and detectives know -- functionally specific, complex information [FSCI] that would otherwise be utterly improbable, is one of these signs.
...In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources.
That's about it. Then there was the following hyperlink
"Definitionitis" vs. the case by case recognition of FSCI
...In short, we do not need to have a "super-definition" of functionally specified complex information and/or an associated super-algorithm in hand that can instantly recognise and decode any and all such ciphers, to act on a case by case basis once we have already identified a code.
... This is of course another common dismissive rhetorical tactic. Those who use it should consider whether we cannot properly study cases of life under the label "Biology," just because there is no such generally accepted definition of "life."
So from the above it seems clear to me that by FSCI you just mean anything associated with life, and are not able to get any more detailed than that. So in your previous quote, where you say that snowflakes are complex and specified but not functionally specified, your objection is apparently that they're not life, but we already knew that. Your closing remark in that quote is that,
[The snowflake's] form simply results from the point-by-point particular conditions in the atmosphere along its path as it takes shape under the impact of chance [micro-atmospheric conditions] + necessity [molecular packing forces]
But we already knew snowflakes are the result of chance and necessity, and not "designed". The argument is that the Design Inference would conclude they are designed and so is therefore invalid. Also, the fact that you would point out the obvious (that snowflakes are the result of chance and necessity) implies that it's simply a matter of definition for you that anything caused by chance and necessity could not be life.
Moreover, as has been pointed out, if one breaks a snowflake, one still has (smaller) ice crystals. But, if one breaks a bio-functional protein, one does not have smaller, similarly bio-functional proteins. That is, the complexity of the protein molecule as a whole is an integral part of its specific functionality.
Undeniable, but considerations regarding periodicity [which you discuss elsewhere - the above has to do with recursiveness] do not appear to be relevant in the design inference itself. Also, people can already see that among objects in the natural world, what we call life is the only thing that encodes information in the way that it does. It's also evident that life is not a matter of simple patterned or repeating complexity. Detailing all these differences between life and nonlife in a systematic way does not establish that a separate ontological category of "agency" is necessary to account for them. As far as the chance aspect of the discussion, Dembski's argument may possibly be relevant there (but I'm not sure). But actually, it would be sufficient to merely show that strict Darwinism largely equates to pure chance, because that is what they adamantly deny. They and everyone else understand that IF Darwinism largely equates to chance, then it's meaningless. You don't actually have to establish in a formal sense, I think, that an object of such and such complexity could not happen by pure chance. Or maybe you do, who knows. I'll abruptly end my discussion of your paper (or specifically that appendix you requested I read). You could probably have predicted my response. I could probably continue in a similar vein for a while, but just wanted to acknowledge I read it. Not saying the entire 120-page paper is worthless or something. Well, there is one more comment I need to make. You write:
2] By developing the concept of the universal probability bound, UPB, [Dembski] was able to stipulate a yardstick for sufficient complexity to be of interest, namely odds of 1 in 10^150. Thus, we can see that for a uniquely specified configuration, if it has in it significantly more than 500 bits of information storage capacity, it is maximally unlikely that it is a product of chance processes.
So evidently you do understand that the complexity that Dembski is talking about is simply the number of bits required to express a value.

JT
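For what it is worth, the bit equivalence behind that reading is a one-line conversion (standard log arithmetic, nothing more assumed):

    # Dembski's universal probability bound of 1 in 10^150, expressed in bits:
    import math
    print(150 * math.log2(10))   # ~498.3 bits, hence the ~500-bit rule of thumb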
December 15, 2008 at 12:00 AM PDT
Okay JT: A few notes:

1] Dembski on specification

First, I note that we are not dependent on WmAD or the specific document in 2005. He is providing a MODEL that gives a metric for CSI, not the origination of the concept. That is in part why I often refer back to Orgel et al in the 1970's and build on the more basic foundation of looking at the configuration space implied by information-storing capacities. Once those capacities cross 500 - 1,000 bits, and we have reasonably specific function (vulnerable to perturbations, and requiring precise coding as a result), we are looking at the sort of isolation-in-config-space issues I documented in my always linked online note. Within that context, WmAD has provided a useful model for certain cases, both in the older and in the newer formats; e.g. "K-compressibility" of a string's description relative to its own length is a useful metric of simplicity of specifiability. But "functions as such and such a particular component in a processing or info or control system" is just as valid. "Unfair[ness]" is irrelevant.

2] Snowflakes and complexity

First, if a given aspect of a situation or object is produced by law, it is inherently of low contingency: reliably, given Forces F conforming to laws L, and under boundary and intervening conditions C, then result R [up to some noise, N] will result. Cf. case of a falling heavy object. Low or no contingency means that that aspect cannot store information, i.e. its capacity is well below the threshold. For snowflakes, the forces connected to the H2O polarisation and geometry specify its crystallisation, leading to a very regular, low contingency pattern [up to the inclusion of the usual crystal defects]. As Orgel wrote in '73 -- originally describing the concept of CSI -- and as you seem to have overlooked:
Living organisms are distinguished by their specified complexity [i.e. as expressed in aperiodic, information-rich, specifically functional macromolecules -- GEM]. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]
This is a classic definition by archetypal example and family resemblance thereto. The aspect of snowflakes that IS complex is different, e.g. the dendritic growth patterns on a star-shaped flake. In effect we can view this as a bit pattern running around the perimeter, similar to how the length and position of prongs on a Yale-type lock's key specifies its function. Such a pattern may reflect undirected contingency (chance) or possibly could be manipulated to store information. [I wonder if some of those smart boys over in Langley have used altered snowflake patterns to store access codes?]

I trust that this gedankenexperiment example will make plain the differences highlighted by the EF: low contingency is associated with specification, of course, but not complexity. High contingency may be undirected (chance) or directed (design). I am sure you will understand that the longer the required specific bit string on our model snowflake, the harder it is to accidentally duplicate [or crack by brute force]. And the function would be very specific and simple to describe: "access granted/denied." [Cf. Dembski's example from the 2005 paper, of Langdon and Neveu's attempted access to the bank vault in the Da Vinci Code novel. In short, WmAD and I have the same basic ideas in mind.]

In the relevantly parallel case of protein codes, they must fold to a given key-lock fit 3-d shape, and must then function properly in the cell's life processes. Hundreds, indeed thousands of times over, with the average length of proteins being 300 20-state elements, i.e. requiring about 900 G/C/A/T elements, or 1,800 bits. With even generous allowances for redundancies, that is well beyond the reach of random chance in any plausible prebiotic soup; not to mention that the codes and algorithms and algorithm-executing machines all have to come together in the right order within about 10^-6 m. That is why OOL research on the evo mat paradigm is in such a mess. At the body-plan evo level we have to address the genetic code and epigenetic info issues as well, to innovate body plans to get to the functions that natural selection forces can cull from. That is why the Cambrian fossil life body-plan level explosion is such a conundrum; and has been since Darwin's day -- 150 years of unsolved "anomaly." Hence Denton's "theory in crisis" thesis.

3] Complexity, programs and their execution

Working programs, as just noted, are of course based on highly contingent codes, algorithms and implementing machines; whether in life-based info systems or in non-life-based info systems. If you want to look at the laws of the cosmos as a program, I ask a question: who or what designed the language, wrote the code, developed the algorithms and the implementing "cosmos bread-making factory" machinery, as Robin Collins put it? Do you know of a case where complex programs have ever written themselves or designed themselves [apart from preprogrammed genetic algorithms that have specified target zones and so put in active information at the outset], "blind watchmaker" style, beyond the 500 - 1,000 bit threshold? [Methinks It Is Like a Weasel etc. (and evidently up to Avida) are bait-and-switches on blind searches, substituted by targeted ones.] If not, why then do so many evo mat advocates foam at the mouth when the obvious point is put: there is a well-known, even routine, source of such complexity: intelligence. So, per scientific induction, we infer a "law" of information: FSCI is a reliable sign of intelligence.
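As a side check of the protein arithmetic in point 2 above (the lengths and alphabet sizes are the ones stated there; the log2 conversions are the only thing added):

    # Bit counts for an average ~300-residue protein and its ~900-base coding DNA.
    import math
    residues = 300
    bases = residues * 3                      # 900 G/C/A/T elements

    protein_bits = residues * math.log2(20)   # ~1,297 bits at the amino-acid level
    dna_bits = bases * math.log2(4)           # 1,800 bits at the DNA level (2 bits per base)

    print(round(protein_bits), round(dna_bits))  # both far past the 500 - 1,000 bit threshold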
From that we look at the program that wrote the cosmos, i.e. its fine-tuned, highly complex set of physical laws. [Onlookers: have a read of the online physics survey book, Motion Mountain. "Google" it, and if necessary follow up the ScribD source if the original still will not download.] Reckoning back on inference to best explanation, I conclude (provisionally but confidently, as is characteristic of scientific investigations) that the evidence of a programmed observed cosmos points to: design of the cosmos. [Provisionally of course implies falsifiability, or at least the ability, in the Lakatos sense, to distinguish progressive and degenerating research programmes. And, if cosmos-generating and regulating physics is an algorithm, then it can in principle be hacked; maybe tapping into that dark energy out there as a power source. If that does not warm the cockles of the heart of any adventuresome physicist, I don't know what will! Imagine the possibility of superluminal travel by being able to create/access parallel universes that bring points far apart in our spacetime to our neighbourhood. Wormholes for real!]

4] The only question is, does the fact that we don't know what caused life mean it happened by magic?

Excuse me!
1 --> The first point of our awareness of the world is that we are conscious, intelligent, designing creatures who act into the world to cause directed contingency.

2 --> This is more certain than anything else! Indeed, it is the premise on which we live together, interact and communicate, etc in our world.

3 --> So, intelligence that acts based on mind into the world is credibly real, and actual, not mythical magic.

4 --> Further, it underscores the proper -- as opposed to conveniently strawmannish -- contrast: natural/ARTificial (or, intelligent), as opposed to the ever so convenient and loaded: natural/supernatural.

5 --> We know that intelligence produces directed contingency, and that resulting FSCI is beyond the random search resources of our observed cosmos. In the case of life systems, VASTLY so; e.g. 300,000 DNA base prs (lower end of estimates for credible 1st life) has a config space of about 9.94 * 10^180,617. (A quick check of this figure follows the list.)

6 --> So, we may confidently reason from the info sys characteristics of life to its credible origin: directed contingency. There are various possible candidates for that, but the most credible would be the same one responsible for a cosmos that is fine-tuned to set up and sustain cell-based life.

7 --> Now, given the raw power required to make a cosmos on the scale of the one we see, that sounds a lot like a Supreme Architect of the cosmos, etc.; that makes a lot of people very uncomfortable, and I detect that in the loaded terms you used above.

8 --> Well, so what? The chain of reasoning is from inductively well-grounded sign to the signified, not the other way around.

9 --> For, science is supposed to be an empirically based, open-minded, unfettered (but ethically and intellectually responsible) and open-ended search for the truth about the cosmos, not a lapdog, handmaid and enforcer to Politically Correct materialism hiding in a lab coat.
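Point 5's config-space figure is easy to reproduce; a minimal Python sketch, assuming only the 300,000-element lower bound quoted above:

import math

# Point 5: config space of 300,000 four-state (G/C/A/T) elements.
bases = 300_000
log10_space = bases * math.log10(4)              # log10 of 4^300,000
mantissa = 10 ** (log10_space % 1)
print(f"4^{bases} ~ {mantissa:.2f} x 10^{int(log10_space)}")
# -> 4^300000 ~ 9.94 x 10^180617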
5] there's no reason to belabor the differences

But that was the precise point: we see grounds for seeing that there is a crucial difference between the snowflake and the info systems of life. Namely, that snowflakes do not have aspects where we see CSI, apart from the possibility of the boys from Langley intervening. And that is the exact point . . . GEM of TKIkairosfocus
December 14, 2008, 11:13 PM PDT
w/ apologies for corrections: [76] You may have your own definition of complexity, and I have mine, but it's important to keep Dembski’s in mind:
(ψR) cannot plausibly be attributed to chance...It’s this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) - but not R - a specification.
-”Specification: The Pattern that Signifies Intelligence”JT
December 13, 2008, 09:36 PM PDT
correction: 76 was in reply to kairosfocus [73].JT
December 13, 2008, 09:23 PM PDT
KF [76]: One other thing: There's no doubt that life is very, very different from nonlife, and actually nobody denies this. Nobody thinks that a snowflake is the same as life. So to me, there's no reason to belabor the differences. The only question is, does the fact that we don't know what caused life mean it happened by magic? Maybe it might be reasonable to say that, since in our experience life only comes from life, that whatever complex forces and laws out there we're currently unaware of that directly account for life, those forces and laws, no matter how diffuse and indirect they may be, must equate to life in some real sense. To me, this view is preferable to creating a new separate ontological category of "agent" that essentially operates by magic.JT
December 13, 2008, 09:19 PM PDT
You may have your own definition of complexity, and I have mine, but it's important to keep Dembski's in mind:
It’s this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) - but not (R) - a specification.
-"Specification: The Pattern that Signifies Intelligence" In this scheme every bit-string of length n has the same complexity, whether the string is all 1's, completely random, or anything in between. It can be confusing because subsequent to this there are repeated references to descriptive complexity, i.e. the complexity of the actual pattern, but as its explained, descriptive complexity actually has to be kept low or the Design Inference won't work. (Note that the terms "CSI" or "complex specified information" do not appear in the above paper, but that's what its about - W.D: "For my most up-to-date treatment of CSI, see “Specification: The Pattern That Signifies Intelligence”) So its event complexity that is relevant - "the difficulty of reproducing the event by chance". In reference to bit strings this is taken to be its length. It seems some straightforward measure of length should be used in a natural context as well (e.g. number of atoms). But some people take into consideration known processes to arrive at complexity for natural events. But in that case there is an unfair prejudice against anything we don't know the cause for, wherein we say any such thing is really really improbable and complex because we don't know what caused it. But anyway my own measure of complexity would be the size of the smallest program-input that could generate a number so that a string of all 1's would be highly probable. This I think is the conventional measure of complexity. You wrote: "In the case of the snowflake, the typical hex symmetry is set by law-like forces. That gives specification but no complexity" I think you're intending to imply that law-like forces cannot result in your concept of complexity, or maybe you exclude by definition anything produced by law-like forces from being complex, I'm not sure. But in the conventional definition of complexity its assumed that everything can be produced by some set of laws (i.e. some program), maybe not some known set of natural laws, or a simple set of laws, but some set of laws nonetheless. (It could be that we don't have the brainpower to identify the natural laws that created us because they're too complex.) You may find my discussion and the onward links here helpful. Some nice snowflake pics, too. Thanks. I'll get back with you on this later.JT
December 13, 2008, 09:07 PM PDT
GSV, message #11 "I am trying to explain the EF to a friend who does not have any of your books Mr Dembski so I was hoping somewhere there was an example of it’s use on the web. Anyone?" http://www.arn.org/docs/dembski/wd_explfilter.htm RayR. Martinez
December 13, 2008, 05:06 PM PDT
JT: Saw your "The snowflake would be both complex and specified . . ." [71] while downloading and searching on a 6to4 adapter Vista headache.

The key to the problem is to understand that the EF is speaking about complex specified information relating to a given aspect of a phenomenon. THAT is what puts the outcome in an isolated island of Functionality in a broad config space, or into a relatively tiny target zone that is a supertask for a random search to try to find.

In the case of the snowflake, the typical hex symmetry is set by law-like forces. That gives specification but no complexity: you have a periodic, repeating structure that has low contingency; so, little capacity to store information. Where there is complexity is in e.g. dendritic flakes. This is driven by the random pattern of microcurrents and condensation of tiny drops of water from the air as the flake falls and tumbles. So, we see a complex but not directed/controlled branching structure superposed on the hex symmetry. [With such direction, it COULD in principle be made to store working information, as it has high contingency, but of course in the generally observed case, it does not store any functional information.]

So, the specificity and the complexity speak to divergent aspects of certain snowflakes -- dendritics form under certain conditions, and not others. That is why the EF will rule for the two aspects:
HEX STRUCTURE: Law, so specificity, but no room for information-storing high contingency; so, not complex.

DENDRITES: Complex, but produced by effectively random circumstances -- undirected high contingency, i.e. complex but not simply specific or functional. Chance.
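The two rulings above amount to passes through the filter's decision nodes; a toy Python sketch of that decision logic (a boolean simplification for illustration, not Dembski's formal apparatus):

def explanatory_filter(high_contingency: bool, complex_enough: bool, specified: bool) -> str:
    # Toy pass through the filter's three decision nodes.
    if not high_contingency:
        return "law"       # regularity: low contingency
    if not complex_enough:
        return "chance"    # contingent, but within reach of chance
    if not specified:
        return "chance"    # complex, but no independent specification
    return "design"        # complex AND specified

print(explanatory_filter(False, False, True))  # hex structure -> law
print(explanatory_filter(True, True, False))   # dendrites     -> chance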
You may find my discussion and the onward links here helpful. Some nice snowflake pics, too. Update just finished, and so am I. (I hope my wireless net access will start back working . . . Vista can be a real pain.) GEM of TKIkairosfocus
December 13, 2008, 10:52 AM PDT