Uncommon Descent: Serving the Intelligent Design Community

“Blown Away”: Dan Peterson reviews Dr. Stephen Meyer’s book Signature in the Cell at The American Spectator


Dr. Stephen C. Meyer’s book Signature in the Cell is reviewed by Dan Peterson in The American Spectator (September 1, 2009). Here is an excerpt:

“Of the approaches taken by ID theorists, Signature in the Cell is most closely aligned with the pioneering work on design detection published over the last decade by mathematician William Dembski, one of Meyer’s colleagues at the Discovery Institute.  Dembski and Meyer both rely, at least in part, on information theory and probabilistic analysis to determine whether a phenomenon is best explained as the  product of unguided “chance and necessity,” or of design by an intelligence…

Signature in the Cell is a defining work in the discussion of life’s origins and the question of whether life is a product of unthinking matter or of an intelligent mind.  For those who disagree with ID, the powerful case Meyer presents cannot be ignored in any honest debate.  For those who may be sympathetic to ID, on the fence, or merely curious, this book is an engaging, eye-opening, and often eye-popping read.”


Comments
I don't recall criticising Meyer's book specifically, so whether or not I actually have read it is irrelevant (I have read his paper published in the BSW, however). I was responding to jerry's post regarding the Cambrian explosion.
Dave Wisker
September 9, 2009 at 3:20 AM PDT
PPS: This discussion -- linked through the Wiki article -- is excellent, providing a great primer on getting to the cosmic chemistry set. This one by Telescope Australia gives a good overview of cosmology and star physics. This and this on solar system formation models give introductory overviews -- but note there are many, many controversies and unresolved points. [For instance, Sir Fred Hoyle c. 1960 raised an interesting point on, in effect, Faraday disk generators (which have very weird subtleties . . . ) and magnetic braking to explain the distributions of mass and angular momentum: the planets, with less than 2% of the mass, have something like 98% of the angular momentum, which is very hard to account for. It is recent observations of apparent solar systems in formation that have given a revitalisation to the nebular hypothesis of Laplace, despite difficulties. And a recent issue is how to get from boulders ~ 1 m across to island-sized "asteroids" ~ 100 km across on the way to coalescing planets.]
kairosfocus
September 9, 2009 at 2:31 AM PDT
PS: On the proportion of available ingredients in the cosmic "chemistry set," cf. here. A more specific discussion of C, the key building-block element [for its ability to form indefinitely long chains that are at the same time reasonably stable but breakable without too much effort], is here.
kairosfocus
September 9, 2009 at 2:03 AM PDT
Onlookers: It is fairly obvious that Tim has hit the nail on the head: ID critics by and large are convinced that they already know it all, that those who disagree with them are "ignorant, stupid, insane or wicked," and don't bother to check out the relevant facts and logic -- not to mention correctives for typical basic errors that we see over and over again -- before dismissing the evidence for design. (And AH's dismissals at 117 by broad-brush appeal to the "consensus" of "the experts" -- AKA a naked appeal to collective authority of the evolutionary materialist neo-magisterium, stating their a priori materialism shaped views and speculations as though they were unquestionably true fact -- are sadly illustrative of the attitude.) Perhaps the slice of the cake that has in it all the ingredients is the attempted dismissal of FSCI by someone who pointed out that a paragraph is still communicative with a few typos: 1 --> When was that EVER in dispute? (For, the point of an ISLAND of function is that there is a beach of initial functionality, and the terrain climbs to peaks of optimal function. An island is not ocnfinfed to one point, and to act as though it were is to create and knock over a distractive strawman.) 2 --> Similarly, the point of the 500 - 1,000 bit information capacity threshold on complexity for the simple heuristic, is that 1,000 bits specifies a config space of 1.07 * 10^301 states, or about 10 times the SQUARE of the number of configs of all the 10^80 atoms of the observed cosmos across its thermodynamically credible lifespan [~ 50 million times as long as the time since the big bang on the usual timelines]. 3 --> So, viewing the cosmos as a search engine that creates galaxies, initial stars that cook up heavy elements, then second etc generation stars with solar systems, terrestrial planets, possible terrestrial moons of gas giants, and Oort clouds etc, in duly habitable zones -- galactic and circumstellar -- we see it creating quite limited habitats for potential cell based life, which then can be viewed as natural labs in which organic molecules can play around to their heart's content and see if they organise themselves spontaneously into life. 4 --> The problem with the picture? First, we just cut the proportion of the universe available to form life way, way down: C alone is less than 1% of the atoms in the cosmos; which is 3/4 H and mostly He for what is left. And, the GHZ's and CHZ's further dramatically cut down the available number of life origin sites. 5 --> But, moreso: even on the generous estimate that assigns all atoms to life making potential at all sites in the cosmos, the whole observed universe working as a search engine would not be able to sample more than 1 in 10^150 of the configs of just 1,000 bits of storage capacity. That's a fraction practically indistinguishable from zero; i.e not a credible search. (And, that is the point that has been made, repeated, and underscored literally dozens of times across weeks, months and now years. Would a sample of just one atom from the 10^80 or so at random from one instant at random in the life of our observed cosmos be likely to give a good view of what our cosmos is capable of? That is the comparable scope of sample to scope of space that we are discussing.) 
6 --> In short, the question is not the number and scale of islands of function, but whether chance + necessity working on the gamut of the observed cosmos can get us to a credible search of the relevant config spaces that makes it reasonable that we would end up on the shores of ANY island of function corresponding to a workable cell based body plan. (And, for the sake of argument, I am fully prepared to grant that once you land on a shore hill climbing algorithms exist in nature that would help you climb to the mountaintops, optimising the relevant body plan. [Those who have an overly rosy view of genetic and evolutionary algorithms need to look at recent reviews on same, starting with David Abel's remarks on Capabilities of Chaos and Complexity in a recent peer-reviewed article. Which BTW reproduces the Durston et al table of FSC metrics for proteins, pp. 254 - 5, which are on a more sophisticated basis than the rule of thumb, but make the same message. Proteins from Vif on are over the 170 AA threshold, and those from EPSP Synthase are over the 350 threshold. (Observe how the latter is an enzyme.)]) 7 --> To get an idea, let's look at 1,000 bits as a working store, the top end of the rule of thumb cutoff I have used heuristically for the credible reach of chance + necessity on the gamut of our cosmos as observed. This is about 130 bytes, and the task is to create a Von Neumann self-replicator in 130 bytes: blueprint, blueprint reader, blueprint and working effector machines. [We beg the question of where algorithms and computer languages as well as designs and specifications for required data structures come from. "lucky noise" is not a credible source, but we must just link to that discussion.] Can't be done -- just not enough working space. No credible self-replicator that does more than mere autocatalysis for self duplication in a space full of precursor molecules can be implemented in that scope. Punto final. 8 --> To see what that means, cf real life forms, of simplest genome length: 100's of thousands to a million bases. (And a lot of the effective working store may be in the machinery of the cell itself -- DNA is coding for parts and regulation.) 9 --> Just 100 k bases -- which in the real world would be less than a fully independent life form (it would hitch-hike on and parasite off existing cells for essential input molecules etc) -- specifies a config space of 9.98 * 10^60,205. 10 --> to see what that means, make up 10^150 islands of function, of 10^150 states each, i.e. 10^300 states taken up. The sea of non function on these terms would be ~ 10^ 59,905 states. A search of scope 10^150 states, would be 1 in 9.98 * 10^60,055. 11 --> This is so close to a zero fraction as makes no difference. Origin of life is maximally implausible on undirected chance + necessity on the gamut of the observed cosmos. 12 --> And, when it comes to innovation of major body plans, we are looking at working on earth, with genome innovations of order 10's - 100's of millions of new base pairs, relative to the original unicellular organisms of scope up to 1 mn bases or so. This is an even more hopelessly overwhelmed "search." 13 --> And yet FSCI on this gamut of complexity is routinely created by intelligent agents day by day around us: computer text, computer programs, speech, video etc etc etc. 14 --> All of it is functionally specific and complex, with 1,000 bits a trivially easy threshold to pass. 
Take almost any sample of such functionally specific, coded information and spew random noise into it. In very short order, gibberish will result, not novel functionality. 15 --> And, injection of more and more randomness will simply move us around in the vast sea of non-function, not move us towards those wonderful islands of function out there. Simply because the config space is overwhelmingly non-functional. ___________ And, the critics know, or should know, that; or could easily access relevant and plainly cogent information on the point and make the required calculations, without even having to buy Meyer's book (which is obviously a must-read!). But, the ideological blinkers of Darwin's remodelled Plato's cave are ever so obviously at work, leading to darkened understanding, even among the otherwise highly educated and critically aware. ______________ In short, the root problem is plainly spiritual, not intellectual. And with that in mind, I think a certain scripture is apt as a call to reflect: ++++++++++ >> 2 Cor 10:4 For the weapons of our warfare are not physical [weapons of flesh and blood], but they are mighty before God for the overthrow and destruction of strongholds, 5 [Inasmuch as we] refute arguments and theories and reasonings and every proud and lofty thing that sets itself up against the [true] knowledge of God; and we lead every thought and purpose away captive into the obedience of Christ (the Messiah, the Anointed One) . . . >> ++++++++++++ So, let us ask:
1] Could it be that many of us across our civilisation have been taken captive to strongholds of proud but warped ideas that blind us to the otherwise obvious truth? 2] If so, how can we escape, save by paying heed to corrective discussions? 3] And, if we refuse to pay attention to the possibility that what we think is our light is in fact darkness [Matt 6:22-23], whose fault is that?
Points to ponder . . . GEM of TKI
kairosfocus
September 9, 2009 at 1:45 AM PDT
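The large numbers in points 2, 5 and 9-10 of the comment above follow from a few lines of arithmetic. A minimal Python sketch, using only the figures the commenter stipulates (the 1,000-bit threshold, the 10^150-state search bound and the 100,000-base genome are assumptions of his argument, not measured quantities):

from math import log10

# Figures stipulated in the comment above (assumptions, not measurements)
threshold_bits = 1000      # upper end of the 500 - 1,000 bit rule of thumb
cosmos_states = 10 ** 150  # states the observed cosmos is said to be able to sample
genome_bases = 100_000     # "simplest genome length" used in point 9

# Point 2: a 1,000-bit register has 2^1000 configurations
log_configs_bits = threshold_bits * log10(2)
print(f"2^{threshold_bits} ~ 10^{log_configs_bits:.2f}")  # ~ 1.07 x 10^301

# Point 5: fraction of that space a 10^150-state search could sample
print(f"searchable fraction ~ 10^{log10(cosmos_states) - log_configs_bits:.0f}")  # ~ 10^-151

# Points 9-10: configurations of a 100,000-base strand of 4-state bases
log_configs_genome = genome_bases * log10(4)
print(f"4^{genome_bases} ~ 10^{log_configs_genome:.0f}")  # i.e. ~ 9.98 x 10^60,205

The sketch only reproduces the arithmetic; it says nothing about how many of those configurations are functional, which is the point Indium and DeLurker press elsewhere in the thread.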
... Meyer’s discussion of the formulation of ID within types of science, abduction, inference to the best explanantion, chance, contingency, and information, as well as obstacles/difficulties in OOL were all convincing to me. I found no errors of substance, a very readable narrative, and a nice balance of argument, evidence, and edge.
Well, I've read Meyer's book and I had no problems finding fatal errors of substance. His claims regarding the impossibility of protein evolution (that emanate largely from Axe's work) are wrong. I have explained the errors of IDists in this regard in this essay. His arguments about the genetic code are infantile (to be generous) and refuted in every possible sense by studies such as those from Yarus' lab (studies Dave Wisker has mentioned on this blog). These two erroneous claims of Meyer are the foundation of his book, and their falsity renders the book little more than a curious piece of autobiographical fiction. The worst thing was the way Meyer tried to liken himself to Watson and Crick. They could not be more polar opposites. Meyer's scholarship relies crucially on the avoidance of uncomfortable facts. W&C were insatiable in their quest for all sorts of data, especially facts that contradicted their first ideas (the triple helix, for example). Indeed, we can use Meyer's story to draw an interesting parallel that also illustrates the disconnect between Meyer and W&C. ID is like the triple helix - all wrong, contradicted by any and all manner of experimental result. In contrast to W&C (who embraced the data that disagreed with their first ideas), Meyer pretends that the contradictory data (which amounts to almost all experimental results that are relevant) simply do not exist. Watson and Crick? Not bloody likely.
Arthur Hunt
September 8, 2009 at 3:58 PM PDT
Ok, I've read the book. Yup, went right on over to the ol' library, checked it out, and read it. As I perused many of the comments on this thread, though, I've noticed that many of the complaints made by our Darwinist guests have actually been met in the book. This makes me wonder why people don't just read the thing. This thread is about the book and a complimentary review in a magazine, isn't it? (#3)Paul Burnett won't be impressed until he reads a review in an "actual science journal" to which I can only respond: So what? Look, if Paul Burnett will not be impressed by ID unless it is commented upon in a science journal, then Paul Burnett is just going to have to wait. I'd suggest to Paul Burnett to, uh, read the book. . . I suspect he'll be impressed. (#6)William J Murray, thanks for commenting on something from within the book. Something that may just have been as eye-popping to the reviewer. Folks, return to #6 and deal with what William wrote in the first paragraph. No, not by bringing your boilerplate; that is just not helpful. Instead, bring criticism, if any, of what was presented in the book. Then, we can argue the merits. Anyway, let's not just pick on Paul Burnett. No. Instead, we have this: (#7) MeganC has not read the book . . . (#11) Learned Hand has not read the book . . . (#13) Blue Lotus has not read the book . . . (#24) Gaz has not read the book . . . (#26) Cabal has not read the book . . . (#32) DeLurker has not read the book . . . (#76) Damitall has not read the book . . . (#91) Dave Wisker has not read the book . . . (#95) Adel DiBagno has not read the book . . . (#105) Indium has not read the book . . .(#108) Frogbox has not read the book . . . Or, if they have, they found its content to be totally convincing. Else, as critics of ID why haven't they posted their problems with the content of the book? I am happy they liked the book (wink, wink). I did, too. Meyer's discussion of the formulation of ID within types of science, abduction, inference to the best explanantion, chance, contingency, and information, as well as obstacles/difficulties in OOL were all convincing to me. I found no errors of substance, a very readable narrative, and a nice balance of argument, evidence, and edge. I recommend the book.Tim
September 8, 2009 at 2:23 PM PDT
Indium, I'm quite sure that if you do some searching using the search tool you can manage to find where KF has explained at length how functional parts represent islands in the configuration space, and how drifting from the vast sea of non-function to one of these islands of function requires much, much more than what is proposed by Darwinists.
PaulN
September 8, 2009 at 7:58 AM PDT
How about we measure the functionally specific bits on this baby?
PaulN
September 8, 2009 at 7:30 AM PDT
kf, that you dismiss my simple example of a "mutated" text shows that you have not understood the basic problem with FCSI: you are only looking at the size of the config space. But you also have to look at the size of the functional part of the config space. So, this is a bit disappointing: FCSI is again just the size of the configuration space? That's another tornado in the junkyard argument, then!
Indium
September 8, 2009 at 7:29 AM PDT
DeLurker, Do you remember a commenter here who went by JayM? Do you remember how he was an ID supporter, who wanted ID to be strengthened, so he offered his advice on how to strengthen it? Yeah, sure you remember, because that person is YOU. Do you now, finally, admit that your JayM character was disingenuous, to use a kind word?
Clive Hayden
September 8, 2009 at 7:18 AM PDT
kairosfocus#108
2]I have already analysed a DNA sequence by the way we can most readily do so. I stipulated a 350 AA typical enzyme protein length, then reckoned with the known vulnerability to perturbation. Functional and specified. I then evaluated complexity by looking at the known process for assembling a protein: Start, 350 elongation codons, stop. [352 AA * 3 bases/AA codon = 1056 4-state elements. Using 2 bits/4-state element, that's 2112 functionally specific bits.
This is a computation of the number of possible DNA strands of that length. Is that all you mean by FSCI? How does that allow one to determine that any previously unresearched strand of DNA of that length is the result of design?
3] Of course this does not name any particular protein in any particular case, but that is a distractive irrelevancy, as we know that there are many proteins of relevant length.
That's not a distraction, nor is it irrelevant. In fact, it is one of the core questions with respect to FSCI. If, as you claim, FSCI is indicative of design, you need to show how to calculate the FSCI for actual biological artifacts and how to distinguish between those which are designed and those which are simply of the same length. Thus far, FSCI isn't looking particularly useful in proving design.
DeLurker
September 8, 2009 at 6:46 AM PDT
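The calculation quoted in the comment above works out as follows; a minimal Python sketch, with the 350-AA enzyme length taken as the stipulated example from the quote rather than any particular protein:

from math import log10

# Stipulated example from the quote: a 350-amino-acid enzyme, coded by a
# start codon, 350 elongation codons and a stop codon
aa_length = 350
codons = aa_length + 2   # start + 350 elongation + stop = 352 codons
bases = codons * 3       # 1,056 four-state DNA bases
bits = bases * 2         # 2 bits per 4-state base = 2,112 "functionally specific bits"

print(bases, bits)                               # 1056 2112
print(f"4^{bases} ~ 10^{bases * log10(4):.2f}")  # ~ 5.96 x 10^635 possible strands

As DeLurker notes, 4^1056 counts every possible strand of that length; by itself it says nothing about how many of those strands are functional.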
PS: Onlookers, peruse the Wiki list of proteins and that of enzymes to your heart's content, and tell me how many are 350+ or 170+ AA, as you wish. Quite a few.
kairosfocus
September 8, 2009 at 6:32 AM PDT
Onlookers: Observe the pattern -- no compunction over the already exposed manipulative and disrespectful rhetoric, just "on to the next objection." That is telling on the underlying problem: selective hyperskepticism driven by materialistic ideological zealotry, under the false colours of science. I simply note, in brief: 1] Moderate perturbation is of course an incremental random bit pattern variation. The objector knows full well that such will soon enough reduce text to gibberish. 2] I have already analysed a DNA sequence by the way we can most readily do so. I stipulated a 350 AA typical enzyme protein length, then reckoned with the known vulnerability to perturbation. Functional and specified. I then evaluated complexity by looking at the known process for assembling a protein: Start, 350 elongation codons, stop. [352 AA * 3 bases/AA codon = 1056 4-state elements. Using 2 bits/4-state element, that's 2112 functionally specific bits. (Pardon, I missed a step earlier; I only needed 170 or so.) Double the threshold, and corresponding to 5.96 * 10^635 configs.] 3] Of course this does not name any particular protein in any particular case, but that is a distractive irrelevancy, as we know that there are many proteins of relevant length. [In fact, any protein longer than about 170 AA will be deemed sufficiently complex to be FSCI by this criterion. Hundreds, or thousands, qualify. And, collectively, the genome for functional life forms, the source of the relevant information, will therefore also more than qualify, as there are dozens to hundreds of complex proteins in any reasonably simple life form.] 4] But that -- predictably -- will not stop the objections, revealing their fundamentally unreasonable, ideologically driven character. (Why is it that I feel so much like I am back in the days when I had to argue with Marxist zealots on campus here in the Caribbean? ANS: because the rhetorical and agitprop tactics and underlying materialist zealotry -- and even the rationale that all is being done in the name of "science" -- are the same. Sad.) GEM of TKI
kairosfocus
September 8, 2009 at 6:26 AM PDT
As a layman following this discussion for some time in lurking mode, I am facing the conclusion that, if there were ever to be a fully worked example of a calculation such as has been begged for, it would simply tell us that a protein (for example) was complex, contained some information in its sequence, and was functional. But I think we can tell this by simpler means. The other argument used frequently in favour of design is the argument from improbability -- citing the odds against blind chance assembling a particular sequence of, say, 200 amino acids or 600 nucleotides from all possible 200-mers or 600-mers. Who says that this ab-initio/ex-nihilo large molecule is necessary for either abiogenesis or the beginnings of evolution? No evolutionary biologist that I have ever spoken to or read, that's for sure -- this seems to be a classical strawman.
FrogBox
September 8, 2009 at 4:04 AM PDT
kairosfocus, Rather than spending so much time extolling the power of FSCI, you could end the discussion immediately with a worked example of how to calculate the FSCI in an actual biological artifact. Note that your example should be reproducible and allow anyone to arrive at the same quantitative value for the same artifact.
DeLurker
September 8, 2009 at 3:17 AM PDT
So, I added a few erros in my parargaph above then. ;-) I thnk despte_ al my arors yu styll geat -wat I meeeen, rite?Indium
September 8, 2009 at 3:01 AM PDT
kf, small perturbations do not destroy the readability of your paragraph. For example, you made a typing error:
i.e. we are looking at a small ilsnd of relevant function in a large configuration space
Obviously, despite your error I was able to understand the word "islands". You could have made lots of such errors and I would still have been able to comprehend most of what you said. So, as you probably know, sometimes it's very hard to know how big such islands of functionality are. It's even harder if you also have to check for different and maybe even unknown functions! Now, maybe you could show us an example by analysing a DNA sequence or something?
Indium
September 8, 2009 at 2:57 AM PDT
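The point being argued here -- a few random substitutions leave a sentence readable, while heavier substitution reduces it to gibberish -- is easy to demonstrate. A toy Python sketch (the test sentence is the Thaxton et al. example quoted later in the thread; the per-character rates are arbitrary, and nothing here measures how large an island of functionality actually is, which is Indium's question):

import random
import string

random.seed(1)
ALPHABET = string.ascii_uppercase + string.punctuation + " "

def perturb(text, rate):
    # Replace each character, with probability `rate`, by a random character
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in text)

message = "THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!"
for rate in (0.02, 0.10, 0.30, 0.60):
    print(f"{rate:4.0%}  {perturb(message, rate)}")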
D: Please, get your facts and reasoning straight before making unwarranted -- or even utterly false and misleading -- assertions in a confidently dismissive manner: 1] 102: Are we now saying that FCSI is CSI with the added investigator’s knowledge of the functionality of the sequence under invetigation? Complex, specified information is a class of information where relatively small targets are specified in large configuration spaces that are information-bearing. FUNCTIONALLY specific, complex information is a subset of that class, wherein we OBSERVE a function that is specific to a relatively small set of configurations in a large space of possible configs. For instance, relatively few configs of ASCII characters would make a comprehensible paragraph in English responsive to your comment. And, should you disturb this paragraph's underlying bit patterns at random, you would soon reduce it to gibberish; i.e. we are looking at a small ilsnd of relevant function in a large configuration space of possible bit patterns of the same length. That is, the above paragraph is an instance of FSCI. 2] I thought that CSI was discernible by Dr Dembski’s Explanatory Filter . . . It is. [Note the extension of earlier thought to incorporate the focus on aspects, as is also discussed in the Weak Argument correctives here.] And, for FSCI, when the specification involved is functional -- as just discussed, the filter is very simple and practically effective to apply. (Onlookers, that is why there is such a press to distract you from noticing its effectiveness and patent common sensical soundness relative to our world of experience.) 3] . . . but that he had abandoned the EF, only to reinstate it in his affections when the unpleasantly gleeful chortles of the materialists became too much to bear. Again, kindly do your homework soundly before commenting in an ill informed manner on the presumption that the assertions of critics of ID are correct or fair. (the above exchanges should suffice to demonstrate that they are too often neither true nor fair, AND that on being corrected, there are no compunctions or intent to amend their ways. Sadly.) Examine the discussion here in the Weak Argument Correctives. You will see that the critics in question gleefully pounced on an ambiguity in the possible meanings of "dispensed with," and improperly inferred that the EF failed in the absolute, when in fact WD's intent was to state that he found it more effective to put the underlying discussion aside [not least because of the unfruitful debates hinging on teh selective hyperskepticism and strawmannising of the same circles of critics] and go straight to the implications of the discovery of CSI. If you doubt me, here are his remarks, excerpted in WAC 30:
In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection. [….] I came up with the EF on observing example after example in which people were trying to sift among necessity, chance, and design to come up with the right explanation. The EF is what philosophers of science call a “rational reconstruction” — it takes pre-theoretic ordinary reasoning and attempts to give it logical precision. But what gets you to the design node in the EF is SC (specified complexity). So working with the EF or SC end up being interchangeable. In THE DESIGN OF LIFE (published 2007), I simply go with SC. In UNDERSTANDING INTELLIGENT DESIGN (published 2008), I go back to the EF. I was thinking of just sticking with SC in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation.
Where there was one significant concern on the earlier formulations of the filter in flowcharts [e.g. c. 1999], the focussing of the filter on specific aspects of phenomena and objects adequately resolves the concern, as already linked. 4] Ar we then still to use the EF to discern design?- or is there now a more specific method, (since I understand that the EF gives rather too many false negatives and false positives for comfort.) Confident assertion of demonstrable falsehoods and misleading half-truths. Unfortunately, this is typical of Darwinist objector talking point rhetorical tactics on this subject: a --> The explanatory filter is DESIGNED to be vulnerable to false negatives, as it is biased to be extremely conservative in ruling "designed." b --> That is, by being quite stringent on when it rules "design" [e.g. in the simple case at least 500 - 1,000 bits of used information capacity involved in an observed case of function vulnerable to modest perturbation], the filter is saying that unless the search resources of the cosmos as a whole wold be credibly fruitlessly exhausted in a baseline random walk search of the relevant config space, we will not rule "designed." c --> That is, the filter is heavily and deliberately biased towards ruling chance as the source of high contingency. (Cf the case of applying the Dembski metric of CSI to a suspicious hand of cards in WAC 27. Any reasonable person would infer to cheating -- for good reason -- long before the matter rose to the level where the EF would rule design definitively.) d -> That is the misleading half truth part: false negatives. e --> Now for the outright deceptive falsehood: the claim that the EF improperly rules "design" in many cases (false positives):
i] As a corollary to the stringency on allowing many false negatives, the filter -- as statistics theory teaches us on such inferences by elimination to relevant confidence levels -- is going to be then that much more credible when it does rule design. ii] So, it is unsurprising that, as a matter of easily confirmed fact, there are literally millions or billions of successful positive cases where the filter correctly rules design. [The Internet being exhibit 1.] iii] Similarly, in EVERY case to date where we know the origin story directly and independently, the EF's ruling "design" on detecting FSCI, CSI etc, is correct. iv] This is confirmed by the suspicious lack of credible counrter-examples coming from the critics: time and again, they have confidently declared that the EF in praxis makes false positives, but on being challenged to give specific instances of known provenance, they cannot, and retreat into a theoretical or philosophical discussion, or a case that on closer inspection is an instance of intelligent design. (E.g. some would want to say that if a GA can be made to create a long enough sentence then that is a case of chance + necessity creating complex sentences out of lucky noise. They distract their own attention from the obvious context: a GA is an intelligently designed foresighted PROGRAM that uses artificial selection towards optimisation of intelligently selected objective functions.) v] For instance, onlookers, see if D can provide us with, say, a case where a string of ASCII text of at least 143 characters [= 1,000 bits] originating in a random walk and without artificial selection of current non-functional phrases towards a distant ideal target, will create a meaningful sentence in correct English within any reasonable scope of resources on the gamut of our observed cosmos. vi] now, of course, this is not a logical prohibition, but a search resource exhaustion barrier: because the space of possible configs is so large, the search reopurces of our obsered cosmos would not be able to search out through rasndom walks any apreciable fraction thereof. (And if one picks a search unintelligently, the mathematical challenges of ever higher order searches for good searcfhes will show that on average and unintelligently selected algorithm will perform no better than random search. It is active informaiton originating in our observation in intelligence, that is resposnible for the relative success of well chosen searches. vii] So, the barrier to false positives is not absolute, but a mater of practical reliability. (Just as, the config space for air molecules in the room in which you sit has in it configs where the O2 molecules rush to one corner. That would leave you gasping for breath fruitlessly, but the relevant states are so isolated and so overwhelmed by the highly diffused ones, that the expectation that we will have O2 molecules in the air when we breathe is utterly reliable. On stat thermodynamics, we can work out he odds of that happening in the lifespan of the cosmos, and we will see that it is utterly improbable ever to see this once in the history of the cosmos. The same sort of reasoning underlies the confidence in the empirical reliability of the EF. [And yes, there are those who will selectively object to the EF who would never dream that they are thereby picking a quarrel with statistical thermodynamics. And, on a personal note, it is that background that made me see that the EF and FSCI-CSI concept have something in them.])
f --> In short, the objection is specious. GEM of TKIkairosfocus
September 8, 2009 at 12:39 AM PDT
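The distinction drawn in points (iv) and (v) of the comment above -- between a blind random walk and a program that selects offspring by closeness to a pre-specified target -- is what Dawkins' "Weasel" illustrates. A minimal sketch of that standard scheme (this is not Dembski and Marks' partitioned search nor the EIL code discussed elsewhere in the thread; the target phrase, population size and mutation rate are arbitrary choices):

import random
import string

random.seed(0)
ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins' example target phrase

def random_phrase(n):
    return "".join(random.choice(ALPHABET) for _ in range(n))

def mutate(parent, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(phrase):
    # "Warmer/colder" feedback: count of letters matching the target
    return sum(a == b for a, b in zip(phrase, TARGET))

# Cumulative selection toward a known target
parent = random_phrase(len(TARGET))
generation = 0
while parent != TARGET:
    generation += 1
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)
print("cumulative selection reached the target in", generation, "generations")

# Blind sampling with no feedback: the per-draw probability of the target
# is 1 / 27^28, so no realistic number of draws will find it
print("single-draw probability:", 1 / len(ALPHABET) ** len(TARGET))

Cumulative selection reaches the target quickly precisely because the target and the scoring are supplied in advance; with no feedback, a per-draw probability of roughly 10^-40 makes blind sampling hopeless at this length. That contrast is exactly what the two sides of the thread are arguing over.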
Adel: Welcome. GEM of TKI
kairosfocus
September 7, 2009 at 11:35 PM PDT
I'm trying to catch up here ... Are we now saying that FCSI is CSI with the added investigator's knowledge of the functionality of the sequence under investigation? I thought that CSI was discernible by Dr Dembski's Explanatory Filter -- but that he had abandoned the EF, only to reinstate it in his affections when the unpleasantly gleeful chortles of the materialists became too much to bear. Are we then still to use the EF to discern design? Or is there now a more specific method (since I understand that the EF gives rather too many false negatives and false positives for comfort)?
damitall
September 7, 2009 at 11:47 AM PDT
The very provenance of the term function indicates that function is identified as a component of making the determination that FSCI is present. As such, FSCI is not going to predict function.
Thanks for the clarification, kf.
Adel DiBagno
September 7, 2009 at 5:47 AM PDT
Adel: FSCI is explicitly used in the context of evaluating the source of certain features of the empirical world. (And the fact that all known cases of origin of FSCI trace to intelligence is highly relevant to the status of several key areas of origins science, with potentially revolutionary impact, once we look to the more technical formulations, FSC and CSI. [Strictly FSCI is that subset of CSI where the specification is functional. As a simple but useful heuristic, we can identify it based on observed function, vulnerability of such to modest perturbation, and using at least 500 - 1,000 bits of information storage explicitly or implicitly.]) The very provenance of the term function indicates that function is identified as a component of making the determination that FSCI is present. As such, FSCI is not going to predict function. (But if we know the code implicated well enough, we may infer function from the decoding of the information itself. I should note I recently had reason to look at files for eSWORD Bible Software, and was astonished to see that for over half the files, there was an apparently meaningless repetitive cluster of alphanumeric characters. Only very late in the file did the text turn up. Reminds me a lot of complaints on apparent "junk" in DNA. eSWORD uses an ACCESS data base file up to version 8.x.) GEM of TKIkairosfocus
September 7, 2009 at 5:20 AM PDT
An Organisational Behaviour theory footnote: Highly Machiavellian, manipulative people are restrained not by compunctions or words of correction, but by prudence: where they perceive that they will likely get caught and it will hurt them, they will refrain from unacceptable conduct. But if the odds are they will get away with and benefit from it, they will proceed full steam ahead. So, allowing such amoral men to act without painful consequence to them is enabling behaviour. And, as I have highlighted this morning, evolutionary materialism has been known, since 360 BC, to be amoral. Sadly, the manipulative, destructive darwinist rhetorical tactics above fit the pattern as a hand fits a glove. (When I used to see this in the power centres of universities here in the Caribbean, I used to discuss it in terms of "Star Trek World, the reality." Alcibiades has all too many descendants among us, I am afraid.) A thought for the day. G'day. GEM of TKI
kairosfocus
September 7, 2009 at 5:02 AM PDT
I had written:
When ID theory can identify the F in a test sequence, the world of science will applaud, cheer, bow and admit ID into its membership.
Perhaps I was asking too much, but I was hoping for a response that would make a case for the predictive utility of the FSCI concept. Genome sequencing has so far uncovered thousands of putative genes whose functions are yet unknown. It would be a signal contribution to science if FSCI theorists could apply their methodology to deciphering those functions. As things stand, investigators have to go to a lot of trouble in their laboratories and in other venues to link a gene to a function.
Adel DiBagno
September 7, 2009 at 4:56 AM PDT
III: Following up select points: 1] Re BL, 83: The trouble is KF that you have concentrated on the man in the street and neglected to make your case at the higher technical level. In fact the technical case has been successfully made in the peer reviewed literature, as was noted, ourlined and linked at 1 above. But, truth being inconvenient, the rhetorical agenda is plainly to steamroller over the mere inconvenient facts to the contrary. In any case, the issue is to address the matter on the merits, and BL's implication/insinuation that I am making a rhetorical case is patently false and even maliciously distorting, as can be easily seen from the always linked. Since when was discussion of relevant key aspects of information theory, thermodynamics and related topics at mathematical level popular discourse, onlookers? [What I have done is to use my background as a sci-tech educator to discuss the matters at a 101, initial, educational level. And that is plainly what so irks the Darwinist advocates: I (and others too, but I am in the hot seat just now . . . ) am opening the gateway for the man in the street to read the technical works with sufficient understanding to evaluate for him or her self instead of taking the edicts of the neo-magisterium at face value. Poof: the emperor is parading around, stark naked! Yikes!) Also, observe not a hint of compunction over the already exposed tactics, but a continuation of same, full steam ahead. Not to mention, very lite that actually addresses the matters on the merits, even in the face of links to the demanded technical peer-reviewed discussions. (Recall, too, how BL tried to imply that the peer reviewed papers were not what they are.) 2] 83, So, I ask you KF, what journal access has been restricted and for whom? Read and weep, here and here, onlookers; to see what is going on, when all the blaming the victim and poisoning the well rhetoric has settled down. (Again, inconvenient points already in evidence and steamrollered over. Worse, on matters of patent injustice.) 3] 83, Nowhere is my question [about Weasel] answered. Blatant falsehood, resting on a twisting of what was in the IEEE paper, p. 1055 – a declaratively didactic example of what partitioning means was wrenched to form a strawman algorithm thast was then soaked in ad hominems and ignited. EIL provides actual Weasel type algors – with a zip on source code! -- in their Weasel GUI page, and they cover the bases.. The issue is addressed, step by step, in the context of what M & D actually said here. 4] 83, I’ll take that as a “No, I can’t put a number on it [a measure of the FSCI in DNA] so here are some distractions from that fact”. Of course, in 84 above, I already provided a sample calculation that would enable anyone feeding in the numbers for actual organisms to do so to the heart's content. The number of I chose is at the low end of the ballpark for observed organisms:
For a DNA stand of 100+ k, we see that the strand is functional from its locus in an observed life form. We further know that it stores information, and that the elements are four state, the FSCI metric for that would be at the lower end: 200 k functionally specific bits, as 1 4-bit state is 2 bits. Such specifies a config space of ~ 1.148*10^602 cells.
Such calculations are in my always linked, and more sophisticated calculations are readily available in the already linked materials from Durston et al. In EVERY case of directly known origin of FSCi where the metric passes thecomplexity threshold, the source of the FSCI is intelligent. And, we have excellent search space reasons for seeuing why that is so. Therefore, on a massive inductive evidence base, we may infer that iFSCI is a reliable sign of intelligence. DNA exhibits such FSCI. Similarly, we know one and only one class of source for algorithms, alphabetic codes, and programs with associated structured data structures; with similarly excellent empirical base for seeing why undirected chance and necessity will haver no credible prospect of designing such languages and programs on the gamut of our comsos. Intelligence. But, this time around, the standard scientific pracice of inductive generalisation on empirical data is inconvenient for the neo-magisterium, so it is steamrollered over. Just as Lewontin said, and just as his colleagues in the US National Academy of Sciences have ruled. Morris Cargill used to call such tactics: logic with a swivel. 5] BL, 87: Kariosfocus, can you determine the FSCI in a string of coding DNA? Note, onlookers: this is AFTER the example has been given in 84. In short, we again see the pattern of pretence that inconvenient evidence is “not there.” 6] DL, 88: Without taking into account the number of strands of that length that have the same function, related function, or other function entirely, it provides no information about rarity. This is of course precisely what Durston et al did in 2007, and then published a table of 35 values of FSC in FITS, Functional Bits. Again, conveniently ignored. 7] BL, 89: I think of more interest would be a specific coding sequence and a value for FSCI specific to that sequence. This is of course a further distraction form the provision of a method applicable with no more than a simple scientific calculator to any case. Take any protein of suitable length, say 350 AA's, not too atypical for an enzyme. (And I am deliberately giving a general example, to underscore the evident willful obtuseness at work on BL's part.) We know that the associated DNA had a 3-letter start codon, 350 succeeding elongation 3-letter codons, and a stop codon. That makes for 3 * (350 + 2) = 1056 4-state DNA bases. Such a protein is functional, and will as a rule be vulnerable to modest perturbation by incorrect amino acid substitution – e.g. think about proline the “pinned” acid, which would tend to lock up folding, 1 of 20 odds on a random substitution. [More sophisticated analyses are possible, and have been done in the peer reviewed literature, on foldable and functional sequences.] A 350 base Protein would thus be functional, specific and complex beyond the threshold. Its underlying DNA code comes in at 1056 bits [on just the null state free sequencing comparison base]. So, we can weigh it in at 1056 functionally specific bits. Compare any computer program string in any reasonable language of that bit-length that functions. It too will be functionally specific and will be vulnerable to modest perturbation. Can BL identify a credible case where such a code string originated by undirected chance plus necessity only? Of course not. But, there are millions of cases where such strings come form known intelligent agents. Indeed, that is the only empirically observed source for such. 
(And before you think to trot out genetic algorithms as claimed counter examples, I suggest you have a look at Abel's remarks on these in section 12, p. 268. Unless you can cogently answer his case, you are jut making a distraction.) The point should be clear. 8] 89, Kariosfocus did not address any of my questions regarding where he is getting his knowledge of the fitness landscape for the first replicatior or how complex that first replicator was. All this reveals is that BL wishes to substitute a hypothetical “replicator” for observed organisms [which start out at genome length in the 100's of thousands; and in a context where it is known that protein function etc is vulnerable to perturbation of sequences – indeed in some cases, folding is non-unique and introduces further complexity, as the prions and associated scrapies and mad cow disease etc show), and that he has not bothered to read (much less, interact with) the discussion on genome complexity and functionality in my always linked, here. (Which, apart from always being linked, was explicitly linked above.) In short, BL has here made up a dumb strawman for him to pummel for its silence. But the poor dumb strawman ain't me! _______________ GEM of TKIkairosfocus
September 7, 2009 at 2:04 AM PDT
Onlookers: I: First, a pause for a moment of inadvertent, highly instructive reductio ad absurdum as the selective hyperskepticism game used by so many Darwinist advocates plays out to its sad end:
DL, 94: “FSCI gets less and less well-defined the more one looks at it. ” Adel, 95: “Where, oh where, is the F [i.e. Functionality] in the FSCI? ”
H'mm, let's see if we can tell “which one of these is not like the others, which one of these is not the same”:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . .  2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides). 3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! [Of course, this is not quite long enough to be over the complexity threshold of 500 - 1,000 used bits of information storing capacity (~ 143 ASCII characters), but the point is made. Oops, with this addition, it is. How many passages of contextually relevant text in English like this have ever been observed to be created by undirected chance + mechanical forces of necessity? How many, by intelligent design? KF.] Example: DNA, protein. [From Thaxton et al, The Mystery of Life's Origin, ch 8, 1984,]
In other words, -- and as was cited ever so many times from Orgel in his 1973 Origin of Life, p. 189: >> In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. >> a --> Whether we address linguistic function, algorithmic function or [also algorithmic – DNA] bio-function, function is a directly recognisable, observable brute fact: DOES IT WORK IN A “MACROSCOPICALLY” RECOGNISABLE WAY? (But then, if your worldview leads you to believe there are no effective minds to observe and report accurately to reality, then maybe you will be tempted to selectively -- and inconsistently -- doubt or dismiss inconvenient facts . . . ) b --> Second, we are dealing with specific function: function that comes in islands of closely related configurations, subject to disruption by modest perturbation. c --> A thought experiment will help clarify: _____________ Take the Class 3 sample above and apply a random mutation operator to it, say at 5% per letter random change from the full ASCII set, generation after generation. How long will it be before the message dissolves into non-communicative gibberish not distinguishable from Class 2; in fact, exemplifying it? And, if we were to put some birdseed on a key board and put it out in a hen-yard, so that pecking birds would press keys unintelligently and at random [from the experiments, monkeys are too destructive], would we be able to meaningfully distinguish the two random sequences, other than by in effect letter by letter comparison? How long would we have to wait on average for hens pecking away at birdseed dusted keyboards for a meaningful text string of length 143 characters to appear? If we then took the gibberish from the original Class 3 text and we continued to apply the random change operator to it, how long would it take for any meaningful paragraph to re-appear? If we were to feed it into a Weasel Program [Atom's adjustable Weasel can do this], with the original as target, how long would it take to re-appear? Why? What would this say about the significance of active information injected though targetting and use of warmer-colder metrics to artificially select for formal resemblance to a meaningful message? [Hint, cf Abel's 2009 (peer -reviewed: Int. J. Mol. Sci. 2009, 10, 247-291; doi:10.3390/ijms10010247, pp. 247 - 291) paper on capabilities of chaos and complexity, as previously linked.) __________ d --> At a more formal level, and as is discussed in the previously linked App 4 in my briefing, Abel & Trevors have pointed out (this, in the peer reviewed literature too . . .) that strings – the simplest relevant information case – can be viewed in this context as being in a three-dimensional space: a spectrum of complexity from ordered to random, an inversely proportionate metric dimension of algorithmic compressibility, and a third, independent dimension: algorithmic functionality. (This easily enough extends to other types of recognisable function.) e --> As they illustrate, such functionality peaks sharply, and is near to but not at the Random Sequence end of the complexity scale. 
(This is of course the 1-dimensional string version of an island of function. The fact that biological life forms come in distinct kinds separated by significant gaps in the fossil record [most notably the Cambrian], in observed life today and in DNA sequences, should tell us something on how relevant such an island of function view is to bio-forms. Darwin's missing links are still missing, 150 years later.) f --> Of course, as already linked in 84 above, Durston et al have long since (2007 is 2 years ago) – again, in the peer reviewed literature -- turned this into a quantitative metric based on empirical data on protein sequences in light of a functional version of the H uncertainty metric, comparing functional with ground and null states. (But since that mere fact is inconvenient to their case, and since Darwinists sadly seem to want to be unaccountable before the truth, that has been ignored.) g --> So, plainly DL and Adel et al are sadly mistaken. II: Main business (duly noting that the above is a slice of the cake that reveals all the ingredients of what is going on): Over the past few days, we have passed un momento de verdad here at UD. For, we have seen laid out above (and in the parallel Contest 10 thread), step by step, fairly detailed exposes of how evolutionary materialist darwinists and their fellow travellers far too often habitually resort to the destructive rhetoric of distraction, distortion demonisation and dismissal. And, the said darwinists acted as though nothing had happened; trying to proceed with “business as usual.” So, if that is how such men – almost invariably, such are men – behave when they hold the minor privilege of posting in a blog, how would they act when they control institutions of science, education, media, jurisprudence and public policy-making? (Actually, we do not need to ask: that is what Expelled documented over a year ago, all too accurately. [And, on the business as usual track, it is therefore no surprise that instead of pausing to reflect and correct misbehaviour, the activists have sought to shoot the messenger and blame the victims; indeed, we saw examples over the past few days.]) Similarly, what such portends for our civilisation if unchecked is not a matter of speculation but of well documented but easily forgotten or dismissed history. Starting with Plato's commentary on Alcibiades and others, in his The Laws, Book X. Speaking of the avant garde materialist teachers and their disciples c 400 BC -- yes, evolutionary materialism was making waves 2,400 years ago -- he wrote how the materialists of his day held that: ____________ >> . . . The elements [then viewed as: fire, earth, air, water] are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only. Art sprang up afterwards and out of these, mortal and of mortal birth, and produced in play certain images and very partial imitations of the truth, having an affinity to one another, such as music and painting create and their companion arts. 
And there are other arts which have a serious purpose, and these co-operate with nature, such, for example, as medicine, and husbandry, and gymnastic. And they say that politics cooperate with nature, but in a less degree, and have more of art; also that legislation is entirely a work of art, and is based on assumptions which are not true . . . . [For,] the Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [Relativism, too, is not new.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might, and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions, these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others, and not in legal subjection to them. >> _______________ In short, history teaches us that typically civility, restraint and justice are given short shrift by those influenced by such avant garde materialist speculations, due to its amorality and relativism, which enable the rise of the ruthless and destructive idea that “might makes right.” (That is, the incivility we see above is not surprising, on a rather long, sad history; with Alcibiades notoriously being exhibit no 1.) So, we at UD have some decisions to make on how we are going to address the sort of incivility we are seeing. [ . . . ]kairosfocus
September 7, 2009 at 2:03 AM PDT
Where, oh where, is the F in the FSCI? When ID theory can identify the F in a test sequence, the world of science will applaud, cheer, bow and admit ID into its membership.
Adel DiBagno
September 6, 2009 at 1:54 PM PDT
Jerry#90
“That is not correct. A string of 1000 As has less Shannon information and Kolmogorov complexity than a shorter string of random As, Cs, Gs, and Ts. What definition of “information” are you using?
Since you understand the metrics we are talking about why bring up the question. You know as well as anyone that no one is talking about a string of A’s. You answered your own question.
You said, and I quote:
The longer the string in the coding region the more information it contains.
That is incorrect. You have failed to address my question of how a simple calculation of the number of possible unique strands of a certain length has any pertinence to modern evolutionary theory. You have also failed to address my question of what definition of information you are using. You have further failed to explain how to identify which strands have FSCI and which do not. FSCI gets less and less well-defined the more one looks at it.
DeLurker
September 6, 2009 at 11:03 AM PDT
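The quoted claim in the comment above -- that a string of 1,000 A's carries less Shannon information, and has lower Kolmogorov complexity, than a shorter random string over A, C, G, T -- can be checked directly. A minimal Python sketch (empirical Shannon information computed from symbol frequencies, with zlib output length as a crude stand-in for Kolmogorov complexity; the string lengths are arbitrary):

import math
import random
import zlib
from collections import Counter

random.seed(0)

def shannon_bits(s):
    # Total empirical Shannon information of a string, in bits
    counts = Counter(s)
    n = len(s)
    return sum(c * math.log2(n / c) for c in counts.values())

uniform = "A" * 1000
random_dna = "".join(random.choice("ACGT") for _ in range(500))

for name, s in [("1000 x A", uniform), ("500 random ACGT", random_dna)]:
    compressed = len(zlib.compress(s.encode()))
    print(f"{name:>16}: Shannon {shannon_bits(s):7.1f} bits, "
          f"zlib-compressed {compressed} bytes")

The repetitive string compresses to a handful of bytes and carries zero empirical Shannon information, while the shorter random string does not; that is what these two measures quantify, and it is the sense in which a longer coding region does not automatically contain more information.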
jerry, since you are still declaring "fact" (like disparity vs diversity), and are apparently unaware of the extensive research and reevaluation of older data, while at the same time admonishing others to read up on the Cambrian explosion, the embarrassment most definitely should be yours.
Dave Wisker
September 6, 2009 at 10:54 AM PDT
"jerry should catch up his reading of the literature on the Cambrian explosion before making such a declaration." It is still a show stopper. I would be embarrassed by the number of qualifications that exist in the abstracts you listed especially the last one. I would scrutinize your abstracts as to just what is being said.jerry
September 6, 2009 at 9:17 AM PDT
