
An Eye Into The Materialist Assault On Life’s Origins


Synopsis Of The Second Chapter Of Signature In The Cell by Stephen Meyer

ISBN: 9780061894206; ISBN10: 0061894206; HarperOne

When the 19th-century chemist Friedrich Wöhler synthesized urea in the lab using simple chemistry, he set rolling the ball that would ultimately knock down the then-pervasive ‘vitalistic’ view of biology.  Life’s chemistry, rather than being bound by immaterial ‘vital forces’, could indeed be made artificially.  While Charles Darwin offered little insight on how life originated, several key scientists would later jump on Wöhler’s ‘Eureka’-style discovery with public proclamations of their own ‘origin of life’ theories.  The ensuing materialist view was espoused by the likes of Ernst Haeckel and Rudolf Virchow, who built their own theoretical suppositions on Wöhler’s triumph.  Meyer summed up the logic of the day:

“If organic matter could be formed in the laboratory by combining two inorganic chemical compounds then perhaps organic matter could have formed the same way in nature in the distant past” (p.40)

Darwin’s theory generated the much-needed fodder to ‘extend evolution backward’ to the origin of life.  It was believed that “chemicals could ‘morph’ into cells, just as one species could ‘morph’ into another” (p.43).   Appealing to the apparent simplicity of the cell, late-19th-century biologists assured the scientific establishment that they had a firm grasp of the ‘facts’ - cells were, in their eyes, nothing more than balls of protoplasmic soup.   Haeckel and the British scientist Thomas Huxley were the ones who set the protoplasmic theory in full swing.  While the details expounded by each man differed somewhat, the underlying tone was the same - the essence of life was simple, and thereby easily attainable through a basic set of chemical reactions.

Things changed in the 1890s.  With the discovery of cellular enzymes, the complexity of the cell’s inner workings became all too apparent, and a new theory had to be devised, one that no longer relied on an overly simplistic protoplasm-style foundation, albeit one still bounded by materialism.  Several decades later, finding himself in the throes of a Marxist socio-political upheaval in his own country, Russian biologist Aleksandr Oparin became the man for the task.

Oparin developed a neat scheme of interrelated processes involving the extrusion of heavy metals from the earth’s core and the accumulation of reactive atmospheric gases, all of which, he claimed, could eventually lead to the making of life’s building blocks - the amino acids.  He extended his scenario further, appealing to Darwinian natural selection as a way through which functional proteins could progressively come into existence.  But the ‘tour de force’ of Oparin’s outline came in the shape of coacervates - small, fat-containing spheroids which, Oparin proposed, might model the formation of the first ‘protocell’.

In the 1940s and 1950s, Oparin’s neat scheme provided the impetus for a host of prebiotic-synthesis experiments, the most famous of which was that of Harold Urey and Stanley Miller, who used a spark-discharge apparatus to make three amino acids: glycine, alpha-alanine and beta-alanine.  With little more than a few gases (ammonia, methane and hydrogen), water, a closed container and an electrical spark, Urey and Miller had seemingly provided the missing link for an evolutionary chain of events that now extended as far back as the dawn of life.  And yet, as Meyer concludes, the information revolution that followed the elucidation of the structure of DNA would eventually shake the underlying materialistic bedrock.

Meyer’s historical overview of the key events that shaped origin-of-life biology is extremely readable and well illustrated.  Both the style and the content of his discourse keep the reader focused on the ID thread of reasoning that he gradually develops throughout his book.

Comments
KF @ 308 (In order not to flood this page with cut'n'pastes from KF's post I'll just paste excerpts - see 308 for the proper context. I'm not trying to misquote by removing the context.)
What then of the metrics in # 177 above [and the always linked] ... The case of a PC screen full of information? And of course, the case that triggers all of this dismissal: observed DNA etc in the cell.
Please read my posts 302-303 regarding how GA's are implemented and what FSCI they produce, to see if this changes the way you calculate FSCI for a GA. If the GA is embodied in a non-computational system, how would the measurements apply? In the case of a computational system, should we apply the measure to the written code or to the binary, and if so, what contribution to FSCI (or lack of it) does the compiler make?
(As to the notions that I do not understand hill-climbing algorithms ...
What notions? Do you mean my reference to you not understanding the difference between a simulation and a simulator? If so, then hill climbing is irrelevant. This is the same misunderstanding that Nakashima is trying to unpick with his comments on Second Life. When you create a model you are creating an algorithm and a set of equations which, when iteratively computed, will yield a series of variables that should approximate observable variables in the system, or the part of the system, you are modelling. You keep bringing up the computer hardware and operating system and talking about perturbations to them, but this is completely irrelevant to the model. You ought to be able to take a model and iterate it using only pen and paper; the computer is just a tool to speed things up. GA's, when used to model biological processes, are part of a model, and you can do the math by hand or with a computer. Messing around with the computer or the OS breaks the tool you are using; it has nothing to do with the model.
having given specific cases of FSCI, we have shown that in the cases where we do independently know the cause of FSCI, it is intelligence.
All I have seen is you pointing at designed things and saying 'look, FSCI', and then pointing at DNA and saying 'look, FSCI'. The actual process by which you calculate it seems to be contingent on lots of assumptions, as Mark Frank has highlighted. You dismiss GA's as examples of things that generate FSCI because humans design GA's and somehow 'put the FSCI in at the start', yet you also reject the idea that an intelligent designer could create life by embodying a GA in a universe. From what I can tell, the only way to reliably infer FSCI is if you already know it had an intelligent source.
we do know that complex functional, information rich algorithm-implementing organisation must have arisen for life as we observe it to exist.
We don't yet know what was required for life to exist; otherwise we would have a rigorous step-by-step explanation of the origin of life. Simply saying 'it's complicated' is not sufficient. How can we measure the bit count of a process we haven't uncovered yet?
Strawman, set up to be knocked over.
Perhaps you missed where I was responding to this from jerry:
Now maybe in some other language or by using some encryption technique we can relate the second string of information to something else. If that is true, then that string is also FSCI.
jerry seems to be claiming that the FSCI in something can change if we discover new things about its origin. Are you agreeing or were you actually accusing jerry of setting up the straw-man?
Once an entity has information carrying capacity and exhibits observable functionality ... shows islands of function in a sea of possible configurations ... the undirected search capacity of our cosmos.
You keep bringing up this islands of functionality idea, again here:
...islands of function and thresholds beyond which the search resources of the observed cosmos... ... ...rendering any at-random walk based search utterly unlikely to hit on any reasonable islands of function.
What is this idea based on, apart from wishful thinking, and why does trying to make out that the universe is a search engine help, apart from setting up straw-men? You have assumed that all configuration spaces consist of isolated islands; why aren't some configuration spaces more like continents? If we are going to talk about the universe as a search and configuration spaces as landscapes, then the first thing to realise is that the universe is not in one location in the landscape; it is all over the place, performing billions of 'searches', or more properly configuration shifts, in parallel every second, in a search space consisting of islands, archipelagos and continents. I would say though, to mirror what others have said, that the whole notion of seas and islands is poor when factoring in a pre-biotic universe. At best you should consider the config space to include the ocean floor, and simply place sea level as a slightly arbitrary demarcation point between complex chemistry and self-replicating systems. The 'search' that occurs in a universe is simply a mass collection of shifting configurations, some of which may be very close to these 'shores of function'.
GA’s in our observation are the product of designers.
Yes, yes, yes. How many times do I have to say this? I am not claiming that GA's pop into existence randomly. The question is whether a GA can produce this FSCI, and whether the universe can have a 'GA' built into it by an intelligent designer. Talking about how sensitive an implementation of a GA is to perturbation misses the point entirely, which is why I question your understanding: the universe will break if you change the laws of physics; a simulation will break if you change the way the underlying computer does maths.
The FSCI of a pen:
I'm glad you mention a quill; I was going to ask about charcoal sticks. Nature, operating freely, can generate writing instruments, but some specific processes are required to refine them (and they are only writing instruments if we use them as such). With non-replicating entities like pens you need people, but what if the entity can make variable copies of itself? In the pen example you seem, as in all other FSCI examples, to rely on knowledge of the pen's history and its intended purpose. And yet you also claim:
FSCI is both observable and quantifiable, without reference to causal story.
So, which is it? And do you agree with jerry's or Joseph's claims about FSCI, or are they wrong?
BillB, July 27, 2009 at 04:30 AM PDT
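To make the object of this dispute concrete, here is a minimal sketch in Python of the kind of hill-climbing GA the thread keeps arguing over, in the style of Dawkins' "weasel" program. The target string, alphabet, population size and mutation rate are illustrative assumptions only, not anything a commenter specified; the point is that the whole model could, as BillB says, be iterated by hand, with the computer merely speeding up the bookkeeping.

import random

# A minimal, hypothetical hill-climbing GA (Dawkins' "weasel" style).
# All parameters here are illustrative assumptions.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Number of characters matching the target: the "hill" being climbed.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Copy the string, randomly perturbing each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def run():
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        children = [mutate(parent) for _ in range(100)]
        # Keep the fittest of parent and offspring (cumulative selection).
        parent = max(children + [parent], key=fitness)
        generation += 1
    return generation

print("Target reached in", run(), "generations")

Whether reaching the target this way "generates FSCI", or merely relays the FSCI already present in the target and fitness function, is exactly what the two sides here dispute.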
KF-san, Thank you for explaining the 1925 example once more. From your discussion, I'm not sure how modest perturbation disrupting function relates to the previous use of compressibility. Is modest perturbation a test for C? But again, that is a side detail. From your explanation, I can't think of any photo of anything that won't be full of FSCI. I suppose that finding the world always exhibits design is a valid theological position, but I don't see how FSCI can be a useful metric if it is always returning as many bits as there are in the data array.
Nakashima, July 27, 2009 at 04:24 AM PDT
Pardon: CSB metric:

FSCI is also an observable, measurable quantity; contrary to what is imagined, implied or asserted by many objectors. This may be most easily seen by using a quantity we are familiar with: functionally specific bits [FS bits], such as those that define the information on the screen you are most likely using to read this note. . . . . We can construct a rule-of-thumb functionally specific bit metric for FSCI:

a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar, e.g. a tossed die.

b] Let specificity [S] be identified as 1/0 through functionality [FS] or by compressibility of description of the information [KS] or similar means.

c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 - 1,000 bits serving as the threshold for "probably" to "morally certainly" sufficiently complex to meet the FSCI/CSI threshold.

d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t], and take the element product C*S*B [as we would take the ratio D/t to get speed].

e] Now we identify: C*S*B = X, the required FSCI/CSI-metric in [functionally] specified bits. . . . .

For instance, for the 800 * 600 pixel PC screen, C = 1, S = 1, B = 11.52 * 10^6, so C*S*B = 11.52 * 10^6 FS bits. This is well beyond the threshold. [Notice that if the bits were not contingent or were not specific, then X = 0 automatically. Similarly, if B < 500, the metric would indicate the bits as functionally or compressibly etc. specified, but without enough bits to be comfortably beyond the UPB threshold. Of course, the DNA strands of observed life forms start at about 200,000 FS bits, and that for forms that depend on others for crucial nutrients. 600,000 - 10^6 FS bits is a reported reasonable estimate for a minimally complex independent life form.]

In the case of ASCII text, 143 7-bit characters is equivalent to 1,000 bits, or 10^301 possible configs. Random changes will rapidly convert contextually responsive English text of 18 - 20 words into meaningless hash. And there is an Internet full of instances of such FSCI by known intelligence, but none of such by chance + mechanical necessity, as the Welcome to Wales example discusses. GEM of TKI
kairosfocus, July 27, 2009 at 02:54 AM PDT
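For concreteness, the rule-of-thumb metric described in the comment above reduces to a few lines of code. This is only an editorial sketch of the stated procedure; the function name and the choice of 1,000 bits for the threshold constant are ours.

def fs_bits(contingent, specific, bits):
    # X = C*S*B: C and S are 1/0 dummy variables, B is bits of storage used.
    c = 1 if contingent else 0
    s = 1 if specific else 0
    return c * s * bits

THRESHOLD_BITS = 1000  # upper end of the 500 - 1,000 bit range quoted above

# The worked example from the comment: an 800 x 600 pixel, 24-bit PC screen.
screen_bits = 800 * 600 * 24          # 11,520,000 = 11.52 * 10^6 bits
x = fs_bits(True, True, screen_bits)
print(x, x > THRESHOLD_BITS)          # 11520000 True - beyond the threshold

# The ASCII-text threshold quoted above: 143 seven-bit characters.
print(143 * 7)                        # 1001 bits, i.e. roughly 1,000 bits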
CH: Please read 177 above. GEM of TKI
kairosfocus, July 27, 2009 at 02:45 AM PDT
Onlookers: MF of course claims never to read what I write. It shows; e.g. in:
when challenged to calculate the FSCI you can only do it by making all sorts of arbitrary assumptions about the specification and the cause.
1 --> The simple metric of FSCI is based on thresholds that give us the ability to make a conclusion based on a topology of islands of function in a sea of non-functional configs, and in the further context of sufficient information storage in the function that unaided random-walk-based search strategies and the like will be maximally unlikely to succeed in reaching shores of function.
2 --> That is, FSCI has a particular purpose: allowing a decision, beyond the practical reach of false positives, that we are looking at a designed entity.
3 --> And, as tested on literally millions of known-origin cases, it does render a reliable verdict; indeed, to date no objector has been able to produce a good counter-example where chance + necessity have spontaneously produced FSCI without active information from an intelligent source.
4 --> So, we have very good reason to conclude that FSCI is fit for its purpose: it is a reliable sign of intelligence.
5 --> In that context the simple FSB metric at 177 above allows us to set reasonable quantitative thresholds, in the context of using dummy variables to categorise the functional specificity and complexity beyond a threshold before we use the information stored and used to implement function.
6 --> Insofar as that is an example of fitness for purpose, that is not arbitrary, regardless of the conventional nature of such thresholds: mean sea level is a convention, too!
7 --> MF also indulges in a turnabout:
a] We have (per Orgel et al) observed that functionally specific complexity is an interesting feature of certain objects in the world.
b] We further observe that -- on an intuitive basis -- it is a pattern that tends to show up in known designed entities.
c] We set up reasonable thresholds and see that they reliably indicate cases of design where we INDEPENDENTLY KNOW the causal story.
d] We can find no counter-instances at these thresholds, and objectors, after years of effort, are equally unable to find counter-examples. (Thus the types of strawmannish objections and red herrings we see above.)
e] We have good reason to see that FSCI for instance is a reliable sign of design, and so may infer on best current explanation to design as cause when we see FSCI.
f] To infer the presence of FSCI, we have used the simple approach: (i) specific function vulnerable to modest perturbation (i.e. topology of islands of function), (ii) sufficient information storage capacity used in that function to soak up the search capacity of the observed cosmos (500 - 1,000 bits or more), (iii) a measure of actual capacity used, so that we see how far beyond the threshold we are.
g] The FSB metric then reports the result, in functionally specific bits that are known to be beyond the threshold.
h] This is not a matter of imposed assumptions and question-begging, but empirically based inference to best current explanation across known causal factors.
i] If MF is able to, he can cite a case of FSCI where we know directly that the cause is not intelligent. [And this includes that chance + necessity acting on an arbitrary initial configuration can and does form FSCI.]
j] Similarly, he can show us that a fourth causal factor beyond the observed chance, mechanical necessity and intelligence is at work, and/or that he can credibly reduce, say, intelligence to chance + necessity.
k] But instead of doing the level-playing-field thing, he has in effect tried to assert away the issue from inference to empirically anchored best explanation, to suggest that questions are being begged.
8 --> He only succeeds in underscoring that there is no good counter to the triple observation that (a) FSCI is a known product of intelligence, (b) is only known to be so produced, and (c) is credibly beyond the reach of known non-intelligent causal factors alone or in concert.
9 --> Thus, FSCI plainly stands as a reliable sign of intelligence, including in cases where we do not directly know the causal story; e.g. origin of DNA-based cellular life.
10 --> And THAT is what best explains the stridency of materialistic objections to it.
GEM of TKI
kairosfocus, July 27, 2009 at 02:40 AM PDT
Nakashima-San: Re:
Next unanswered question - I give your C*S*B procedure an array of data 800*600*24 bits, scanned from a photo of Mt Rushmore, ca 1925. You say the procedure should infer design. This is not a false positive because of the nature and structure of the photograph. What do you mean by nature and structure, and how would that apply to a photo of TV static? (I know I am dating myself because in the new era of digital TV there is no static.)
Actually, I addressed this one already. Once more unto the breach:
1 --> A scanned photograph shown on a PC screen will manifest FSCI in the image, once modest perturbation will destroy the image.
2 --> In the case of Mt Rushmore circa 1925, a fairly modest degree of noise will disrupt the recognisability of the particular mountain in view; then, beyond a further modest threshold, that there is a specific image at all will be lost in the on-screen snowstorm. [And, there is such a thing as a disrupted digital image; I guess we see a lot more of that out here in the Caribbean (esp. when Cricket is "on" . . . ) than you do in Japan, doubtless.]
3 --> And, in speaking of the nature and structure of the photograph, I am alluding to the fact that there is a certain silver depositional pattern based on the optical information impinging on the film where the picture was taken.
4 --> On developing, we have a specific image of Mt Rushmore at a particular moment, and not of something else (as would happen if the camera were to have accidentally opened up and spoiled the exposed film).
5 --> This is now a reference point, and we can see that we now have a definite function: photo of Mt Rushmore, circa 1925.
6 --> Scan and put on screen, and we will have a further digitised version (strictly, the Ag particle pattern in the film is a digital image, just with the pixels at random rather than in a grid).
7 --> Inject white noise to varying levels and we will see increasing disruption of function, to the degree where beyond a certain point no discernible image of a particular object is observable. Further noise will just mush the snowstorm around, with no material difference in what is there.
8 --> That is, in the case of such a screen-full of "snow," moderate random changes (as previously discussed in raising e.g. steganography) will not materially affect the overall image of a screen-full of snow.
9 --> That is, there is not an islands-of-function topology in this case. The general complexity [we doubtless have more than 1,000 bits of info storage capacity] is not functionally specific, so FSCI does not apply: F = 0.
10 --> If we, however, were to make one particular screen-full of snow a standard and do bitwise comparisons to it, we would see a difference that can be used functionally, but that is where we have now made a reference target. (This can be used to make up, say, a one-time message pad.)
11 --> The centrality to FSCI of a topology of islands of function, in a config space dominated by non-functional configs that thus exhausts the unaided search capacity of the observable universe, will be plain. GEM of TKI
kairosfocus, July 27, 2009 at 02:06 AM PDT
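The perturbation test in the comment above can be illustrated numerically. The sketch below is our construction, not kairosfocus's procedure: it injects increasing white noise into a smooth stand-in "photo" and into a screen-full of random "snow", and tracks a simple structure statistic. The structured image rapidly loses its distinguishing character, while the snow's statistic barely moves, since noise-on-noise is still just noise.

import numpy as np

rng = np.random.default_rng(0)

def roughness(img):
    # Mean |difference| between horizontally adjacent pixels: low for smooth,
    # structured images; about 0.33 for uniform white noise in [0, 1).
    return float(np.mean(np.abs(np.diff(img, axis=1))))

def perturb(img, level):
    # Replace a fraction `level` of pixels with uniform random values.
    out = img.copy()
    mask = rng.random(img.shape) < level
    out[mask] = rng.random(int(mask.sum()))
    return out

# Stand-ins: a smooth gradient "photo" and a pure-noise "snow" screen.
xx, yy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 150))
photo = 0.5 + 0.5 * np.sin(6 * xx) * np.cos(4 * yy)   # values in [0, 1]
snow = rng.random((150, 200))

for level in (0.0, 0.1, 0.5, 0.9):
    print(level,
          round(roughness(perturb(photo, level)), 3),
          round(roughness(perturb(snow, level)), 3))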
Onlookers: The desperation to dismiss the implications of FSCI has now reached a climax in which ideology is triumphing over easily accessible and long since adequately discussed facts and their implications:

1] BB, 284: Despite numerous requests, a blanket refusal to demonstrate how FSCI can be calculated for a specific example

Excuse me! What then of the metrics in # 177 above [and the always linked], and even in the Weak Argument Correctives at 28? Have we not seen that any string of ASCII characters of at least 143 length that constitutes contextually responsive English will be an example? Similarly for computer programs? The case of a PC screen full of information? And of course, the case that triggers all of this dismissal: observed DNA etc in the cell. (As to the notions that I do not understand hill-climbing algorithms on differential "fitness" measures, or do not understand biology etc., that boils down to the fact that I disagree with the conventional wisdom; so much easier to dismiss than to address the serious epistemological issues as already linked. FYI, while I do confess to being a penitent sinner under reconstruction and reformation, I am neither ignorant nor insane nor imbecilic on the relevant matters, pace Dawkins et al.)

2] . . . resorts to rhetorical claims and gestures towards products of human design with claims of 'Look, FSCI, it's obvious, onlookers'

On the contrary, having given specific cases of FSCI, we have shown that in the cases where we do independently know the cause of FSCI, it is intelligence. This means that FSCI is known to be produced by intelligence, and that there are no cases where it has been observed to be spontaneously produced by nature acting freely and without direction, by chance + necessity. (Given the threshold configuration space of 2^1,000 states and an observed universe that cannot access as many as 2^500 atomic states across its lifespan, this is also just what analysis of the challenge of search will tell us.) In short, here BB is distorting and dismissing the scientific inference from the fruit of widespread and reliable observation to empirically warranted inductive generalisation. But such dismissal does not shift the balance: on inference to best and empirically well-supported explanation, FSCI is a reliable sign of intelligence. The real problem: if FSCI is a reliable sign of intelligence, and DNA exhibits it, it is best explained as designed. Which is unacceptable to those committed to a priori Lewontinian materialism imposed on science.

3] 298: From a scientific point of view we don't yet know what was required for life to arise, so we can't measure its information content and determine its FSCI.

On the contrary, we do know that complex, functional, information-rich, algorithm-implementing organisation must have arisen for life as we observe it to exist. For such cell-based life has these features. Just so, we can easily enough observe the information-carrying capacity of DNA, etc., and observe both function and vulnerability to modest perturbation. DNA for minimally complex independent life is about 600 - 1,000 k bits, or a search space well beyond that of 1 k bits. In short, we here see FSCI and know that undirected chance + necessity is credibly unable to successfully sample the search space, on the gamut of probabilistic resources accessible in our observed cosmos. But we also know that DNA is algorithmically functional, based on digital information. This class of entity is routinely produced by intelligent agents, though we have not as yet mastered the arts to create something like DNA. (And BTW, hill-climbing algorithms do not address the real challenge here: to get TO shores of function in vast config spaces of non-function.)

4] 302: The short version of that would seem to be - we can't tell if something has FSCI unless we know everything there is to know about it, and its history.

Strawman, set up to be knocked over. Once an entity has information-carrying capacity and exhibits observable functionality that is sufficiently specific to be vulnerable to modest perturbations [i.e. shows islands of function in a sea of possible configurations], then the object beyond a certain threshold is beyond the undirected search capacity of our cosmos. So -- and as has repeatedly been pointed out but ignored or dismissed -- FSCI is both observable and quantifiable, without reference to causal story. (The issue is that when we DO know the causal story directly, FSCI is invariably the product of intelligence, and that on search-space grounds this is just what would be expected; i.e. we have grounded an induction on best explanation from observed FSCI to its most credible cause -- intelligence, not nature acting spontaneously and without direction through forces of chance + necessity.) And, once we see FSCI we are entitled to infer to its general class of cause: design. (Thus, for very relevant instance, cell-based, DNA-driven life shows the signs that point to its origin in design. By what designer is another question, one to be answered based on other evidence that allows us to identify possible candidates and select the likeliest.)

5] 303: a good GA, when implemented on, say, a computer, would be as compact as possible, whilst a large GA (written by Microsoft, probably ;)) might contain much complexity (lots of FSCI) but not perform any better than the small GA containing less FSCI. -- By perform better in this context I mean generate more FSCI --

The genetic-algorithm-implementing program in either case will with overwhelming probability exhibit 143+ ASCII characters, and will be functional and vulnerable to modest perturbation. It also depends on a computer which is itself based on much functionally specific, complex information. So, in either case we have excellent grounds to infer, from seeing a GA program, that it was designed. And, in fact, by discussing "written by Microsoft" BB in effect acknowledges this: GA's in our observation are the product of designers. It may well be that a more efficient algorithm will take up less storage, but that is beside the point: we are looking at islands of function and thresholds beyond which the search resources of the observed cosmos cannot credibly move to shores of function by undirected chance + necessity. If a GA were to have in it less than 143 characters, the inference on search resources vs config space would be locked out, but that is because we are seeking to eliminate the chance of a false positive inference to design, and are perfectly willing to accept that the particular test for design will not be applicable to a string that is below that threshold. (Other tests such as irreducible complexity might well apply. See what happens if you knock out each character in sequence: is there a core part of the program where, once anything is knocked out, function is lost, and on restoring, function comes back? Multi-part irreducibly complex functionality at an operating point is another sign of intelligent design.)

6] The "probability" bugbear

The key issue is not PROBABILITY -- and metrics thereof -- but search space vs a known topology of islands of function. With 1,000 bits of information-carrying capacity involved in a case of observed function vulnerable to moderate perturbation, we see that the config space has 2^1,000 ~ 10.7 * 10^300 cells. The observed universe, over its lifetime, has some 10^150 or fewer possible states of its ~ 10^80 atoms. In short, viewing the universe as a search engine, it cannot sample as much as 1 in 10^150 of the possible configurations, rendering any at-random-walk-based search utterly unlikely to hit on any reasonable islands of function. We do routinely see complex functional entities that exhibit this degree of info-carrying capacity: they are not produced by chance + necessity alone, but by intelligence (which can get to shores of minimal functionality and then increment function towards peaks by, among other techniques, trial and error then improvement). But since such an otherwise reasonable inference cuts across the dominant evolutionary materialism of our day, it is stoutly resisted, as we see above.

7] The FSCI of a pen:

Is a pen functional? Does it show vulnerability to modest perturbation of information stored in it? Is the capacity of the stored information in excess of 500 - 1,000 bits? A pen does show function based on interacting parts, and is indeed vulnerable to modest perturbation [drying ink, broken nibs or ball points, broken springs and whatnot], but it does not exhibit EXPLICIT information storage. Since we see a cluster of mutually interacting parts working together at an operating point vulnerable to disruption on loss of one of a cluster of key co-adjusted parts [ink, ink storage, ink transfer to writing contact point, transfer to paper], functionality AS A MODERN PEN exhibits irreducible, more or less fine-tuned [to operating point] complexity. Thus, irreducible complexity would be the reasonable sign of intelligence to infer on. But, if one insists on whether such a pen exhibits FSCI, the issue is implicit information storage: is there hidden information in a functioning pen of at least 1,000 bits? ANS: On reverse engineering [observe: no "science stopper" here . . . ], we see a chain of decision nodes that are tied to its functionality, which could be reduced to the classic chain of binary decision nodes. If that chain can reasonably be seen as going beyond 1,000 nodes, then we could apply FSCI as a criterion to the pen. Precision of fit and co-adaptation of key parts for function at operating point [e.g. the ink, the ink storage and ink transfer mechanisms, as well as the transfer-to-paper mechanism] would allow us to make such a decision. (Similarly, 143 ASCII characters that function as contextually responsive English text embed a chain of at least 1,000 binary node decisions.) VERDICT: The typical modern pen is likely to be well above such a threshold, though something like a goose quill or stick dipped in ink would be below it. (And, to try the Berra's Blunder game of imagining a chain of "ancestry" from a stick dipped in berry juice to a Parker 51 fountain pen simply would show that a taxonomic tree illustrates commonality of structure, not ancestry without intelligent direction; as happened with the Corvette from the 1950's - 70's.)

In short, the balance on the merits is plain: FSCI is indeed a reliable sign of intelligence. And insofar as 143+ characters of ASCII text in contextually responsive English, a computer screen-full of functional information, genetic algorithm programs, pens or living cells exhibit FSCI, we have good grounds for inferring to design as the most credible cause in all of them. GEM of TKI
kairosfocus, July 27, 2009 at 01:42 AM PDT
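The search-space arithmetic in point 6] above is easy to check directly. The figures below are the comment's own (a 1,000-bit configuration space, ~10^150 universe states); the code only verifies the quoted magnitudes.

config_space = 2 ** 1000     # states in a 1,000-bit configuration space
universe_states = 10 ** 150  # states claimed above for the observed cosmos

print(len(str(config_space)))            # 302 digits: ~1.07 * 10^301
print(f"{float(config_space):.2e}")      # 1.07e+301, i.e. 10.7 * 10^300
# Fraction of the space the universe could sample on these figures:
print(f"{universe_states / config_space:.1e}")   # ~9.3e-152, under 1 in 10^150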
#299
"No, it is really quite simple and since you seem to have a hard time understanding this very simple concept, maybe you should refrain from commenting on it. Perhaps you should study some computer programming and some basic courses in English grammar."
Jerry - as always, very quick to resort to insults. However, you haven't responded to the argument. FSCI is meant to be a well-defined concept. Yet when challenged to calculate the FSCI you can only do it by making all sorts of arbitrary assumptions about the specification and the cause. If I change the assumptions I come up with a different value. Are you saying those assumptions are not required? Or do you have a justification for them?
Mark Frank, July 26, 2009 at 10:12 PM PDT
Mr Charrington,
"Clive, No. Why?"
Why not?
Clive Hayden, July 26, 2009 at 08:02 PM PDT
Kairosfocus @ 114
For those troubled by the issue on whether or not I am in agreement with Dembski
Actually, I rather asked if Dr. Dembski agrees with you. BTW, what do you think about Jerry's FSCI definitions? Are they compatible with yours?
sparc, July 26, 2009 at 07:48 PM PDT
Bill, You say things that are demonstrably incoherent, then ignore that anyone noticed. Great.
Upright BiPed, July 26, 2009 at 03:21 PM PDT
A general point - a good GA, when implemented on, say, a computer, would be as compact as possible, whilst a large GA (written by Microsoft, probably ;)) might contain much complexity (lots of FSCI) but not perform any better than the small GA containing less FSCI. -- By perform better in this context I mean generate more FSCI -- Shouldn't a 'GA' implemented in, let's say, a universe, by an all-powerful God be as compact as possible, and therefore contain almost no FSCI whilst generating almost infinite quantities of the stuff?
BillB, July 26, 2009 at 03:05 PM PDT
jerry:
Now maybe in some other language or by using some encryption technique we can relate the second string of information to something else. If that is true, then that string is also FSCI.
The short version of that would seem to be - we can't tell if something has FSCI unless we know everything there is to know about it, and its history. A GA on its own is useless, just a description. You need some mechanism to operate according to the rules laid out in the algorithm. This could be a computer, a man with pen, paper and an abacus, or a naturally occurring set of environmental conditions. How do you work out how much of the FSCI in all the entities that support the running of a particular instantiation contributes to the FSCI generated by the process? And if the process then affects or adds to the FSCI in the entities that sustain it, what will happen? Will the FSCI go into exponential growth?
BillB, July 26, 2009 at 02:55 PM PDT
Joseph, Where does measuring what it took to cause an object to come into existence stop? In the case of a pen, do we need to account for the information in all the causes of all the objects required to cause the pen, and all those causes as well? If we do, then ultimately all objects trace back to the origin of the universe, so your method requires an almost infinite regression back to things we cannot observe. Not a good basis for a scientific technique.
BillB, July 26, 2009 at 02:32 PM PDT
UB
In his rush to mock KF, our friend Billie passed over a post directed at him which shows his objection to algorithmic DNA was nothing but a cheap distraction. Apparently, his fragility is overwhelmed by the implications.
I'm sure KF isn't that fragile, and I would hesitate to use the word cheap against him, so his distractions are probably rooted in confusion about the difference between a simulation and a simulator.
BillB, July 26, 2009 at 02:20 PM PDT
""The point is that FSCI is meant to be a well defined concept. But you can only make it well defined by including a lot of arbitrary assumptions about both the target and the context in which the outcome is generated. No, it is really quite simple and since you seem to have a hard time understanding this very simple concept, maybe you should refrain from commenting on it. Perhaps you should study some computer programming and some basic courses in English grammar.jerry
jerry, July 26, 2009 at 01:49 PM PDT
Joseph,
One would measure the information in an object by determining what it took to bring said object into existence.
Thanks, you have just shown how the concept can't ever demonstrate design in nature. From a scientific point of view we don't yet know what was required for life to arise, so we can't measure its information content and determine its FSCI. From an ID perspective we have no knowledge of the creator, so we are unable to determine what was required for it to bring life into existence. Therefore, according to you, we have no way of measuring the FSCI in life unless we can uncover a material, knowable cause.
BillB, July 26, 2009 at 12:48 PM PDT
Re #292 The point is not how low the number is. We all know that any outcome can be described in such a way that the probability of achieving that outcome is as low as you like. The point is that FSCI is meant to be a well defined concept. But you can only make it well defined by including a lot of arbitrary assumptions about both the target and the context in which the outcome is generated. I could, for example, assume that the target is any string which will effectively give a message of some kind to a recipient (i.e. the function is "gives a message"). But that in itself depends on the context. In the right context almost any string of characters gives a message.
Mark Frank, July 26, 2009 at 11:46 AM PDT
"Surely some of these assumptions, and hence your calculation, need some justification?" Not really. You can nibble around the edges but it will not change anything of consequence. Your comment used over 800 characters and expanded the mix to include numbers and 4 or 5 other characters. So we have 42^800 possibilities. Now using some nibbling here and there I bet you could get it down to 35^750 or some other even lower number of which your post is just one possibility (if one considers 35^750 a low number but it is much lower than 42^800 by mucho magnitudes.) So your post is both very complex and very rare. Why don't you take a crack at it and see how low a number you can get.jerry
jerry, July 26, 2009 at 11:29 AM PDT
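Jerry's character-counting can be restated in bits, which is how the 500 - 1,000 bit threshold elsewhere in this thread is expressed. The conversion below is a worked example of ours; the alphabet sizes and string lengths are taken from the commenters' own figures (42^800 and 35^750 above; jerry's corrected 27^22 appears further down the page, the thread running newest-first).

import math

def capacity_bits(alphabet_size, length):
    # A^L possible strings = L * log2(A) bits of raw storage capacity.
    return length * math.log2(alphabet_size)

print(round(capacity_bits(27, 22), 1))    # jerry's 27^22 example: ~104.6 bits
print(round(capacity_bits(42, 800), 1))   # jerry's 42^800: ~4313.9 bits
print(round(capacity_bits(35, 750), 1))   # the "nibbled down" 35^750: ~3847.0 bits
print(round(capacity_bits(128, 143)))     # KF's 143 ASCII characters: 1001 bits

On these figures, the 22-character string sits far below the 500 - 1,000 bit threshold discussed upthread, while the 800-character post and KF's 143-character line sit above it.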
Interesting. In his rush to mock KF, our friend Billie passed over a post directed at him which shows his objection to algorithmic DNA was nothing but a cheap distraction. Apparently, his fragility is overwhelmed by the implications. He was, however, able to suggest to KF that he put his money where his mouth was.
Upright BiPed, July 26, 2009 at 09:15 AM PDT
#289 Oh I see - a blank and 26 characters. This calculation makes the following assumptions:
* There is a mechanism for producing character strings
* All characters are equally likely to be selected
* Uppercase characters, characters from other alphabets, special characters, mathematical characters etc. are not available
* Probability of selecting a character is independent of characters already selected
* The attempt to produce the string happened just once
* The specification is exactly this string (or possibly the mixed-case version of the string) - not, e.g., strings of 22 characters that make sense in English, or strings of 22 characters that make sense in some language, or strings of some length that achieve the same end
Surely some of these assumptions, and hence your calculation, need some justification?
Mark Frank, July 26, 2009 at 09:02 AM PDT
Joseph
One would measure the information in an object by determining what it took to bring said object into existence.
Would you not also have to measure the information in the other things that it took to bring an object into existence? The information it took to make the hammer that was used to flatten the metal that was used to make the tie clip?
That said for a GA just count the bits it contains and that would give you the minimum amount of information (SI) it takes.
So FSCI = file size? Is that compressed or uncompressed? Is the FSCI value different for compressed or uncompressed bits? What about the working memory that the GA would use when it is running? Is that counted? What about information it generates as it executes? Is that included also? Does the FSCI change at all, or is it fixed?
The same goes for a pen.
When you put it like that, it sounds simple. Can you do it for a pen then, or are you limited to claiming that it's possible in theory only? What is the value of the FSCI in a pen, please? Or any other example of a real physical object you would like to give would be great. Other than simply counting the characters in a text message (KF's "millions of examples all over the internet" get-out), I've never seen a value put on an everyday object for its FSCI. Such as, to take a totally random example, a softball. Or a simple pen. Can it even be done?
Mr Charrington, July 26, 2009 at 08:37 AM PDT
Joseph
It proves you are arguing from ignorance.
Then the thing to do would be to dispel my ignorance by answering my simple questions. Let me put it another way: what is the smallest value of FSCI that an object can have? If I ask for an object with 1 bit of FSCI, I'm ignorant. What's the minimum then? 2? 10? 100? 500? If I ask for an object with 499, 500 or 501 bits of FSCI, I'm ignored. Jerry:
There is no FSCI in a pen
And yet if you found a pen on the heath you would believe it to be a designed object. How can a designed object contain no FSCI? If it does have FSCI, then presumably it has over 500 bits of it, as it's a designed object. How many bits of FSCI does that pen have, if it has any at all?
Mr Charrington, July 26, 2009 at 08:30 AM PDT
"I am intrigued. How did you get 27^21" The usual way. Two corrections, 27^22 and “ivjioe kjfe faod tm ql"jerry
jerry, July 26, 2009 at 08:14 AM PDT
Mr Charrington, When you say things like "1 bit of FSCI?" it proves you are arguing from ignorance. So why do you choose to do so? I say it is because you cannot support your position and have no desire to learn what it is you are arguing against.
Joseph, July 26, 2009 at 08:04 AM PDT
BillB, One would measure the information in an object by determining what it took to bring said object into existence. That said, for a GA just count the bits it contains; that would give you the minimum amount of information (SI) it takes. The same goes for a pen. The bottom line is that it is a measurement. Information. The information age. Information technology. Information theory. When IDists speak of complex specified information they are using it in the following sense: information - the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (as nucleotides in DNA or binary digits in a computer program) that produce specific effects. When Shannon developed his information theory he was not concerned about "specific effects". It is producing those specific effects which makes the information specified! And that is what separates mere complexity from specified complexity.
Joseph, July 26, 2009 at 08:00 AM PDT
#285 I am intrigued. How did you get 27^21?
Mark Frank, July 26, 2009 at 07:52 AM PDT
I made an error in the last post and it should be 27^21. If anyone else wants to correct the math, go right ahead, but the basic idea is there.
jerry, July 26, 2009 at 06:41 AM PDT
"the FSCI in a GA, in a pen, a rock or anything else for that matter." There is no FSCI in a pen or a rock though I could imagine how some intelligence might make it so. I assume it is a normal pen. In a GA, just use the letters or individual units of code and do an analysis such as the the amount of variation in an English sentence. "Methinks I am contrary" as opposed tp "ivjioe kjfe faod tm q" or 2^21 for each. Neither would be FSCI except there exist an independent mechanism to relate one to something else. Both of these other entities (that which does the relating and that which being related to) are completely independent of the initial entity or data set which is the source of information. Now maybe in some other language or by using some encryption technique we can relate the second string of information to something else. If that is true, then that string is also FSCI.jerry
jerry, July 26, 2009 at 06:38 AM PDT
KF-san, Thanks for the replies. I brought up Second Life because it is a model, just as we have been talking about on this thread. Since you are on record as supporting a position on simulation, I was attempting to use a reference to a well-known simulation to clarify your position. As you say here, an earthquake underneath Linden Labs might temporarily shut down the simulation. Just prior to that, would the simulated ground be shaking? This is all relevant to your contention that the results of a GA can be discounted because additional sources of error were not introduced into the operating system and hardware. Next unanswered question - I give your C*S*B procedure an array of data 800*600*24 bits, scanned from a photo of Mt Rushmore, ca 1925. You say the procedure should infer design. This is not a false positive because of the nature and structure of the photograph. What do you mean by nature and structure, and how would that apply to a photo of TV static? (I know I am dating myself, because in the new era of digital TV there is no static.)
Nakashima, July 26, 2009 at 04:33 AM PDT
