Uncommon Descent Serving The Intelligent Design Community

An Eye Into The Materialist Assault On Life’s Origins


Synopsis Of The Second Chapter Of Signature In The Cell by Stephen Meyer

ISBN: 9780061894206; ISBN10: 0061894206; HarperOne

When the 19th-century chemist Friedrich Wöhler synthesized urea in the lab using simple chemistry, he set rolling the ball that would ultimately knock down the then-pervasive ‘vitalistic’ view of biology.  Life’s chemistry, rather than being bound by immaterial ‘vital forces’, could indeed be made artificially.  While Charles Darwin offered little insight on how life originated, several key scientists would later jump on Wöhler’s ‘Eureka’-style discovery with public proclamations of their own ‘origin of life’ theories.  The ensuing materialist view was espoused by the likes of Ernst Haeckel and Rudolf Virchow, who built their own theoretical suppositions on Wöhler’s triumph.  Meyer sums up the logic of the day:

“If organic matter could be formed in the laboratory by combining two inorganic chemical compounds then perhaps organic matter could have formed the same way in nature in the distant past” (p.40)

Darwin’s theory generated the much-needed fodder to ‘extend evolution backward’ to the origin of life.  It was believed that “chemicals could ‘morph’ into cells, just as one species could ‘morph’ into another” (p.43).   Appealing to the apparent simplicity of the cell, late 19th-century biologists assured the scientific establishment that they had a firm grasp of the ‘facts’: cells were, in their eyes, nothing more than balls of protoplasmic soup.   Haeckel and the British scientist Thomas Huxley were the ones who set the protoplasmic theory in full swing.  While the details expounded by each man differed somewhat, the underlying tone was the same: the essence of life was simple, and thereby easily attainable through a basic set of chemical reactions.

Things changed in the 1890s.  With the discovery of cellular enzymes, the complexity of the cell’s inner workings became all too apparent, and a new theory had to be devised, one that no longer relied on an overly simplistic protoplasm-style foundation, albeit one still bounded by materialism.  Several decades later, finding himself in the throes of a Marxist socio-political upheaval within his own country, Russian biologist Aleksandr Oparin became the man for the task.

Oparin developed a neat scheme of inter-related processes involving the extrusion of heavy metals from the earth’s core and the accumulation of reactive atmospheric gases, all of which, he claimed, could eventually lead to the making of life’s building blocks: the amino acids.  He extended his scenario further, appealing to Darwinian natural selection as a way through which functional proteins could progressively come into existence.  But the ‘tour de force’ in Oparin’s outline came in the shape of coacervates: small, fat-containing spheroids which, Oparin proposed, might model the formation of the first ‘protocell’.

Oparin’s neat scheme would in the 1940s and 1950s provide the impetus for a host of prebiotic synthesis experiments, the most famous of which was that of Harold Urey and Stanley Miller, who used a spark-discharge apparatus to make three amino acids: glycine, alpha-alanine and beta-alanine.  With little more than a few gases (ammonia, methane and hydrogen), water, a closed container and an electrical spark, Urey and Miller had seemingly provided the missing link for an evolutionary chain of events that now extended as far back as the dawn of life.  And yet, as Meyer concludes, the information revolution that followed the elucidation of the structure of DNA would eventually shake this materialistic bedrock.

Meyer’s historical overview of the key events that shaped origin-of-life biology is extremely readable and well illustrated.  Both the style and the content of his discourse keep the reader focused on the ID thread of reasoning that he gradually develops throughout his book.

Comments
KF-san, If you are interested in self-replicators, please look into Sayama-sensei's Evoloops. You can get a link from the cellular automata Wiki page. A great example of intelligent design creating a universe perfectly tuned for life! But again, simply repeating an assertion doesn't make it true. How do we measure function in the pre-biotic world? Upon what objects are we measuring function? Until you can answer these questions you can't say anything about the reality of islands of function. Appealing to life today is not helpful.
Nakashima
July 28, 2009, 05:11 AM PDT
Hot off the Presses: Abel strikes again: http://www.bioscience.org/2009/v14/af/3426/3426.pdf Tellingly relevant excerpt: ____________ All known organisms are prescribed and largely controlled by information (1-22). Most biological prescriptive information presents as linear digital programming (23-26). Living organisms arise only from computational halting. Fittest living organisms cannot be favored until they are first computed. Von Neumann, Turing and Wiener all got their computer design and engineering ideas from the linear digital genetic programming employed by life itself (27-32). All known life is cybernetic (33-35). Regulatory proteins, microRNAs and most epigenetic factors are digitally prescribed (3). MicroRNAs can serve as master regulators of gene expression (36-38). One microRNA can control multiple genes. One gene can be controlled by multiple microRNAs. Nucleotides function as physical symbol vehicles in a material symbol system (MSS) (39-41). Each selection of a nucleotide corresponds to pushing a quaternary (four-way) switch knob in one of four possible directions. Formal logic gates must be set that will only later determine folding and binding function through minimum-free-energy sinks. The most perplexing problem for evolutionary biology is to provide a natural mechanism for setting functional configurable switch-settings at the genetic level. These logic gates must be locked in open or closed positions with strong covalent bonds prior to folding of biopolymers. At the point of polymerization of informational positive single strands, no selectable three-dimensional shape exists for the environment to favor. In addition, the environment does not select for isolated function. The environment only selects for fittest already-living organisms. The challenge of finding a natural mechanism for linear digital programming extends from primordial genetics into the much larger realm of semantics and semiotics in general. Says Barham: "The main challenge for information science is to naturalize the semantic content of information. This can only be achieved in the context of a naturalized teleology (by 'teleology' is meant the coherence and the coordination of the physical forces which constitute the living state).” (42) The alternative term “teleonomy” has been used to attribute to natural process “the appearance of teleology” (43-45). Either way, the bottom line of such phenomena is selection for higher function at the logic gate programming level. ______________ GEM of TKI
kairosfocus
July 28, 2009, 05:04 AM PDT
Kf-san, No, I haven't seen a photograph produced by unaided chance and necessity, unless of course humans are the result of chance and necessity, nature operating freely. In that case, I suppose that all photographs could fall into that category. (But as an aside, I was looking at a book of alternative photographic techniques recently that did show examples of using large leaves as the 'film', taking advantage of the differential degradation of sugars in shadowed portions vs lighted portions.) Again, you are basing your response on the least important aspect. Rhetorically, you should be trying to address the most important point, not the least. This is not an objection that you raise against your own examples. Allowing this objection simply means that all experiment is invalid and/or FSCI is useless.
Nakashima
July 28, 2009, 05:01 AM PDT
New business: The updated point 6 on the FSCI simple metric, in light of the waves of objections above: _______________ 6 --> . . . we can construct a rule of thumb functionally specific bit metric for FSCI: a] Let contingency [C] be defined as 1/0 by comparison with a suitable exemplar, e.g. a tossed die that on similar tosses may come up in any one of six states: 1/ 2/ 3/ 4/ 5/ 6. That is, diverse configurations of the component parts or of outcomes under similar initial circumstances must be credibly possible. b] Let specificity [S] be identified as 1/0 through specific functionality [FS] or by compressibility of description of the specific information [KS] or similar means that identify specific target zones in the wider configuration space. [Often we speak of this as "islands of function" in "a sea of non-function." (That is, if moderate application of random noise altering the bit patterns will beyond a certain point destroy function [notoriously common for engineered systems that require working parts mutually co-adapted at an operating point, and also for software and even text in a recognisable language] or move it out of the target zone, then the topology is similar to that of islands in a sea.)] c] Let degree of complexity [B] be defined by the quantity of bits to store the relevant information, with 500 - 1,000 bits serving as the threshold for "probably" to "morally certainly" sufficiently complex to meet the FSCI/CSI threshold. d] Define the vector {C, S, B} based on the above [as we would take distance travelled and time required, D and t: {D, t}], and take the element product C*S*B [as we would take the element ratio D/t to get speed]. e] Now we identify the simple FSCI metric, X: C*S*B = X, the required FSCI/CSI-metric in [functionally] specified bits. Once we are beyond 500 - 1,000 functionally specific bits, we are comfortably beyond a threshold of sufficiently complex and specific functionality such that the search resources of the observed universe would by far and away most likely be fruitlessly exhausted on the sea of non-functional states if a random walk based search (or generally equivalent process) were used to try to get to shores of function on islands of such complex, specific function. [WHY: For, at 1,000 bits, the 10^150 states scanned by the observed universe acting as search engine would be comparable to: marking one of the 10^80 atoms of the universe for just 10^-43 seconds out of 10^25 seconds of available time, then taking a spacecraft capable of time travel and at random going anywhere and "any-when" in the observed universe, reaching out, grabbing just one atom, and voila: that atom is the marked atom at just the instant it is marked. In short, the "search" resources are so vastly inadequate relative to the available configuration space for just 1,000 bits of information storage capacity that debates on "uniform probability distributions" etc. are moot: the whole observed universe acting as a search engine could not put up a credible search of such a configuration space. And, observed life credibly starts with DNA storage in the 100's of kilobits of information storage. (100 k bits of information storage specifies a config space of order ~ 9.99 * 10^30,102; which vastly dwarfs the ~ 1.07 * 10^301 states specified by 1,000 bits.)] 7 --> For instance, for the 800 * 600 pixel PC screen, C = 1, S = 1, B = 11.52 * 10^6, so C*S*B = 11.52 * 10^6 FS bits. This is well beyond the threshold.
[Notice that if the bits were not contingent or were not specific, then X = 0 automatically. Similarly, if B < 500, the metric would indicate the bits as functionally or compressibly etc. specified, but without enough bits to be comfortably beyond the UPB threshold. Of course, the DNA strands of observed life forms start at about 200,000 FS bits, and that for forms that depend on others for crucial nutrients. 600,000 - 10^6 FS bits is a reported reasonable estimate for a minimally complex independent life form.] ______________ One hopes that, along with my first comment for today, this will provide adequate clarifying and corrective information. The relevance to the fact that complex, specifically functional information is now a recognised, clear fundamental constituent of life should be plain. And, onward in light of the von Neumann conditions for self replication -- an active self replicator has to embed blueprint, blueprint reader and blueprint executor to self-replicate -- the relevance of the resulting need to get to islands of function in a sea of non function to the OOL challenge should be even more massively evident. Abel's remarks as excerpted by Upright simply underscore the consequent predicted and observed empirical realities. GEM of TKI
kairosfocus
July 28, 2009, 03:44 AM PDT
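For concreteness, here is a minimal sketch of the rule-of-thumb metric described in the comment above, written in C# since that is the language used elsewhere in this thread. C and S are observer-supplied 1/0 judgments rather than computed quantities, and all names here are illustrative, not from any published implementation:

using System;

class FsciMetricSketch
{
    // X = C * S * B, per the rule of thumb above. C (contingency) and S
    // (specificity) are 1/0 judgment calls; B is storage capacity in bits.
    static double X(int c, int s, double b)
    {
        return c * s * b;
    }

    static void Main()
    {
        double b = 800 * 600 * 24;  // the 800 x 600 screen at 24 bits per pixel
        double x = X(1, 1, b);      // judged contingent and specific: C = S = 1
        Console.WriteLine("X = {0:N0} functionally specific bits", x);

        // The 1,000-bit threshold corresponds to 2^1000 ~ 1.07 * 10^301 states.
        Console.WriteLine("2^1000 = {0:E2}", Math.Pow(2, 1000));
        Console.WriteLine(x > 1000 ? "beyond the threshold" : "below the threshold");
    }
}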
PPPS: Rob, re 324: the C program's text contains FSCI. Similarly, the screen that displays the output contains FSCI. The C program need not be run on any given system, and a given screen need not show only the program, i.e. the cases are not mutually dependent. We have two coincident instances of functionally specific, complex information: [a] a program that carries out a particular function, and [b] a screen that shows content fed to it by an operating system. That you as a highly informed person should seek to conflate the two suggests a manufactured strawman, not a serious question or objection.
kairosfocus
July 28, 2009, 03:34 AM PDT
PPS: Upright: Excellent rebuttal.
kairosfocus
July 28, 2009, 03:25 AM PDT
3] Ideological warfare by rhetorical attrition: In the 1st World War on the Western Front, the Allied powers were forever seeking a breakthrough, and doing so by throwing waves of men at well organised German defenses backed by Krupp's version of the Maxim machine gun and quick-firing artillery. Eventually, after the Allies had suffered millions of casualties and the French army had mutinied in 1917, the Germans were, over the next year, simply exhausted. (Of course, the cost to the "victorious" Allies was in the long term so ruinous that they were not able to buck up in time to stand up to Hitler's thirst for a rematch.) Something fairly similar to this is going on in this blog: a --> There are relatively few well informed ID commenters capable of rapidly rebutting endlessly repeated versions of "standard" distractions, strawmannish distortions and denigrations led out to rhetorical dismissals. (I won't do more just now than point out that such tactics are fundamentally corrosive to the civility and good sense that must underlie a sustainable free civilisation.) b --> But, if one is willing to deploy endless rhetorical waves of such red herrings led out to strawmen soaked in ad hominems and ignited to cloud, poison and polarise the atmosphere [never acknowledge the force of corrective counter-arguments, just launch yet another rhetorical wave . . . ], then eventually one can wear down those who would have to rebut, not through the merits but by sheer weight of numbers and rhetoric. c --> At that time, the APPEARANCE of victory on the merits can be put up, and in our day, perception is often more important in the short term than reality. (Long term, there is a terrible price to be paid, and believe you me, the historical exemplar of the impact of the first wave of Islamist expansionism on disaffected Christian populations in Syria and Egypt, resentful of Byzantine domination and oppression, is a sobering warning. There was a REASON why these areas fell to Islam so fast and so easily! Sadly, out of the frying pan into the fire . . . ) d --> Beyond a certain point, if the rhetorical waves are allowed to pound away, UD is going to be overwhelmed by endless waves of long since answered fallacies [just cf. the Weak Argument Correctives], reaching a point where any original post will at once be swarmed under by a wave of misleading arguments. And, naive onlookers will be primed to simply go to the objections to see the "answer." e --> Indeed, there are several recent threads at UD that have already been "overwhelmed." f --> In short, there is need for a very different counter-rhetorical strategy for UD and other similar sites. (The old one of simply banning those who insist on inane or too obviously uncivil remarks led to accusations of "censorship." We cannot revert to that; though a few exemplary cases do merit such banning.) g --> I therefore suggest that it is time to deploy not just a set of weak argument correctives and a brief glossary but, at minimum, highlighted links to adequate tutorials across the range of ID studies, constituting an ID 101 with actual FAQ's addressing not just rhetorical dismissals and distortions, but the need for basic information. [A good start to that would be a critical review of the Wikipedia page on ID.] h --> This should be augmented by links to major ID papers and works on the net, including where relevant Google Books online.
i --> I also advocate for a fresh start on origins science education, one that will break the evolutionary materialist monopoly and prepare a new generation for breaking out of the Lewontinian version of Plato's cave with its shadow shows based on so many misleading icons. [A wiki-based set of tutorials covering underlying issues, cosmology, origin of life, origin of biodiversity, origin of mind and origins science in society would I think do a lot of good. Not least by simply breaking the monopoly out there.] j --> I believe this will also help redress the manpower imbalance at UD and elsewhere. +++++++++++++ GEM of TKI PS: Again, Nakashima-San: have you ever seen a photograph produced by unaided chance + necessity? Why or why not?
kairosfocus
July 28, 2009, 03:23 AM PDT
Footnotes: First, thanks Upright for taking time to bring Abel's excellent work to bear. His work, in significant part, is in effect a technical-level version of the ideas descriptively summarised under the term "functionally specific, complex information." Observe, onlookers: again, there is no effective response on substance. (And, FYI Rob, a pen does not explicitly contain FSCI; I addressed first that it is a complex, functionally co-ordinated object with a core that exhibits irreducible complexity -- as is common for engineered systems. Indeed, Darwinian-type "spontaneous, ratcheting, hill-climbing" processes are deeply challenged to get to such IC systems, directly or indirectly. Such a core will have in it a cluster of decisions to form components and integrate them at an operating point. In turn, that can be turned into a chain of decisions which are expressible in binary sequential form, i.e. we can assess that there will be IMPLICIT functional sequence complexity -- or at a simpler level, FSCI -- associated with such an object. But, an item can be irreducibly complex without being beyond the 1,000-bit threshold, and such irreducible complexity is already strong evidence of design.) Next, there is one point of unfinished business I wish to address before doing anything else: 1] On islands of functionality:
[BB, 314:] I would say though, and to mirror what others have said, that the whole notion of seas and islands is poor when factoring in a pre-biotic universe. At best you should consider the config space to include the ocean floor and simply place sea level as a slightly arbitrary demarcation point between complex chemistry and self replicating systems. The ’search’ that occurs in a universe is simply a mass collection of shifting configurations, some of which may be very close to these ’shores of function’.
1 --> The implicitly conceded point in this objection is that once we DO have islands of function in a sea of non-function, we then have a challenge to first get to shores of function before we can properly use hill-climbing ratchets to get to peaks of function. 2 --> That is why there is an attempt to extend the slope below the "non-functionality" sea level. (There was also an attempt to dismiss the fact that the search conducted by the atoms of the cosmos acting across its lifespan specifies an upper bound on search. But 10^80 or so atoms changing state every 10^-43 seconds and doing so for 10^25 seconds is a reasonable upper bound on cosmic search: 10^150 "moves.") 3 --> Now, as I have repeatedly pointed out above [most recently by showing how noise would corrupt a photo of Mt Rushmore, but once we are in a snowstorm, further noise will simply move us around in the sea of non-images], such an islands-of-function configuration space topology is COMMON and quite reasonable to expect with complex functional systems. 4 --> When it comes to observed life, we first see that it is an actively self-replicating entity -- not like a crystal that grows passively by inter-atomic or inter-molecular forces. 5 --> As von Neumann pointed out in the 1940's (this is a proto-ID prediction!), such a system will require not only general operating machines, but a stored blueprint and a self-assembling factory that reads and uses it to copy itself; rendering such an entity that incorporates self-replication even more complex than one that does not. [Indeed, the much derided William Paley reflected on that, speaking about a self-assembling watch.] 6 --> Thus, such an actively self-replicating entity is necessarily based on functionally specific and complex information, once the function is vulnerable to perturbation; which is notoriously true. (Just think about what radiation damage does to cells, esp. as the level is gradually turned up: at first, repairable [we have to function in an environment in which minor damage is a commonplace], then it triggers cancers, then it simply wipes out the cells -- which is what radiation sickness is about.) 7 --> In short, life -- including hypothesised first life that is based on empirical evidence of how life exists and operates -- manifests an islands-of-function-in-a-sea-of-non-function topology. 8 --> That is, until metabolic, genetic/info storage and self-replicating subsystems are appropriately constructed and integrated, you do not have life. There is a shoreline of function, and beyond, a sea of non-function. 9 --> Thus, the attempt to rhetorically extend the island of function by appealing to a sloping ocean floor fails. Until you are on the shoreline, you have no basis for empirically credible life, which is why the main schools of thought on OOL, as exemplified by Shapiro and Orgel, have mutually refuted themselves. [NB: slight update to the always linked.] In short, the bottom line is still as it has been stated above. However, a remark or two on rhetorical tactics: [ . . . ]
kairosfocus
July 28, 2009, 03:23 AM PDT
Nakashima, At the top of your post at 334, you provided a quote from Abel and then bolded the text you wanted attention drawn to.
We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses.
And then you say that you “don’t know what in Abel’s work justifies this exception”. I then removed the exception from the quote: “We repeat that a single incident of nontrivial algorithmic programming success would falsify any of these null hypotheses.” It then occurs to me that the original sentence says that the null hypotheses can be falsified if a single instance of algorithmic programming success can be found that isn’t the product of selection at the programming level… but your revised sentence says that the hypotheses can be falsified if any successful algorithmic programming can be found. Very nice, Nakashima. - - - - - - - - - - Not having done enough damage to the original sentence and its meaning, you take aim at it again in the middle of your post by suggesting that it’s “all the more odd in light of” a second quote coming from Abel’s paper.
Functional switch-setting sequences are produced only by uncoerced selection pressure. There is a cybernetic aspect of life processes that is directly analogous to that of computer programming. More attention should be focused on the reality and mechanisms of selection at the decision-node level of biological algorithms. This is the level of covalent bonding in primary structure. Environmental selection occurs at the level of post-computational halting. The fittest already-computed phenotype is selected.
You now have my interest piqued, so I went through the quote. Abel states that functional sequencing doesn’t come from coerced pressure (law-like cause-and-effect necessity). Then he highlights the analogy between biological algorithms and computer programming (they both operate from a set of selections that precede function). He then points out that environmental selection only operates on the value of the end product (but does not cause the selections that precede it). So, if I may now put your two thoughts together: It seems odd to you that Abel would want you to know his null hypotheses can be falsified by “a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level,” -and- that this is even more odd because physical laws don’t explain functional sequencing, computers and biosystems run off programs, and environmental selection selects for function after it is functioning. Also, you wish that Abel would be more consistent. Finally, at the end of your post, you say “If FSC is the product of selection, you can’t rule out selection and then declare victory.” Now, I already know that you’ve read David Abel’s work (thank you). And there is no ambiguity in that he concludes FSC can only result from the act of a volitional agent. FSC is the product of an agent selecting for function at the organizational level. Nowhere does Abel say that FSC is the product of anything else. Your thinking on this is so twisted that I can only assume you are simply saying something in order to say anything at all. - - - - - - - - - - - - In your next post at 335, you say “KF-san has given us several points and assertions we are nowhere near finished talking about.” Believe me; I am quite certain you have more to say on the matter. I think incessant is an appropriate term. The question is, will the posts be as incomprehensible as the ones you've already made.
Upright BiPed
July 28, 2009, 01:57 AM PDT
Mr BiPed, Abel's work is indeed fascinating. I wish he would participate here to talk about it with us. But in the meantime, KF-san has given us several points and assertions we are nowhere near finished talking about. What is function in the pre-biotic world? KF-san seems to know there are islands of it. Where does this knowledge come from?
Nakashima
July 27, 2009, 09:46 PM PDT
"We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses." I don't know what in Abel's work justifies this exception. All the more odd in light of this: "Functional switch-setting sequences are produced only by uncoerced selection pressure. There is a cybernetic aspect of life processes that is directly analogous to that of computer programming. More attention should be focused on the reality and mechanisms of selection at the decision-node level of biological algorithms. This is the level of covalent bonding in primary structure. Environmental selection occurs at the level of post-computational halting. The fittest already-computed phenotype is selected." Granted it's just a hash of assertion and opinion, but it would be nice if Abel were consistent. If FSC is the product of selection, you can't rule out selection and then declare victory.
Nakashima
July 27, 2009, 09:31 PM PDT
BillB, Why not address the evidence? You've read Abel's work, why not address it if you want to show ID is vacant? Why not stop with the harping over the edges of an argument with KF and just lay ID bare? That is what you want, isn't it? Don't you want the evidence for design to be shown false? What does Abel's paper have wrong - tell us specifically what that is. Does chance ever not operate at maximum uncertainty? Does any chance event ever lead to another chance event that isn't operating at maximum uncertainty? Are there any chemical affinities along the linear sequencing of DNA (where the information is)? If DNA was the product of ordered states, could it hold the amount of information it contains? Does complex coordinated function between disparate physical objects require selection at the information level, or no? Tell us, Bill. Address the evidence for ID that is already a part of the peer-reviewed scientific record - just as a novel change of pace. Why not?
Upright BiPed
July 27, 2009, 09:07 PM PDT
from David Abel... (Theor Biol Med Model. 2005; 2: 29. Published 2005 August 11. doi: 10.1186/1742-4682-2-29. PMCID: PMC1208958) ABSTRACT: Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction. Random and Ordered Sequence Complexities lie at opposite ends of the same bi-directional sequence complexity vector. Randomness in sequence space is defined by a lack of Kolmogorov algorithmic compressibility. A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order. Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). EXCERPT: What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. The fundamental contention inherent in our three subsets of sequence complexity proposed in this paper is this: without volitional agency assigning meaning to each configurable-switch-position symbol, algorithmic function and language will not occur. The same would be true in assigning meaning to each combinatorial syntax segment (programming module or word). Source and destination on either end of the channel must agree to these assigned meanings in a shared operational context.
Chance and necessity cannot establish such a cybernetic coding/decoding scheme [71]. How can one identify Functional Sequence Complexity empirically? FSC can be identified empirically whenever an engineering function results from dynamically inert sequencing of physical symbol vehicles. It could be argued that the engineering function of a folded protein is totally reducible to its physical molecular dynamics. But protein folding cannot be divorced from the causality of critical segments of primary structure sequencing. This sequencing was prescribed by the sequencing of Hamming block codes of nucleotides into triplet codons. This sequencing is largely dynamically inert. Any of the four nucleotides can be covalently bound next in the sequence. A linear digital cybernetic system exists wherein nucleotides function as representative symbols of "meaning." This particular codon "means" that particular amino acid, but not because of dynamical influence. No direct physicochemical forces between nucleotides and amino acids exist. - - - - - - -
Upright BiPed
July 27, 2009, 08:07 PM PDT
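Abel's distinction between ordered and random sequence complexity leans on Kolmogorov compressibility, and that one point can be illustrated crudely with a general-purpose compressor, using compressed length as a rough stand-in for algorithmic compressibility. The C# sketch below is only an illustration of that idea, not Abel's method, and all names in it are illustrative:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class CompressibilitySketch
{
    // Deflate-compressed length as a crude proxy for Kolmogorov complexity.
    static int CompressedLength(string s)
    {
        byte[] raw = Encoding.ASCII.GetBytes(s);
        MemoryStream ms = new MemoryStream();
        using (DeflateStream ds = new DeflateStream(ms, CompressionMode.Compress))
        {
            ds.Write(raw, 0, raw.Length);
        } // disposing the DeflateStream flushes all compressed bytes into ms
        return ms.ToArray().Length;
    }

    static void Main()
    {
        // OSC-like: law-like repetition compresses to almost nothing.
        string ordered = new string('A', 1000);

        // RSC-like: a random four-letter sequence retains ~2 bits per character.
        Random rng = new Random(1);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++)
            sb.Append("ACGU"[rng.Next(4)]);

        Console.WriteLine("ordered: {0} bytes compressed", CompressedLength(ordered));
        Console.WriteLine("random:  {0} bytes compressed", CompressedLength(sb.ToString()));
    }
}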
BillB, “when you ask ‘Does anyone know anything about the evolution of early life or proteins?’ the answer is, Yes, the people who do OOL research know a lot more than people who do ID research or KF.” Would that include David Abel?
Upright BiPed
July 27, 2009, 07:47 PM PDT
Joseph, You are incorrect - Archaeologists don't try and calculate the FSCI of objects to determine if they are designed. How could they - the concept of FSCI barely exists beyond this website.
BillB
July 27, 2009, 03:02 PM PDT
Nakashima,
Where you have used the words autocatalytic set, I would prefer just ‘collection of molecules’. Autocatalytic set is a description of the goal state, the fitness function.
I confess I'm showing my ignorance of OOL research and chemistry. My area is cybernetics, but I do work with some people doing OOL and other related stuff in ALife, in particular daisyworld models. Have you come across Chemoton models of chemical replicators? Someone I know did his PhD on them. I agree, it would be nice to see someone here actually testing their claims.
BillB
July 27, 2009, 02:56 PM PDT
ScottAndrews: Alternative to what? You seem to have just proposed a god of the gaps. It's KF who is claiming to know enough about life's origins to know what happened. What I'm objecting to is this nebulous FSCIdea as some kind of proof of design, and to the constant confusions over GA's and models. I don't do OOL research but I know some people who do, so when you ask "Does anyone know anything about the evolution of early life or proteins?" the answer is, Yes, the people who do OOL research know a lot more than people who do ID research or KF.
BillB
July 27, 2009, 02:45 PM PDT
BillB @319: I don't object to your use of my words to serve your own purpose. I stand by them.
Does anyone know anything about the evolution of early life or proteins? How can we say what it is or isn’t consistent with?
But by doing so, you acknowledge that you don't know anything about the evolution of early life or proteins. Whatever your argument with ID is, you've just confessed to having no specific alternative.
ScottAndrews
July 27, 2009, 12:24 PM PDT
P.P.P.S. I also think it's cool that millions of bits of FSCI can be generated simply by switching to 8-byte pixel boundaries for image data. But it's a shame that lossless compression of image data can make millions of bits of FSCI disappear.
R0b
July 27, 2009, 11:58 AM PDT
P.P.S. Below is a 2-kilobyte C# program that fills your screen with meaningful text. I don't see how this program can have any more than 16 kbits of FSCI. Isn't it amazing that it can produce 11.52 million bits of FSCI for an 800x600 screen, and much, much more than that for larger screens?

using System.Windows.Forms;
using System.Drawing;

class Program : Form
{
    static void Main(string[] args)
    {
        Program prog = new Program();
        Label label = new Label();
        label.Text = @" Four score and seven years ago, our fathers brought forth upon this continent a new nation: conceived in liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war. . .testing whether that nation, or any nation so conceived and so dedicated. . . can long endure. We are met on a great battlefield of that war. We have come to dedicate a portion of that field as a final resting place for those who here gave their lives that this nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we cannot dedicate. . .we cannot consecrate. . . we cannot hallow this ground. The brave men, living and dead, who struggled here have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember, what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us. . .that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion. . . that we here highly resolve that these dead shall not have died in vain. . . that this nation, under God, shall have a new birth of freedom. . . and that government of the people. . .by the people. . .for the people. . . shall not perish from this earth.";
        label.Font = new Font(FontFamily.GenericMonospace, Screen.PrimaryScreen.Bounds.Width / 50, GraphicsUnit.Pixel);
        prog.Controls.Add(label);
        label.Bounds = Screen.PrimaryScreen.Bounds;
        prog.TopMost = true;
        prog.WindowState = FormWindowState.Maximized;
        prog.ShowDialog();
    }
}

R0b
July 27, 2009, 11:45 AM PDT
kairosfocus, your style reminds me of trial lawyers' closing arguments. You know, "Ladies and gentlemen of the jury, we have seen clearly that..., the evidence undeniably proves that..." Of course, that's what the lawyer is paid to say, and it tells us nothing about the credibility of his case, or the jurors' views, or even the lawyer's views. You're still claiming that we regularly observe intelligence creating FSCI, so I'll keep pointing out that we don't. At best, we observe FSCI and somehow infer its origin. You still haven't given us a method for determining which link of the causal chain introduced the FSCI. For GA's, you trace the FSCI past the computer to the programmer. Why not the computer? Or why not trace it further to the designer of the programmer? Your answers to these questions are ad hoc, vague, and question-begging. Basically it comes down to your a priori conviction that computers, being mere mechanical entities, can't create FSCI, while humans can. P.S. If FSCI is so clear and simple, why can't the handful of ID proponents who know about it agree on whether a pen has FSCI?
R0b
July 27, 2009, 11:33 AM PDT
Mr Charrington, If you wish to discuss something then it is up to you to come to the discussion prepared. However, it is obvious that you don't even have a basic understanding of ID, and you also don't have any intention of supporting the claims of your position.
Joseph
July 27, 2009, 09:39 AM PDT
One would measure the information in an object by determining what it took to bring said object into existence. BillB:
Thanks, you have just shown how the concept can’t ever demonstrate design in nature.
Yet archaeologists do it all the time. One would measure the information in an object by determining what it took to bring said object into existence.
Would you not also have to measure the information in the other things that it took to bring an object into existence?
Only if one is anal retentive. All we are trying to do is determine if nature, operating freely, can account for it or if agency involvement was required. Then once that is determined we investigate accordingly. Your other "objections" - about counting bits - demonstrate you are clueless.
Joseph
July 27, 2009, 09:37 AM PDT
Mr BillB, I've tried to engage KF-san on this question of what is pre-biotic function, and by extension what is the object of which it is a measure. Where you have used the words autocatalytic set, I would prefer just 'collection of molecules'. Autocatalytic set is a description of the goal state, the fitness function. If you wanted to fit this into a GA, you could expand the four letters of the RNA alphabet with a BREAK letter that signified the end of one molecule and the beginning of another. Thus one GA population member could hold multiple kinds of RNA. To evaluate, fold the RNAs into their tertiary structure per the parameters of the experiment. Drop multiple copies of each into the experiment, crank up your molecular dynamics simulator, come back later and see what you've got. The experiment will yield a negative result if every random set of RNA molecules degrades instead of producing more reaction products than you started with. That would be an important, publishable result. Anyone from the ID side who did this experiment would be a hero in my book, no matter what the result, simply for putting their beliefs to the test.
Nakashima
July 27, 2009, 09:26 AM PDT
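To make Nakashima's encoding suggestion concrete, here is a minimal C# sketch of one GA individual holding several RNA molecules separated by a BREAK letter. The folding and molecular-dynamics steps are deliberately left as comments, and every name here is an illustrative assumption, not code from any published model:

using System;

class RnaGenomeSketch
{
    const char Break = '|'; // the added fifth letter marking molecule boundaries

    // Split one GA individual into its constituent RNA molecules.
    static string[] Molecules(string genome)
    {
        return genome.Split(new[] { Break }, StringSplitOptions.RemoveEmptyEntries);
    }

    static void Main()
    {
        string individual = "ACGUUGCA|GGCAUCCGG|AUGCUA"; // toy example
        foreach (string molecule in Molecules(individual))
        {
            Console.WriteLine(molecule);
            // Evaluation would fold each molecule into tertiary structure per the
            // experiment's parameters, run the molecular-dynamics simulation, and
            // score whether the set yields more reaction products than it began with.
        }
    }
}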
R Daneel Olivaw is not going to be a GA. Similarly, a GA will not write itself out of randomly varied noise on a disk, nor is it credibly improved by allowing the object or source code to be hit by white noise; rapidly such would result in NON-function, which is why islands of function are seen as sitting in a sea of non-function.
No, R Daneel Olivaw isn't going to be a GA - what a bizarre thing to even think that it might! You have yet again launched into this idea that a GA ought to 'write itself out of noise'. No one is claiming that GA's pop into existence out of nothing, or that making random changes to a GA won't stop it from functioning as a GA. What we are talking about is what a GA does, not how they are created. I don't understand why you are finding these two concepts so difficult.
When it comes to life forms: first life credibly requires 600 - 1,000 kilo bits of initial information capacity to function ...
Remember this from ScottAndrews:
Does anyone know anything about the evolution of early life or proteins? How can we say what it is or isn’t consistent with?
I hope you are going to back up your claims about what is required for the origin of life with some evidence. Would you define an autocatalytic set as having any 'function', or does it only have function when we can define it as a living system?
BillB
July 27, 2009, 08:43 AM PDT
KF-san, Sorry, I am not attached, latched or quasi-latched to the data array being a PHOTOGRAPH. It is only a data array. I happen to know it contains an image of Mt Rushmore, ca 1925. If your design detection procedure can't tell me anything about the content of the PHOTOGRAPH, but rather relies on the artifact of it being a PHOTOGRAPH, delivered to the detection procedure via the intelligently designed INTERNET, then the design detection procedure is useless. I'm quite willing to go back to a discussion of more abstract data arrays if you prefer. Here is generation 0 of my population of IPD competitors. Every bit of the array was assigned by a call to the random number function. How much FSCI does it contain? Here is the data array for generation 1000. The population members now function much better. How much FSCI does it contain? You can assume the use of the GA algorithm I gave previously, if it helps. Please answer about the data array, not the algorithm, the operating system, the microprocessor, etc. None of those things seem to matter when discussing text on a screen, or 143 ASCII characters.
Nakashima
July 27, 2009, 08:26 AM PDT
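For readers unfamiliar with the setup Nakashima describes, the sketch below shows, in C#, what a generation-0 population of iterated Prisoner's Dilemma (IPD) strategies looks like when every bit is assigned by the random number generator. The encoding (memory-one strategies of four bits) is an illustrative assumption, not Nakashima's actual representation:

using System;

class IpdGenerationZeroSketch
{
    static void Main()
    {
        const int popSize = 20;
        const int bitsPerStrategy = 4; // assumed: one response bit for each of CC, CD, DC, DD
        Random rng = new Random();

        bool[][] population = new bool[popSize][];
        for (int i = 0; i < popSize; i++)
        {
            population[i] = new bool[bitsPerStrategy];
            for (int j = 0; j < bitsPerStrategy; j++)
                population[i][j] = rng.Next(2) == 1; // every bit from the RNG
        }

        // A fitness pass (not shown) would play each strategy against the others
        // in the IPD and let the better scorers reproduce into generation 1.
        Console.WriteLine("Initialized {0} random {1}-bit strategies", popSize, bitsPerStrategy);
    }
}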
PPPS: We can see that, for example, a particular text string in front of us is of N characters, and that it constitutes contextually responsive text in English. Where N > 143 we can confidently conclude, based on the text string and its characteristics -- not direct observation of its causal story -- that it is an artifact of design. (For, we have separately seen that and why FSCI is a reliable sign of design. Validation of a sign is not to be confused with its use.)
kairosfocus
July 27, 2009, 05:50 AM PDT
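The otherwise puzzling figure of 143 characters follows from simple arithmetic, assuming the 7 bits per ASCII character commonly used in these discussions (an assumption not restated above): 143 is the first character count whose storage capacity exceeds the 1,000-bit threshold. A quick check in C#:

using System;

class AsciiThresholdCheck
{
    static void Main()
    {
        const int bitsPerChar = 7;  // 7-bit ASCII
        const int threshold = 1000; // upper FSCI threshold, in bits
        Console.WriteLine("142 chars = {0} bits", 142 * bitsPerChar); // 994
        Console.WriteLine("143 chars = {0} bits", 143 * bitsPerChar); // 1001
        Console.WriteLine(143 * bitsPerChar > threshold);             // True
    }
}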
PPS: I trust it is clear enough that I am showing that a stick that falls in berry juice can be used to write, but a Parker 51 shows a kind of functionally specific complex information that takes us out of the credible reach of nature acting freely by forces of chance + necessity. Similarly, 3 letters by chance that spell out an English word are fairly easy to get to, but 143 ASCII characters forming a contextually responsive utterance in English are another matter entirely. And, when it comes to life forms, origin of life credibly requires 100's of kbits of information, well beyond the credible reach of chance + necessity -- for INITIAL function.
kairosfocus
July 27, 2009, 05:46 AM PDT
Nakashima-San: You have chosen the PHOTOGRAPH as an example. Can you show me a photograph that has appeared anywhere in our known observation without a design-based process? As for the issue of the impact of modest random perturbation [here through white noise aka "snow"], this goes to specificity of function within an island of functionality. FSCI is about functionality that is sufficiently specific that it exists in a target zone that can more or less be characterised as islands or archipelagoes. That is why noise dumped into the picture of Mt Rushmore c. 1925 eventually makes it unrecognisable as a specific location, then as a picture of a mountain, then as a picture of anything in particular. And, once we are at the snowstorm effect, further random change has a very different effect: moving around in a vast sea of snowy images. Such an entity is therefore not functionally specific, save in the sort of scenario where we take one particular config and use it as, say, the basis of a one-time message pad cipher system, or maybe as a way to do a lottery outcome. Then, we have made a reference point from the otherwise non-functional config and have defined a new target for a new purpose. GEM of TKI __________ PS: BB: GA's etc. do output data strings that exhibit FSCI, and such show that design is at work in the underlying causal process, per reliable sign. The ASCII text of the code is similarly an index of design, and we do in fact know that GA's are artifacts of design. What I have objected to is that GA's are not credible as artificial intelligences in any sense worth having: they are incapable of autonomy, real decision and imaginative creativity. R Daneel Olivaw is not going to be a GA. Similarly, a GA will not write itself out of randomly varied noise on a disk, nor is it credibly improved by allowing the object or source code to be hit by white noise; rapidly, such would result in NON-function, which is why islands of function are seen as sitting in a sea of non-function. In short, FSCI is again seen to be the product of intelligence. Technological evolution -- by design -- allows the functional complexity of systems to increase over time: such systems are complex, functionally specific and informational. Similarly, pens and the like -- of sufficient complexity -- show implicit information that can be worked out and used to estimate the required FSCI, but it would be much easier to simply observe the complexity and irreducibility that occur with core parts for a modern pen. (It is conceivable that a stick or feather could get itself stuck in berry juice and form a "natural" pen, but that has nothing to say to the Parker 51 or the like. And, because of the IC of such systems -- and of the many subsystems in cell-based life -- the creation of novel functionality of complex order that exhibits irreducibility is maximally improbable by Darwinian-type processes -- in short, novel body plans have to get to shores of function too, before they can be improved incrementally. A quill stuck in berry juice is one thing, a Parker 51 another entirely different one.) When it comes to life forms: first life credibly requires 600 - 1,000 kilobits of initial information capacity to function, and has in it many irreducibly complex, carefully organised subsystems. Novel body plans require 10's - 100's+ of MILLIONS of new bits.
kairosfocus
July 27, 2009, 05:39 AM PDT
