Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 8: Switcheroo — the error of asserting without adequate observational evidence that the design of life (from OOL on) is achievable by small, chance-driven, success-reinforced increments of complexity leading to the iconic tree of life



[UD ID Founds Series, cf. Bartlett on IC]

Going onward, the proposed spontaneous, chance plus necessity origin of Carbon chemistry, aqueous medium, cell-based life would imply the spontaneous creation of the observed von Neumann Self-Replicator [vNSR] based self-duplication mechanism, which is:

a] irreducibly complex,

b] functionally specific,

c] code-based, with use of data structures on a storage “tape” [i.e. a language-based phenomenon],

d] algorithmic and

e] dependent on complex, functionally specific, highly coordinated, organised implementing nanomolecular machinery:

A von Neumann Self Replicator

Now, following von Neumann generally (and as previously noted in brief), such a machine requires . . .

(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a “clanking replicator” as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with
(iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:
(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
(v) either:
(1) a pre-existing reservoir of required parts and energy sources, or
(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
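The irreducible-complexity claim about parts (ii), (iii) and (iv) can be stated as a trivially small model (a sketch of the argument only; the part names are illustrative placeholders, not biochemistry): remove any one core component and replication ceases.

```python
# Toy model of von Neumann's self-replicator as an irreducibly complex
# system: blueprint tape, tape reader/constructor, and effector arms must
# all be present for replication to occur. (Names are illustrative.)

CORE_PARTS = {"blueprint_tape", "tape_reader", "effector_arms"}

def can_replicate(parts: set) -> bool:
    """Replication succeeds only if every core part is present."""
    return CORE_PARTS.issubset(parts)

full = set(CORE_PARTS)
assert can_replicate(full)

# Remove any single core part: function ceases in every case.
for part in CORE_PARTS:
    assert not can_replicate(full - {part})

print("full set replicates; every single-part deletion fails")
```

The model is deliberately trivial: it encodes only the joint-necessity claim in the paragraph above, not any mechanism.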

This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information to express required algorithms and data structures. The only known source of languages and of algorithms is intelligence.

Immediately, that sort of complex specificity and integrated, irreducibly complex organisation mean that we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but overwhelmingly mostly non-functional) configurations of components.

In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. That is, until you reach the shores of an island of function, there simply is no hill to climb. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature in a regime of trial and error/success.
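The scale of the isolated-islands claim can be put in toy numbers (illustrative values only, not biological measurements): suppose "function" requires an n-bit configuration to sit within a small Hamming distance d of one specific target string.

```python
import math

# Toy "islands of function" arithmetic (illustrative numbers, not biology):
# functional configs = all n-bit strings within Hamming distance d of one
# specific target string.
n, d = 100, 2
island_size = sum(math.comb(n, k) for k in range(d + 1))  # 1 + 100 + 4950
space_size = 2 ** n
p_hit = island_size / space_size

print(f"island size: {island_size} configs out of 2^{n}")
print(f"fraction functional ≈ {p_hit:.2e}")        # about 4e-27

# Even a blind search making a trillion trials expects essentially no hits:
trials = 10 ** 12
print(f"expected hits in 1e12 trials ≈ {p_hit * trials:.2e}")
```

With these toy figures the island occupies roughly four parts in 10^27 of the space, so a trillion blind trials still expect effectively zero hits; the argument in the text turns on the claim that real functional configurations are at least this isolated.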

And, since such a vNSR is a requisite for reproduction of the sort of cell-based life we actually observe, absent a chance plus necessity mechanism backed up by observations, evolutionary materialistic accounts of the origin and diversification of life lack both a root and a trunk.

Other than in the imaginative reconstructions of those who teach such speculations as “science.”

[U/D, Dec 2, 2011] As an illustration, let us examine the protein synthesis process:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA
The Ribosome, assembling a protein step by step based on the instructions in the mRNA "control tape"

Video zoom-in, courtesy Vuk Nikolic:

[vimeo 31830891]

This problem extends to the issue of missing main branches in the Darwinian tree of life along the usual geochronological timeline, as we can see from Gould’s key observations on speciation — the gateway to body-plan level evolution, and his similar remarks on the trade secret of Paleontology:

. . . long term stasis following geologically abrupt origin of most fossil morphospecies, has always been recognized by professional paleontologists. [[The Structure of Evolutionary Theory (2002), p. 752.]

. . . .  The great majority of species do not show any appreciable evolutionary change at all. These species appear in the section [[first occurrence] without obvious ancestors in the underlying beds, are stable once established and disappear higher up without leaving any descendants.” [[p. 753.]

. . . . proclamations for the supposed ‘truth’ of gradualism – asserted against every working paleontologist’s knowledge of its rarity – emerged largely from such a restriction of attention to exceedingly rare cases under the false belief that they alone provided a record of evolution at all! The falsification of most ‘textbook classics’ upon restudy only accentuates the fallacy of the ‘case study’ method and its root in prior expectation rather than objective reading of the fossil record. [[p. 773.]

And,

“The absence of fossil evidence for intermediary stages between major transitions in organic design, indeed our inability, even in our imagination, to construct functional intermediates in many cases, has been a persistent and nagging problem for gradualistic accounts of evolution.” [[Stephen Jay Gould (Professor of Geology and Paleontology, Harvard University), ‘Is a new and general theory of evolution emerging?’ Paleobiology, vol. 6(1), January 1980, p. 127.]

“All paleontologists know that the fossil record contains precious little in the way of intermediate forms; transitions between the major groups are characteristically abrupt.” [[Stephen Jay Gould ‘The return of hopeful monsters’. Natural History, vol. LXXXVI(6), June-July 1977, p. 24.]

The extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology. The evolutionary trees that adorn our textbooks have data only at the tips and nodes of their branches; the rest is inference, however reasonable, not the evidence of fossils. Yet Darwin was so wedded to gradualism that he wagered his entire theory on a denial of this literal record:

The geological record is  extremely imperfect and this fact will to a large extent explain why we do not find intermediate varieties, connecting together all the extinct and existing forms of life by the finest graduated steps [[ . . . . ] He who rejects these views on the nature of the geological record will rightly reject my whole theory.[[Cf. Origin, Ch 10, “Summary of the preceding and present Chapters,” also see similar remarks in Chs 6 and 9.]

Darwin’s argument still persists as the favored escape of most paleontologists from the embarrassment of a record that seems to show so little of evolution. In exposing its cultural and methodological roots, I wish in no way to impugn the potential validity of gradualism (for all general views have similar roots). I wish only to point out that it was never “seen” in the rocks.

Paleontologists have paid an exorbitant price for Darwin’s argument. We fancy ourselves as the only true students of life’s history, yet to preserve our favored account of evolution by natural selection we view our data as so bad that we never see the very process we profess to study.” [[Stephen Jay Gould ‘Evolution’s erratic pace’. Natural History, vol. LXXXVI(5), May 1977, p. 14.] [[HT: Answers.com]

In short, ever since Darwin’s day, the overwhelmingly obvious, general observed pattern of the fossil record has plainly always been sudden appearance, morphological stasis, and disappearance (or continuation into the modern world). Indeed, the Cambrian revolution is a classic case in point, as Meyer highlighted in his recent PBSW paper (which passed proper peer review by “renowned” scientists):

The Cambrian explosion represents a remarkable jump in the specified complexity or “complex specified information” (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.

In short, so far as observations are concerned, the main branches of the Darwinian Tree of Life are also missing without leave. What we actually see are twig-level variations within complex, specifically functional forms. It is these twig-like variations that are then question-beggingly extrapolated backwards into the imagined grand tree of life.

In addition [Oct 15, HT UD News], some breaking news at Sci Daily highlights the algorithmic specificity and patent programming of the unfolding of the body plan through the incremental, step-by-step implementation of the Hox gene set:

The Hox bodyplan algorithm (Sci Daily, Fair Use)

Clipping:

Why don’t our arms grow from the middle of our bodies? The question isn’t as trivial as it appears. Vertebrae, limbs, ribs, tailbone … in only two days, all these elements take their place in the embryo, in the right spot and with the precision of a Swiss watch. Intrigued by the extraordinary reliability of this mechanism, biologists have long wondered how it works. Now, researchers at EPFL (Ecole Polytechnique Fédérale de Lausanne) and the University of Geneva (Unige) have solved the mystery . . . .

During the development of an embryo, everything happens at a specific moment. In about 48 hours, it will grow from the top to the bottom, one slice at a time — scientists call this the embryo’s segmentation. “We’re made up of thirty-odd horizontal slices,” explains Denis Duboule, a professor at EPFL and Unige. “These slices correspond more or less to the number of vertebrae we have.”

Every hour and a half, a new segment is built. The genes corresponding to the cervical vertebrae, the thoracic vertebrae, the lumbar vertebrae and the tailbone become activated at exactly the right moment one after another. “If the timing is not followed to the letter, you’ll end up with ribs coming off your lumbar vertebrae,” jokes Duboule. How do the genes know how to launch themselves into action in such a perfectly synchronized manner? “We assumed that the DNA played the role of a kind of clock. But we didn’t understand how.” . . . .

Very specific genes, known as “Hox,” are involved in this process. Responsible for the formation of limbs and the spinal column, they have a remarkable characteristic. “Hox genes are situated one exactly after the other on the DNA strand, in four groups. First the neck, then the thorax, then the lumbar, and so on,” explains Duboule. “This unique arrangement inevitably had to play a role.”

The process is astonishingly simple. In the embryo’s first moments, the Hox genes are dormant, packaged like a spool of wound yarn on the DNA. When the time is right, the strand begins to unwind. When the embryo begins to form the upper levels, the genes encoding the formation of cervical vertebrae come off the spool and become activated. Then it is the thoracic vertebrae’s turn, and so on down to the tailbone. The DNA strand acts a bit like an old-fashioned computer punchcard, delivering specific instructions as it progressively goes through the machine.

“A new gene comes out of the spool every ninety minutes, which corresponds to the time needed for a new layer of the embryo to be built,” explains Duboule. “It takes two days for the strand to completely unwind; this is the same time that’s needed for all the layers of the embryo to be completed.”

This system is the first “mechanical” clock ever discovered in genetics. And it explains why the system is so remarkably precise . . . .

The process discovered at EPFL is shared by numerous living beings, from humans to some kinds of worms, from blue whales to insects. The structure of all these animals — the distribution of their vertebrae, limbs and other appendices along their bodies — is programmed like a sheet of player-piano music by the sequence of Hox genes along the DNA strand.

This is functionally specific, complex organisation and associated information in action, using a classic programming structure: the sequence of steps. With a timer involved, we have stumbled across Paley’s self-replicating watch, not in a field but in the genome, and as expressed at a crucial time in embryological development. And with a strong hint that we are dealing with an island of function per body plan, as minor disturbances (as Meyer pointed out) are patently likely to be disruptive and possibly fatal.
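The “punchcard” behaviour described in the clipping can be caricatured as a sequential scheduler (a sketch only: the gene-group names and the fixed 90-minute interval come from the article, everything else is an illustrative assumption):

```python
# Minimal sketch of the sequential Hox "clock" described above: gene groups
# sit in strand order and are activated one after another at a fixed
# interval (the article's ~90 minutes per segment over ~48 hours).
HOX_GROUPS = ["cervical", "thoracic", "lumbar", "tailbone"]
INTERVAL_MIN = 90  # a new gene "comes off the spool" every ninety minutes

def activation_schedule(genes, interval=INTERVAL_MIN):
    """Return (minutes-from-start, gene) pairs in strict strand order."""
    return [(i * interval, g) for i, g in enumerate(genes)]

for t, gene in activation_schedule(HOX_GROUPS):
    print(f"t = {t:3d} min: activate {gene} segment program")

# The order is fixed by position on the strand; shuffle it and the lumbar
# program fires during thoracic segmentation -- the "ribs coming off your
# lumbar vertebrae" failure mode Duboule jokes about.
```

The design point the article stresses is that order is positional: the schedule is read off the physical arrangement of the genes, like instructions read off a punchcard.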

So, we must ask and try to answer a couple of the begged questions:

Q 1: If these three main points — missing tap-root, missing trunk, missing main branches — are so, then how was the Darwinian/Neo-Darwinian framework established as the standard, confidently presented theoretical explanation of the origin and body plan level diversity of life?

A 1: The establishment is in the main philosophical, not observational. This is aptly summarised by ID thinker, Philip Johnson, in his reply to Lewontin’s a priori materialism, in First Things:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

Q 2: But, is design capable of accounting for the origin of life and of body plan level diversity? Or, is the inference to design simply a negative, discredited God of the gaps argument that boils down to giving up on scientific explanation?

A 2: As was pointed out in response to Petrushka, right off, design is a commonly observed and experienced causal mechanism: the purposefully directed arrangement of parts towards a desired end. It often leaves behind detectable signs, signs that have long since been tested in cases where we can directly cross-check the causal story; such tests consistently tell us that the signs are empirically reliable. Plainly, this is not a God-of-the-gaps argument: we explain based not on what we do not know, but on what we do know.

So, per the well-known inductive, provisional pattern of warrant that underlies science, absent credible observational evidence that points to such signs not being reliable signs of design as cause, we are entitled to infer from signs such as FSCO/I to their signified causal pattern or process.  Further, what hinders that process of inference on matters tied to origin of life and origin of body plans is not the weight or balance of the actual evidence, but the power of an institutionally dominant worldview, evolutionary materialism.

Which is exactly what Harvard evolutionary biologist Richard Lewontin admits (and which Johnson rebukes as cited just above):

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen. [[Perhaps the second saddest thing is that some actually believe that these last three sentences that express hostility to God and then back it up with a loaded strawman caricature of theism and theists JUSTIFY what has gone on before. As a first correction, accurate history — as opposed to the commonly promoted rationalist myth of the longstanding war of religion against science — documents (cf. here, here and here) that the Judaeo-Christian worldview nurtured and gave crucial impetus to the rise of modern science through its view that God as creator made and sustains an orderly world. Similarly, for miracles — e.g. the resurrection of Jesus — to stand out as signs pointing beyond the ordinary course of the world, there must first be such an ordinary course, one plainly amenable to scientific study. The saddest thing is that many are now so blinded and hostile that, having been corrected, they will STILL think that this justifies the above. But, nothing can excuse the imposition of a priori materialist censorship on science, which distorts its ability to seek the empirically warranted truth about our world.]

[[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis added. ]

“Absolute” and “a priori adherence to materialist causes,” maintained “no matter how counter-intuitive, no matter how mystifying to the uninitiated” are not exactly the stuff of open-minded, empirically driven science.

This is ideological agenda, not science.

And, as was noted, the caricature of theistic thought on science offered by Lewontin as a justification for his attitude is grossly ill-informed both philosophically and historically. The nature of miracles is such that, to stand out as signs from beyond the ordinary course of events, they require just that: a generally predictable order to the cosmos. And the explicit teaching of the Judaeo-Christian tradition is that God is the God of Order, not chaos, who upholds creation by his intention that it be inhabited, by the likes of us.

Blend in the equally biblical concept that God wishes for us to discern his hand in nature and to be stewards of our world, and we see that people within that tradition will generally be friendly to the project of exploring and identifying the intelligible principles that drive that order.  Which is precisely the documented historical root of modern science: “thinking God’s [creative and sustaining] thoughts after him.”

The myth of an ages long war and irreconcilable hostility between Religion and Science is just that, a C19 rationalist myth long since past its sell-by date. But such (however important) is a bit adrift of the main focus for this post.

We may now return to and summarise the key take-home message:

1 –> Design is a real and observed phenomenon, one that is both meaningful and relevant to the study of the origins of life [unless one wishes to beg the question and decide, with Lewontin et al, ahead of looking at evidence, that the possibility of design is not to be entertained, regardless of evidence]. We may usefully define such design as the purposeful and intelligent choice and/or arrangement of parts to achieve a goal.

2 –> Artifacts of such design, once we pass a reasonable threshold of functionally specific complexity, are not credibly explained on incremental and cumulative development driven by chance and necessity without intelligence. We are therefore entitled to accept as a scientific conclusion, the verdict of what we do see: that FSCO/I is routinely and only produced by design. FSCO/I, then, is a reliable sign of design.

3 –> Attempts to explain away FSCO/I without reference to its empirically known source, design, typically tell only part of the story. For example, the genetic algorithms presented as simulations of what more or less plausibly happened implicitly assume and impose an intelligently designed, smooth fitness-function model, and then carry out hill-climbing optimisation within an island of function. That is, such algorithms are implicitly targeted and constrained, intelligently designed searches.

4 –> They explain origins, in short, by yet another switcheroo: intelligent design and artificial selection are substituted for the actual chance variation and natural selection that were to be tested. Then, ironically, the result is presented as though it undermines what it actually demonstrates: the power of intelligent design.
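The point in 3 and 4 about designer-supplied fitness functions can be made concrete with a Weasel-style toy (a sketch under stated assumptions: the target phrase, mutation scheme, and step count are illustrative choices): the same mutate-and-select loop that climbs a smooth, graded landscape goes nowhere when fitness is all-or-nothing, as on a sea of non-function.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def smooth_fitness(s):
    """Designer-supplied gradient: partial credit per matching letter."""
    return sum(a == b for a, b in zip(s, TARGET))

def flat_fitness(s):
    """All-or-nothing: no credit short of full function."""
    return 1 if s == TARGET else 0

def mutate_and_select(fitness, steps=40_000, seed=0):
    rng = random.Random(seed)
    best = [rng.choice(CHARS) for _ in TARGET]
    for _ in range(steps):
        trial = best[:]
        trial[rng.randrange(len(trial))] = rng.choice(CHARS)
        if fitness("".join(trial)) >= fitness("".join(best)):
            best = trial
    return "".join(best)

print(mutate_and_select(smooth_fitness) == TARGET)  # gradient climbed
print(mutate_and_select(flat_fitness) == TARGET)    # mere random drift
```

With the smooth (graded) fitness the loop locks in matches and reaches the target; with the flat fitness every mutation scores the same and the search just drifts. The graded landscape is exactly the information the critique above says such simulations quietly supply.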

5 –> Going to the actual record of the past, ever since Darwin’s day 150 years ago, the fossil record has simply not provided strong support for a gradually branching evolutionary tree of life model driven by Darwinian natural selection. This, with now over 250,000 fossil species studied, millions of fossils in museums, and billions more seen in the known beds. Instead we see islands of function, and a pattern of sudden appearance, stasis and gaps, as Gould noted. An astonishing related case in point from the now popular molecular evidence is the result of the kangaroo/human genome comparison, where, on the standard evolutionary timeline, the two lineages are said to have diverged 150 million years ago:

The tammar wallaby (Macropus eugenii), was the model kangaroo used for the genome mapping.

Like the o’possum, there are about 20,000 genes in the kangaroo’s genome . . . . That makes it about the same size as the human genome, but the genes are arranged in a smaller number of larger chromosomes.

“Essentially it’s the same houses on a street being rearranged somewhat,” Graves says.

“In fact there are great chunks of the [human] genome sitting right there in the kangaroo genome.”

6 –> In this overall context, Orgel and Wicken give us a classic contrast, one that highlights how the FSCO/I in life is distinct from the sort of patterns that are produced by blind chance and/or mechanical necessity:

Orgel:

 . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]

Wicken:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

7 –> Let us focus:

organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’

8 –> It is that requirement for specification of elements, relationships and integration through a “wiring diagram” that is the root of the information-rich functionally specific organisation that is the heart of the design inference, once a sufficiently complex threshold is reached.

9 –> Wicken, of course, hoped that “selection” would include what the early Darwinists termed “natural selection,” or “survival of the fittest.”

10 –> But in fact, that is the fatal weakness. For such selection is a weeding out, necessarily a subtraction of information. It is the chance variation component of the variation-selection mechanism that is the hoped-for source of increments in information. And it is only once we have an integrated, functional, self-replicating, embryologically feasible body plan that incremental success leads to hill-climbing.

11 –> This leads to an implicit commitment to the domain of life forming a continent of function that can be traversed by a Darwinian tree of life, from tap-root on to shoot, branches and twigs. Precisely what is credibly not there in the fossil record, and precisely what the nature of integrated, functionally specific complex organisation dependent on codes, algorithms, molecular execution machines and a von Neumann self-replication facility would not lead us to expect. (Recall the deep isolation of protein fold domains, per Axe’s findings that such folds are of order 1 in 10^70 or thereabouts of amino acid sequences. Similarly, in general, it is utterly implausible that something like a Hello World program could be incrementally converted into, say, an operating system where at each incremental step the resulting program is functional.)
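The Hello World remark can be probed directly with a rough experiment (a sketch: the sample program and trial count are arbitrary, and "still runs without error" is of course a far weaker bar than "implements new function"):

```python
import random
import string

# Apply random single-character substitutions to a tiny working Python
# program and count how many mutants still execute without error.
PROGRAM = 'msg = "hello, world"\nprint(msg.upper())\n'

def mutate(src, rng):
    """Substitute one random character with a random printable character."""
    i = rng.randrange(len(src))
    return src[:i] + rng.choice(string.printable) + src[i + 1:]

def still_runs(src):
    """True if the mutant compiles and executes without raising."""
    try:
        exec(compile(src, "<mutant>", "exec"), {})
        return True
    except Exception:
        return False

rng = random.Random(1)
trials = 500
survivors = sum(still_runs(mutate(PROGRAM, rng)) for _ in range(trials))
print(f"{survivors}/{trials} single-character mutants still run")
# Most survivors are mutants where the hit landed inside the string
# literal; and "runs" says nothing about acquiring any new function.
```

Even at this generous bar, a large share of single-character mutants fail outright; the text's stronger claim is that chains of such steps, each required to stay functional, do not plausibly reach qualitatively new programs.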

12 –> In short, the reasonable conclusion is that the body plans of the observed world of life are based on islands of complex, specific function in a much larger configuration space of non-functional forms, starting from the first cell based life. In such a context, the observed variability and adaptability of life forms within islands of function — there are no actual observations of the origin of novel body plans — is best understood as an intentional design feature: flexibility and adaptability, giving robustness.

So, on the evidence, we are epistemologically entitled to confidently infer that the FSCO/I that appears in the world of life is the product of the only observed, empirically known source of such an information-rich pattern: design.

Wallace, co-founder of the theory of evolution, thus had a serious point, when he characterised “the world of life” as: a manifestation of Creative Power, Directive Mind and Ultimate Purpose. END

Comments
F/N: I have just now added illustrations and a video on the protein synthesis process (on p. 2), HT BA 77, UD News, UD Web master. KF
kairosfocus, December 1, 2011, 11:40 PM PDT
F/N 3: EA's discussion on IC and FSCI vs Avida here in response to Dr Liddle is well worth a read.
kairosfocus, October 16, 2011, 05:18 AM PDT
F/N: I have added comments above, (a) here in response to Dr Bot against Venter as providing proof of concept of the engineering design of cell based life and its components through lab methods, and (b) here in response to Dr Liddle (and KH) on "where are the gaps" and the like. The new layout makes it hard to follow sub threads and spot comments, so we eagerly await the chronological view modification. I also must note that it is just a tad frustrating to see direct evidence of failure to do basic due diligence on reading a post before commenting to object, and on similarly failing to do due diligence on what has been going on in nanotechnology over these 20 and more years, and why it is that Venter's work is so significant. GEM of TKIkairosfocus
October 16, 2011 at 02:42 AM PDT
Dr Liddle (and KH): It is a little hard to follow sub discussions on the new format, so responses will occasionally lag comments. (JC tells us it will take a while to program the requested viewing options feature. Does someone out there know of a WP plug-in that will do the trick?) On the main matter, the trade secret of paleontology and Cambrian revolution excerpts on p. 2 of the original post should make it clear that there is a major problem of a want of relevant branching in the fossil record; the only actual record of the remote past of life beyond written report, however we may date it. Those are facts, facts based on over 250,000 fossil species, millions of samples in museums, and billions in known fossil beds. The plain record is that the first time we see complex body plans, dozens at phylum and sub phylum levels, there are no credible antecedents and on the usual timeline there is a very narrow window in which they appear. This suddenness was known to Darwin, but he anticipated that extensive investigation would correct the gaps then apparent. The extensive investigations have been carried forth over 150 years, and the net result is that we are back at the same point of sudden appearance, stasis, disappearance and/or continuation into the modern world, with gaps at root, trunk and branch level in the tree of life. What the fossils overwhelmingly tell us about -- headlines, museum exhibits and textbooks with breezily confident declarations notwithstanding -- is what everyone agrees on: twig level minor adaptations. In addition, whether or not you are willing to accept it, the proposed darwinian means of generating variations, chance, is sharply challenged to cross config spaces to reach isolated islands of function. This starts with first life, and continues with major body plan innovations that require many co-ordinated changes to all be in place at once. 
(Cf. no. 3 in the series on IC and the co-option problem in light of constraints C1 - 5, as well as the Bartlett paper now linked top of this page.) It is highly significant that the standard Darwinist response and tendency is to emphasise NATURAL SELECTION, rather than chance variation, as the claimed engine by which innovations of function can happen incrementally. But this is misleading, usually inadvertently so. For, the "selection" part is a matter of destructive culling out, not actual addition of information. The fittest surviving is based on the less fit NOT surviving, i.e. certain variants disappear across time, whether at once or in the Malthusian "struggle for existence" that Darwin highlighted. Subtraction is not addition. So, necessarily, it is the chance variation that the darwinian mechanism is relying on as the candidate to create: protein etc. codes, regulatory code, co-ordinated development mechanisms that trigger at proper stages from zygote on, and more, all in data structures, and in body plans that must unfold in a precise and specific sequence, with fatal error as a highly likely outcome if the precise unfolding for body plan formation [cf. addition on p. 2 of OP] is randomly varied. Chance plus trial and error, in short. But we already know that complex and functionally specific configurations are rare and isolated in spaces of all possible configurations, as a general rule; as your friendly local junkyard will tell you. Thus, there is a definite search-space traversal challenge in a context where isolated islands of function sit in seas of non-function, and the resources of the solar system will be overwhelmed at 500 bits, and those of the observed cosmos at 1,000. So, there is a particular burden of proof on advocates of darwinian mechanisms to demonstrate that the tree of life, at its main nodes, is well demonstrated empirically.
The first body plan, i.e. the origin of cell based life, is a particularly important point, as the OP highlights: no root, no shoots, no branches and so no tree. As p. 2 of the OP shows, this confronts us with the need to originate a vNSR, which is massively irreducibly complex and riddled with FSCI well beyond the 500 - 1,000 bit threshold. As the mutual ruin of Orgel and Shapiro shows, now backed up by Dyson's recent summary on the "mystery," the only reasonable candidate on the table for OOL is design. And, despite Dr Bot's strictures, Venter did provide proof of concept, and indeed, it is now 20 years since the underlying basic technical means and possibilities were demonstrated. Worse, this is all in an observed cosmos that shows all sorts of signs of being finely tuned and set up for C-chemistry, aqueous medium, cell-based life. That is, we have strong reason to infer to design of the cosmos for life, and strong reason to infer to the design of life from the outset. That sharply shifts the balance of epistemological plausibility on the origin of body plans, and in particular the human body plan and the human mind and conscience. Design -- despite much hostility and many attempts to lock it out a priori, by the likes of Lewontin, Sagan, the US NAS, the US NSTA, the NCSE etc., and despite the way that politically correct thought police tactics have been resorted to in order to lock it out -- is a viable explanation, and on the origin of cosmos and life, is arguably the best or only viable causal explanation. (Indeed, that is what best explains the a priori tactics being used to try to lock it out.) And, pardon directness: you plainly have not read p. 2 of the OP, BTW -- "I don’t see anything in the OP about “observed gaps”, apart from the well-known OOL gap." Finally, please explain to me what the Gould and Meyer excerpts on p. 2 are about, if not observed and major fossil record "gaps."
(And, the OOL gap you seem to wish to glide over as if it were insignificant is pivotal, as it drastically shifts the balance of epistemic plausibility in favour of design explanations, especially when joined to the cosmological evidence on fine tuning of our observed cosmos.) So, pardon if you find the above exchanges frustrating, but from my perspective, they do show the reasons for the sort of concerns and issues and gaps in addressing evidence on the table that I have pointed to. GEM of TKI
kairosfocus
October 16, 2011 at 02:23 AM PDT
F/N: It's hard to follow the sub-threads, so when I spot something . . . Above, it is claimed, in effect, that Venter was only rearranging the parts of already existing cell based self-replicating forms, so this cannot be proof of concept for the origin of said cells by engineering methods. This re-statement should be enough to show the fatal defect in the objection. For, we see from Venter that the COMPOSITION of DNA etc. can be manipulated by engineering means. As the just linked CSM article summarises:
After almost 15 years of work and $40 million, a team of scientists at the J. Craig Venter Institute says they have succeeded in creating the first living organism with a completely synthetic genome. This advance could be proof that genomes designed in a computer and assembled in a lab can function in a donor cell, eventually reproducing fully functional living creatures, that is, artificial life. As described today in the journal Science, the study scientists constructed the genome of the bacterium Mycoplasma mycoides from more than 1,000 sections of preassembled units of DNA. Researchers then transplanted the artificially assembled genome into a M. capricolum cell that had been emptied of its own genome. Once the DNA "booted up," the bacteria began to function and reproduce in the same manner as naturally occurring M. mycoides. "It's a culmination of a series of impressive steps," Ron Weiss, an associate professor of biological engineering at MIT who was not associated with the study, told LiveScience.com. "If you look over the last few years, at what they've been able to produce, it's definitely impressive. Being able to create genomes of this scale? That's impressive." To boot up, the DNA utilized elements of the M. capricolum recipient cells, according to study team member Carole Lartigue of the Venter Institute. The bacterial cells still contained certain "machinery" that let them carry out the process of expressing a gene, or taking the genetic code and using it to build proteins – called transcription. When the artificial genome entered the cell, the cellular machines that run DNA transcription recognized the DNA, and began doing their job, Lartigue said. "This cell's lineage is the computer, it's not any other genetic code," said Daniel Gibson, lead author of the Science paper, also of Venter Institute.
So, we know that nanotech manipulation and composition methods exist and can be used in a molecular nanotech lab. This should be grounds for seeing that the same general sort of methods extend to the original composition of said components, modules and organised integrated frameworks in a cell. Indeed, we did not strictly NEED Venter -- or his yeast cut and paste methods -- to see that manipulation of nanotech, atom level components by chemical and mechanical means is possible. It is over 20 years since the famous "IBM," made of 35 xenon atoms on a substrate by means of a scanning-tunnelling microscope, was published and blazed to the world as a picture. (Cf. discussion and image here and timeline of achievements here.) In sum, it has been publicly and generally known for over 20 years that atomic manipulation nanotechnology is possible, and indeed a field of research and applications has grown up across that time. What Venter has done is to directly show that this new field of science and technology is relevant to the world of life, starting with the creation and modification of individual cells. And -- whatever selectively hyperskeptical objectors may want to declare dismissively -- that is indeed empirical proof of concept for the use of similar technologies as a SUFFICIENT cause of the origin of life. GEM of TKI
kairosfocus
October 16, 2011 at 01:40 AM PDT
Elizabeth,
Starting with a minimally functional population of self-replicators in an environment, replicating with variance, adaptation will tend to occur, as we see both in the field and the lab, and in simulations like AVIDA.
What are these minimally functional self-replicators, and where are their populations? What sort of adaptations have been observed, not counting loss-of-function mutations? Statements such as the one cited give the impression that adaptations beyond meaningless phenotypic changes that come and go have been observed, perhaps more than once. In reality they are more like bigfoot, but only if scientists owned stock in bigfoot and were trying to drive it up.
ScottAndrews
October 15, 2011 at 08:36 PM PDT
Re KH:
how does “design” explain anything at all other then it allows you to say “the explanation for it’s origin is that it was designed”. It was designed because it appears to have been designed. It was designed because it shares features with other things that we know to be designed. But that’s not an “explanation” at all, it’s just changing the label.
1 --> Start from a basic reality: we ourselves are designers, and we produce artifacts that often show characteristic empirical signs of design. And it is patently a good explanation of the cause of a house, a house-fire [--> a highly significant case], a car, a pencil, a computer, a Stonehenge, etc. to say: it was designed, per reliable signs manifest from the object and/or its traces itself. (BTW, have you looked up TRIZ, the theory of inventive problem solving, as already pointed to? Try here as a start. This suffices to show that how-to is a world of relevant study. That X was designed, on empirically tested reliable signs, opens up a world of reverse engineering and onward forward engineering methods. Contrary to ever so many glib and/or barbed talking points, such is a science-starter, not a science stopper. Indeed, historically, the designed world view of science was pivotal to the Scientific Revolution from 1543 - 1700+. Cf. discussion here.) 2 --> So, design is a known causal factor, i.e. something that initiates and/or sustains something else that had a beginning, in existence. All of this is or should be basic, even common sensical, but when a dominant ideology has been imposed in a community in the teeth of what is common sensical, it becomes necessary to clear the air on even basic points -- even in a world in which we are literally surrounded by designed artifacts, and see their characteristic signs almost every time we turn around. (I recall, this was the same problem in the heyday of Marxism.) 3 --> So, as a preliminary step, consider Dembski's summary of design as causal process:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi.)
4 --> It is also helpful to recall here the common definition of engineering that it [intelligently] uses the forces and materials of nature, through knowledge, skill, experience and imagination, to create desired and specified objects, processes, systems and networks, etc, economically, that are to serve the common good. (That last part is key to the ethical requisites of engineering praxis.) 5 --> We may directly observe engineering and design in action -- the stringing of chains of glyphs serving as digital symbols for meaningful sentences such as in blog comments will do for an example, and we have many other cases that are subject to record or are notorious, e.g. the remains of the works of classical civilisations. 6 --> A common feature of such is evidently purposeful arrangement of parts towards a plausible goal. Aqueducts convey water, computers process information based on symbols and physically implemented logical operators [typically, based on controlled switch circuits expressing NAND logic], and typed sentences communicate. 7 --> In some cases, chance and/or mechanical necessity -- two other known causal factors -- may plausibly give rise to phenomena that look like objects of design. Random production of texts is a case in point, i.e. it is possible to get short words and sentences or the like. 8 --> But, this soon runs into a barrier: want of search resources to plausibly get to sufficiently complex cases. As Wiki summarises on random text generation:
The [infinite monkeys] theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[21] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
9 --> This also shows that even when such cases occur, they have to be detected by a wider system, or they would be lost in a sea of gibberish, the TYPICAL product of random stringing of bits or ASCII characters. In short, we see here the point about the deep isolation, in the space of possible configurations, of functional clusters. 10 --> In addition, we can see that spaces of order 10^50 have been successfully searched, but that underscores how the maximum reasonable Planck-time quantum state resources of the 10^57 atoms of our solar system are about 10^102, where even the fastest chemical reactions [ionic ones] take about 10^30 such P-times. Just 500 bits of binary digital places in a string structure would have 3 * 10^150 possibilities, so the dedication of our solar system to search such a space would be equivalent to drawing a one-straw-sized sample from a hay bale 3 1/2 light days across. A solar system could be lurking in there, but sampling theory would tell us that, overwhelmingly, we would only detect a straw. 11 --> Extending to our observed cosmos -- what we are empirically warranted to discuss -- we have 10^80 or so atoms, and 1,000 bits of info carrying capacity would swamp the 10^150 PTQS resources, to 1 in 10^150. A one-straw-sized sample from such a haystack could have millions of universes the scope of our observed universe in it, and would still be only likely to pick up a straw. Reasonably sized, but small relative to population, samples overwhelmingly tend to pick up the typical, not the atypical. 12 --> Just so, we now can see why one key sign of design is empirically reliable: complex specified information, especially in the form of functionally specific, complex information [and in particular, digitally coded information].
Once we pass a reasonable threshold of complexity, 500 - 1,000 bits, a blind chance and necessity search on the gamut of the observed solar system or cosmos, would overwhelmingly be likely to pick up gibberish or non-functional arrangements of components. (Using meshes of nodes and arcs and/or exploded views, we can convert a complex functionally organised structure into a wiring diagram, thence a structured set of yes-no questions, hence a specified number of bits adequate to give a functional specification. e.g. we could so reduce Mt Rushmore, noting that by contrast with say the former Old Man of the Mountain, the requirements of portraiture impose a tight specification that sharply constrains the number of acceptable configs.) 13 --> We also see here where the island of function in a sea of non-function imagery comes from: complexity and specificity sharply constrain the acceptable subset of the overall space of possibilities. When that complexity passes a threshold -- the solar system is our practical universe -- if a complex object is functionally specific or otherwise confined to a narrow zone of interest, we can be confident that an object with FSCI is designed, not just apparently designed. Statistical miracles of that order are operationally impossible. Indeed, that is the statistical foundation of the second law of thermodynamics. (Note Wiki's infinite monkeys discussion.) 14 --> This can be quantified, as was done here as a summary:
Chi_500 = I*S - 500, bits beyond the solar system threshold, where I is information measured in bits per the well known expression I = - log p or the like, and S is a dummy variable identifying whether or not the circumstance is specific. 73 ASCII characters in English is specific, a string of 500 coins that are tossed at random is not.
15 --> This criterion may be tested in a great many instances, and it will be shown reliable. Indeed, the whole Internet, or just the above thread, are enough to show how well it works, and why. 16 --> In short, we have candidate causal explanations, and we have a criterion to objectively distinguish, reliably, when chance and/or necessity are not plausible relative to design. 17 --> Such a conclusion can then be reasonably extended to cases where we may only see the traces of the actual deep past cause, i.e. effects that have endured to the present. 18 --> When that is done, it plainly indicates that C-chemistry, cell based life shows strong signs of design, e.g. just from DNA, which starts out at 100,000 - 1 mn and upwards of billions of bits of information. 19 --> That is itself highly significant, as it potentially revolutionises the institutional status quo on origins science. And, that explains easily the sort of controversies and selectively hyperskeptical objections and strawman objections such as has been cited in the clip being responded to. 20 --> It should be noted to KH, that inference to best, empirically and analytically anchored explanation is not a mere easily dismissed simplistic analogy. And, to set up and knock over such a strawman in the teeth of easily accessible evidence and reasoning to the contrary hardly commends the view being so advanced. (Cf. for instance, the ongoing UD ID Foundations series linked at the head of this post [link to be added as a line], and the IOSE summary page, here as well. Many other sources, including the weak argument correctives at the head of this and every UD page under the resources tab, etc., could and should have been properly consulted before tossing off the clipped argument.) GEM of TKI
kairosfocus
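The combinatorial figures in points 8-11 and the Chi_500 expression in point 14 can be checked with a short calculation. The sketch below is illustrative only: the 27-symbol alphabet (26 letters plus space) for the monkey example and the figure of 7 bits per ASCII character are assumptions made for this sketch, not part of the comment's argument.

```python
import math

def expected_tries(n: int, a: int = 27) -> int:
    """Expected number of random attempts to hit one specific
    n-character string drawn from an alphabet of size a."""
    return a ** n

# The 19-character Shakespeare match reported in the Wiki excerpt:
print(f"19 chars: ~10^{math.log10(expected_tries(19)):.1f} attempts")

# 500 bits of binary digital places: 2^500 configurations.
print(f"2^500 = {float(2**500):.2e}")

# Chi_500 = I*S - 500: bits beyond the 500-bit solar-system threshold.
# S is a dummy variable: 1 if the configuration is specified, else 0.
def chi_500(info_bits: float, specified: bool) -> float:
    return info_bits * (1 if specified else 0) - 500

print(chi_500(73 * 7, True))   # 73 specified ASCII chars at 7 bits each
print(chi_500(500, False))     # 500 random coin tosses: unspecified
```

Run as written, this gives roughly 10^27.2 expected attempts for the 19-character match (of the same order as the ~4.2e28 monkey-years quoted), 2^500 ≈ 3.27e150 (the "3 * 10^150 possibilities" of point 10), a Chi_500 of 11 bits beyond the threshold for the 73-character English string, and -500 for the random coin tosses.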
October 15, 2011 at 05:30 PM PDT
Re Dr Liddle:
It is certainly true that we do not yet have a working model of how DNA-based life came to be, although there are plenty of theories, some of which are looking promising. But so far, not supported by persuasive empirical evidence. But then, nor is ID . . .
Actually, Venter's recent work provides proof of concept that intelligent design of life is a sufficient cause of what we see in the living cell. Similarly, given the mutual ruin by Orgel and Shapiro as documented in the OP, blind chance and mechanical necessity, on genes first or metabolism first approaches, is definitely not "promising." (Given the political climate, any promising result would have been awarded a Nobel Prize and would certainly be blazed all across our headlines.) As UD News reminds us, Freeman Dyson has aptly summed up the dilemma for advocates of spontaneous abiogenesis in plausible pre-life contexts:
The origin of life is the deepest mystery in the whole of science. Many books and learned papers have been written about it, but it remains a mystery. There is an enormous gap between the simplest living cell and the most complicated naturally occurring mixture of nonliving chemicals. We have no idea when and how and where this gap was crossed. We only know that it was crossed somehow, either on Earth or on Mars or in some other place from which the ancestors of life on Earth might have come. - Freeman J. Dyson, A Many-Colored Glass: Reflections on the Place of Life in the Universe (Charlottesville, VA: University of Virginia Press, 2010), 104.
In short, what is known is that once there was no cell based life, and now there is, but there is no credible naturalistic mechanism for OOL. And, yet, thanks to Venter et al, we have proof of concept in hand that intelligent design of the cell is a doable project. Beyond that, the cell is riddled with phenomena and objects that manifest signs that, in our observation and on the infinite monkeys type analysis, reliably point to design. Phenomena like digitally coded, functionally specific info beyond the plausible reach of chance and/or necessity on the gamut of the observed cosmos, the presence of coded algorithms and data structures with executing machines, and the like, including a von Neumann self-replicating, code based mechanism. But, design of cell based life is so alien to the mindset of dominant scientific elites today that they will do almost anything rather than seriously discuss such a prospect. As the OP clip from Lewontin shows, and as further links highlight, this sort of thinking dominates the US NAS, NSTA, etc. That -- usually implicit -- a priori commitment to materialism, and so also to naturalistic explanation, by the relevant elites is a part of why it is increasingly obvious that the real issue is not the status of evidence and reasoning on it, but, sadly, prior worldview level ideological commitment and, frankly, indoctrination, by the said elites and their promoters. When it comes to the scientific or epistemic warrant status of the design inference, the start point is that we ourselves are designers, and we have a world full of artifacts that manifest the signs of design. Once such signs show themselves empirically reliable in cases we know directly, we have every reason to trust their reliability until shown otherwise in cases we do not have the opportunity to observe directly. Indeed, this is the premise of many serious areas of praxis, in origins science and studies of the deep past, in forensics, and in day to day life.
So strong is this that it is patently selective hyperskepticism -- a fallacy -- to impose a sudden dispute when the results of an inference to best explanation, on traces in the present and known causal forces sufficient to cause said patterns, do not lead where elites are willing to go. To infer that life is the product of design and manifests strong signs of being designed, after all, would be no more than the basic view taken by Wallace, the co-founder of the modern theory of evolution. For, he titled his book on the subject:
The World of Life: a manifestation of Creative Power, Directive Mind and Ultimate Purpose
On what sound grounds is it to be regarded as unscientific, or "giving up" on science, to use scientific methods to infer inductively by abduction that, per reliable sign, a known sufficient causal factor is responsible for an object, process or phenomenon that shows characteristic signs of said factor? GEM of TKI
kairosfocus
October 15, 2011 at 04:07 PM PDT
Well, not in my view, Eugene :)
Elizabeth Liddle
October 15, 2011 at 03:33 PM PDT
"It was designed because it shares features with other things that we know to be designed." Well, it is something already. Also, it depends what features we talk about. Some of them may be pretty convincing unless one wishes to remain prejudiced. To me, it has a lot more weight than simply asserting that complexity can emerge by itself and that everything came about by fluke.Eugene S
October 15, 2011 at 02:39 PM PDT
14.1.1.2.34 "Please don’t be so quick to assume that the logical fallacies are on the other side :)" One does not have to assume that, Elizabeth. It is, in fact, the case.
Eugene S
October 15, 2011 at 02:31 PM PDT
Yes, always.
ScottAndrews
October 15, 2011 at 01:38 PM PDT
OK, well, we'll leave it at that. It is certainly true that we do not yet have a working model of how DNA-based life came to be, although there are plenty of theories, some of which are looking promising. But so far, not supported by persuasive empirical evidence. But then, nor is ID :) In fact I'm not sure what it's supposed to be "supported" by, at all. But that seems a reasonable place to leave things - nice to talk to you! Cheers, Lizzie
Elizabeth Liddle
October 15, 2011 at 01:31 PM PDT
Elizabeth,
True, I don’t know that it wasn’t, Scott. But can you provide me with one iota of evidence that it was?
I'm reasonable about the weight of the ID inference. But if it's anything, it's an iota. Nonetheless, to trace such a remarkable, self-replicating entity as a cell back to a manufacturing process which in turn traces back to what sure resembles symbolic codes - that bears a remarkable resemblance to the processes long employed by intelligent agents who have never even heard of DNA. I'm not even mentioning the functions within those cells, or all the other cool stuff you can make out of them. So let's call that two iotas. That makes it the most well-supported theory by two iotas.
ScottAndrews
October 15, 2011 at 01:14 PM PDT
Oh, it probably was arbitrary, Scott. I am sure that a different set of tRNA molecules would have worked just as well - the important thing is that there is only one type per codon, which, having a reproductive advantage, would tend to evolve. The relevant part of my sentence, oddly, was the part about "agreed by the community of code-sharers". True, I don't know that it wasn't, Scott. But can you provide me with one iota of evidence that it was? It's not me who is "question-begging" here! You can't infer intelligent input by hypothesising intelligent input! Please don't be so quick to assume that the logical fallacies are on the other side :)
Elizabeth Liddle
October 15, 2011 at 12:57 PM PDT
Elizabeth, But this is just one more.
Once you drop the requirement that the assignment of symbol to referent is an arbitrary one agreed by the community of code-sharers, which you have to do to include DNA
First, you are assuming that the assignment of symbol was not arbitrary. How do you know this? We ID folks are constantly reminded that neither OOL nor evolution searches for a specific target. But now you are saying the very opposite, that if it were designed then only these precise symbols could be used. The integrated circuits within my Pentium processor will only accept highly specified combinations of symbols as instructions. You could argue then that the symbols are not arbitrary, because only certain ones will initiate the right reaction. But in reality the symbols, the medium, and the processor were all designed to function together. That is precisely what makes their arrangement so intelligent. You are asserting that this could not be the case with DNA. Support it. If these symbols could not be arbitrary, then what other elements of life would you like to identify which necessarily conform to the exact pattern we observe and could not have occurred any other way, and what impact does that have on the probability of life arising by chance?
agreed by the community of code-sharers
And this is blatant question-begging. DNA is not a symbol because it was not agreed to by a community of code-sharers. And you know this how? You have contradicted your own logic, made assumptions without supporting them, and begged the question. Blow - poof!
ScottAndrews
October 15, 2011 at 12:40 PM PDT
I’ve lost count of how many arguments you’ve made attempting to distinguish DNA from symbolic code, and each disintegrates if you blow on it.
Well, not in my view, Scott. You seem to be missing my point every time! That's why I've tried presenting you with the consequences of your (in my view metaphorical) use of the word code. In order to define "code" or "symbol" in a way that includes DNA, you have to drop the very property that gives you the inference you want! Once you drop the requirement that the assignment of symbol to referent is an arbitrary one agreed by the community of code-sharers, which you have to do to include DNA, then you no longer have any case for saying that therefore the code must have been designed! I am using the words in the standard semiotic senses. If we drop the semiotics, you also lose your inference. Take your pick :)
Elizabeth Liddle
October 15, 2011 at 12:12 PM PDT
Darn. Let me repost this part:
Moreover, the DNA molecule itself, or its sequence, without the cell it normally inhabits, does not represent the organism it belongs to, in any sense.
The code to Windows 7 tapped out in morse code doesn’t do me any good without a computer. Again, what distinction are you making?
ScottAndrews
October 15, 2011 at 12:08 PM PDT
Elizabeth,
Without an actual DNA molecule, nothing will happen. Writing “CAG” on a piece of paper won’t give you a glutamine molecule, no matter how many molecules of ink you use.
Writing "horse" on a piece of paper doesn't give me a horse, either. What distinction are you making?
All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it’s fallacious.
The code to Windows 7 tapped out in morse code doesn't do me any good without a computer. Again, what distinction are you making?
All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it’s fallacious.
It wouldn't be fallacious if every available definition of jackass was "a thing with long ears." And if someone picked a thing with long ears and started making up arbitrary, meaningless reasons why it wasn't a jackass, one would have to wonder what they have invested in it not being a jackass.
human codes involve the agreed assignations of symbols to referents by a group of code users, as well as the fact that those symbols are not themselves instrumental in rendering their meaning.
You're begging the question. (It was only a matter of time.) How do you know that a group of code users didn't do just that?
the fact that those symbols are not themselves instrumental in rendering their meaning.
My Pentium interprets a certain set of symbols as a specific instruction. Those symbols initiate electronic reactions. One could make the exact same case that those symbols are instrumental in rendering their meaning. I've lost count of how many arguments you've made attempting to distinguish DNA from symbolic code, and each disintegrates if you blow on it. I haven't even pointed out anything you didn't already know. You're repeatedly making arguments that contradict your own knowledge. You do not appear to be reasoning on these things, applying what you already know. That is why I say it is irrational.
ScottAndrews
October 15, 2011 at 12:04 PM PDT
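Scott's processor analogy above, in which symbols both carry a meaning and physically trigger an operation, can be sketched as a toy instruction decoder. The opcodes and operations below are invented for illustration only; they are not real Pentium encodings, and the point is just that the opcode bytes are arbitrary symbols whose meaning is fixed solely by the decoder built to read them:

```python
# Toy instruction decoder: each opcode byte is an arbitrary symbol
# whose "meaning" exists only because the decoder was built to match it.
# Opcode values here are invented for illustration, not real encodings.
OPCODES = {
    0x01: lambda a, b: a + b,   # hypothetical "ADD" encoding
    0x02: lambda a, b: a - b,   # hypothetical "SUB" encoding
}

def execute(opcode, a, b):
    """Dispatch on the opcode symbol, as a hardware decoder would."""
    return OPCODES[opcode](a, b)

print(execute(0x01, 2, 3))  # 5
print(execute(0x02, 5, 2))  # 3
```

Swapping the two dictionary entries would give the same bytes the opposite meanings, which is the sense in which the symbol-to-operation assignment is arbitrary even though the symbols physically initiate the reaction.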
He = a human being. Sorry that wasn't clear! (Though it's an old chestnut.) The point being that taking a generalisation about something that is true, then concluding that when you use that something as a metaphor the generalisation also applies, is fallacious. My favorite is the politician's one that goes: Something must be done. This is something. Therefore, this must be done. All too true, unfortunately.
Elizabeth Liddle
October 15, 2011 at 12:03 PM PDT
Just a comment on board mechanics. UD now has threaded comments, so it makes no sense to edit someone else's post in order to reply. I assume this is a habit left over from the previous board software.
Petrushka
October 15, 2011 at 11:52 AM PDT
All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it’s fallacious.
Shouldn't that be: All jackasses have long ears. He has long ears. Therefore, he is a jackass. Perhaps you have an inability to commit fallacy.
Petrushka
October 15, 2011 at 11:33 AM PDT
Yes, you can represent codons symbolically. That doesn't make codons symbols, though! And, when rendered as alphabetic letters, they are incapable of making proteins or RNA molecules. Moreover, the DNA molecule itself, or its sequence, without the cell it normally inhabits, does not represent the organism it belongs to, in any sense. Did you watch that wonderful Denis Noble lecture?

And you are completely missing my point about the molecules. Most symbols are made of molecules. Some aren't - some are patterns of energy, for instance auditory symbols like words. But that's beside the point - the point is that it isn't at the molecule level that the meaning is carried. This is not the case with DNA. Without an actual DNA molecule, nothing will happen. Writing "CAG" on a piece of paper won't give you a glutamine molecule, no matter how many molecules of ink you use.

But nonetheless, you can call it a "code" or a "symbol" if you want to. But in that case, don't go saying - see, it's a code! And we know that codes (in the normal usage) are made by minds! Therefore DNA was made by a mind! That's equivocation! It's tantamount to saying: All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it's fallacious.

Yes, the DNA sequence has something in common with human codes. It also has a great deal that is different, not least being the fact that human codes involve the agreed assignations of symbols to referents by a group of code users, as well as the fact that those symbols are not themselves instrumental in rendering their meaning.
Elizabeth Liddle
October 15, 2011 at 11:15 AM PDT
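The disputed "CAG means glutamine" example can be made concrete by treating the genetic code as a symbol table, which is how both sides are implicitly describing it. A minimal sketch, using three codons from the standard genetic code (the dictionary is illustrative, not a full 64-entry table):

```python
# A fragment of the standard genetic code, treated as a lookup table:
# each three-letter codon maps to one amino-acid abbreviation.
CODON_TABLE = {
    "CAG": "Gln",  # glutamine -- the example from the thread
    "TGG": "Trp",  # tryptophan
    "ATG": "Met",  # methionine (also the start signal)
}

def translate(sequence):
    """Map a DNA sequence to amino-acid names, codon by codon."""
    return [CODON_TABLE[sequence[i:i + 3]]
            for i in range(0, len(sequence) - len(sequence) % 3, 3)]

print(translate("ATGCAGTGG"))  # ['Met', 'Gln', 'Trp']
```

The lookup captures what the two sides agree on, the fixed codon-to-referent mapping, while leaving open exactly what they dispute: whether that mapping counts as a symbol assignment or only as chemistry.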
Elizabeth, I'll expand on this:
But again, those symbols are assigned by minds, then read off by minds
Do you know how many messages and signals encoded in symbols are being passed back and forth within your computer right now? Or how many are traded back and forth between your computer and various servers on the internet? There are components between your computer and those servers that are sending each other messages to help them send and receive your messages. What mind is reading these off?

Yes, the origin is a mind. That is why we infer that other such meaningful arrangements of symbols likely also are. But what mind is reading them? None. They are set in motion to communicate with one another. Factoring out the unknown origin, what is the logical inconsistency between such processes and what occurs when cells reproduce?

If the information in DNA were deliberately arranged and the processes for transcribing them designed, this would be entirely consistent with the known intelligent pattern of designing systems that use symbols for internal communication. (And, as previously stated, if they were not arranged and designed then they would be consistent with absolutely nothing.) Why would you even suggest that symbols must be processed by a mind?
ScottAndrews
October 15, 2011 at 11:13 AM PDT
Elizabeth,
Well, I’m just using it as usually used – some kind of representation, that can be in any medium
You make this distinction as if the contents of DNA cannot be in any other medium. They already are. A, G, C, and T are popular. When biologists map genes, how do they store the data? In more DNA molecules? They use a computer. They could write it on paper. They could use morse code if they wanted to.
That doesn’t seem to me to include molecules at all.
I don't know of any medium that doesn't include molecules. You're making an arbitrary determination about what can be a medium. Bytes, yes. Symbols on paper, yes. Hand-clapping modulated as morse code, yes. A specific facial expression, yes. A sequence of molecules, no. I'm sorry, but you're just making that up.

The point is that symbols and language are known implements of intelligence. The ability to use language is sometimes used as an indicator of intelligence. In contrast, there are no known instances of languages or symbols arising apart from intelligent purpose. I'll posit that it is unimaginable. Prove me wrong. Show that you or anyone else can even imagine it in any amount of detail without using the word "somehow."

The arrangements of DNA are clearly not random, as the results of their transcription are not random. That such a code of unknown origin was also purposefully designed is a valid inference. And it's the only conclusion with any connection whatsoever to observed reality. Given that, I'd guess in this order: 1) It was designed 2) Its origin is an absolute mystery 3) Yes is no and true is false 3) It emerged naturally from something which had no use for it, because inanimate things have no use for anything. (The last two are neck and neck.)
ScottAndrews
October 15, 2011 at 10:42 AM PDT
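Scott's point above, that the same sequence information can move between media, including Morse code, can be sketched directly. This is a minimal illustration assuming the standard International Morse assignments for the letters A, C, G, and T:

```python
# Standard International Morse assignments for the four DNA letters.
MORSE = {"A": ".-", "C": "-.-.", "G": "--.", "T": "-"}

def to_morse(dna):
    """Re-encode a DNA letter sequence as Morse, one group per base."""
    return " ".join(MORSE[base] for base in dna)

print(to_morse("CAG"))  # -.-. .- --.
```

The sequence survives the change of medium unchanged, which is the sense in which the argument says the information is distinct from any particular molecular carrier; whether that licenses calling the original a "code" is exactly what the thread goes on to dispute.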
Well, I'm just using it as usually used - some kind of representation, that can be in any medium, that has a referent agreed by all the people who use it. That doesn't seem to me to include molecules at all. But as I said, let's nonetheless accept your usage: what point are you making?
Elizabeth Liddle
October 15, 2011 at 10:03 AM PDT
Elizabeth,
I think it’s a huge stretch, because a molecule takes part in the process of “translation”
Then we're back to the demarcation problem (or, rather, you are). Define "symbols" in a manner which includes all known means of information processing but excludes this specific means of processing. The definition cannot include an assumption regarding whether it was designed.
ScottAndrews
October 15, 2011 at 09:50 AM PDT
OK (well, as I say, I think it's a huge stretch, because a molecule takes part in the process of "translation" - acts as a template, for instance, while a picture of a dog doesn't, which seems to me to be rather an important difference) - that's fine as far as it goes, but I meant, what part of your ID argument does your identification of codons as symbols form? Or are we just arguing nomenclature because we are both pedants :)?
Elizabeth Liddle
October 15, 2011 at 09:45 AM PDT
Elizabeth,
What exactly are you inferring from your identification of codons as symbols?
I am inferring that, rather than storing miniature representations of finished products, the needed data has been altered and compressed to a form that better suits the purpose of both storage and transcription, but which no longer bears a resemblance to what it represents. That is the essence of language. I can say "dog," which is easy, instead of drawing a picture of a dog or producing an actual dog. I can also produce a fully detailed description of a dog in the form of its DNA, which again is easier to transport and transcribe than an actual dog, and does not even resemble one.

Compare that to a book about dogs and a book about ships. How do you tell which is which? By which one looks more like a book and which one smells like a dog? No, the books look more like each other. If you don't read the language, you cannot even distinguish them. The same could be said of spoken words.

You would have a point if the medium and elements for representing a dog were different than those for tulips. But the same medium and processes are used in both cases. Like it or not, that's a language. Whether we are writing about dogs or tulips we use the same letters and most of the same words. It's not the reactions that make them symbols. It's their consistent reuse to describe varying things.
ScottAndrews
October 15, 2011 at 08:17 AM PDT