
ID Foundations, 8: Switcheroo — the error of asserting without adequate observational evidence that the design of life (from OOL on) is achievable by small, chance-driven, success-reinforced increments of complexity leading to the iconic tree of life

Algorithmic hill-climbing first requires a hill . . .

[UD ID Founds Series, cf. Bartlett on IC]

Ever since Dawkins’ Mt Improbable analogy, a common argument of design objectors has been that such complex designs as we see in life forms can “easily” be achieved incrementally, by steps within plausible reach of chance processes that are then stamped in by success, i.e. by hill-climbing; with success measured by reproductive advantage and what used to be called “survival of the fittest.”

[Added, Oct 15, given a distractive strawmannisation problem in the thread of discussion: NB: The wide context in view, plainly, is the Dawkins Mt Improbable type of hill-climbing, which is broader than but related to particular algorithms that bear that label.]

Weasel’s “cumulative selection” algorithm (c. 1986/7) was the classic — and deeply flawed, even outright misleading — illustration of Dawkinsian evolutionary hill-climbing.
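To see concretely what is being claimed, here is a minimal Python sketch of a Weasel-style cumulative-selection climber. NB: Dawkins never published his exact program, so the mutation rate, brood size and keep-the-best rule below are illustrative assumptions, not his original settings:

```python
import random

# Weasel-style cumulative selection (illustrative parameters throughout).
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05   # assumed per-character mutation probability
BROOD_SIZE = 100       # assumed offspring per generation

def fitness(candidate):
    """Proximity-to-target measure: count of matching characters.
    Note the target is wired into the measure itself -- this is the
    built-in goal-seeking discussed below."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent):
    """Copy the parent, randomising each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else c for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    brood = [mutate(parent) for _ in range(BROOD_SIZE)]
    # Cumulative selection: the closest candidate is "rewarded with offspring".
    parent = max(brood + [parent], key=fitness)
print(f"Matched the target in {generation} generations")
```

Note how each generation’s winner is judged by distance to a pre-loaded target: the “hill” is supplied by the programmer, which is precisely the point at issue.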

To stir fresh thought and break out of the all too common stale and predictable exchanges over such algorithms, let’s put on the table a key remark by Stanley and Lehman, in promoting their particular spin on evolutionary algorithms, Novelty Search:

. . . evolutionary search is usually driven by measuring how close the current candidate solution is to the objective. [ –> Metrics include ratio, interval, ordinal and nominal scales; this being at least ordinal] That measure then determines whether the candidate is rewarded (i.e. whether it will have offspring) or discarded. [ –> i.e. if further moderate variation does not improve, you have now reached the local peak after hill-climbing . . . ] In contrast, novelty search [which they propose] never measures progress at all. Rather, it simply rewards those individuals that are different.

Instead of aiming for the objective, novelty search looks for novelty; surprisingly, sometimes not looking for the goal in this way leads to finding the goal [–> notice, an admission of goal-directedness . . . ] more quickly and consistently. While it may sound strange, in some problems ignoring the goal outperforms looking for it. The reason for this phenomenon is that sometimes the intermediate steps to the goal do not resemble the goal itself. John Stuart Mill termed this source of confusion the “like-causes-like” fallacy. In such situations, rewarding resemblance to the goal does not respect the intermediate steps that lead to the goal, often causing search to fail . . . .

Although it is effective for solving some deceptive problems, novelty search is not just another approach to solving problems. A more general inspiration for novelty search is to create a better abstraction of how natural evolution discovers complexity. An ambitious goal of such research is to find an algorithm that can create an “explosion” of interesting complexity reminiscent of that found in natural evolution.

While we often assume that complexity growth in natural evolution is mostly a consequence of selection pressure from adaptive competition (i.e. the pressure for an organism to be better than its peers), biologists have shown that sometimes selection pressure can in fact inhibit innovation in evolution. Perhaps complexity in nature is not the result of optimizing fitness, but instead a byproduct of evolution’s drive to discover novel ways of life.

While their spin is not without its own problems in promoting their school of thought — there is an unquestioned matter-of-factness about evolution doing this that is but little warranted by actual observed empirical facts at the body-plan origins level, and it is by no means a given that “evolution” will reward mere novelty — some pretty serious admissions against interest are made.
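To make the contrast between the two selection regimes concrete, here is a minimal Python sketch. NB: the toy domain (points in a plane drifting toward a goal position) and every parameter below are illustrative assumptions, not Stanley and Lehman’s benchmark tasks:

```python
import math
import random

GOAL = (9.0, 9.0)   # assumed goal position in a toy 2-D behaviour space

def objective_fitness(point):
    """The usual selection measure: reward proximity to the objective."""
    return -math.dist(point, GOAL)

def novelty(point, archive, k=5):
    """Novelty search's measure: mean distance to the k nearest points
    already visited. The goal appears nowhere in this measure."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(point, other) for other in archive)
    return sum(dists[:k]) / len(dists[:k])

def evolve(measure, generations=100, pop=20, step=0.5):
    population = [(0.0, 0.0)] * pop
    archive = []
    for _ in range(generations):
        offspring = [(x + random.gauss(0, step), y + random.gauss(0, step))
                     for x, y in population]
        offspring.sort(key=lambda p: measure(p, archive), reverse=True)
        archive.extend(offspring[:2])           # remember what has been tried
        population = offspring[:pop // 2] * 2   # the "rewarded" reproduce
    return min(math.dist(p, GOAL) for p in population)

print(evolve(lambda p, archive: objective_fitness(p)))  # goal-driven selection
print(evolve(novelty))                                  # reward mere difference
```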

Now, since this “mysteriously” seems to be controversial in the comment thread below, courtesy Wikipedia, let us add [Sat, Oct 15] a look at a “typical” topology of a fitness landscape, noticing how there is an uphill slope all around it, i.e. we are looking at islands of function that lead uphill to local maxima by hill-climbing in the broad, Dawkinsian, cumulative steps up Mt Improbable sense:

A "typical" fitness landscape, with local maxima, saddle and uphill trends

Now, too, right from the opening remarks in the clip, Stanley and Lehman acknowledge how targeted searches dominate the evolutionary algorithm field, a point often hotly denied by advocates of GAs as good models of how evolution is said to have happened:

. . . evolutionary search is usually driven by measuring how close the current candidate solution is to the objective. That measure [ –> Metrics include ratio, interval, ordinal and nominal scales; this being at least ordinal] then determines whether the candidate is rewarded (i.e. whether it will have offspring) or discarded [ –> i.e. if further moderate variation does not improve, you have now reached the local peak after hill-climbing . . . ] . . . .  in some problems ignoring the goal outperforms looking for it. The reason for this phenomenon is that sometimes the intermediate steps to the goal do not resemble the goal itself. John Stuart Mill termed this source of confusion the “like-causes-like” fallacy. In such situations, rewarding resemblance to the goal does not respect the intermediate steps that lead to the goal, often causing search to fail

We should also explicitly note what should be obvious, but evidently is not to many: nice, trend-based uphill climbing in a situation where the authors of a program have loaded in a fitness function with trends and peaks is built-in goal-seeking behaviour (as the first illustration above shows).

Similarly, we see how the underlying assumption of a smoothly progressive hill-climbing trend to the goal is highly misleading in a world where there may be irreducibly complex outcomes, where the components separately do not move you to the target of performance, but when suitably joined together yield an emergent result not predictable from projecting trend lines. (Of course, Stanley and Lehman tiptoe quietly around explicitly naming that explosive concept. But that is exactly what is at work in the case where “intermediate steps” do not lead to a goal: it is not “steps” but components that, as a core cluster, must all be present and must be organised in the right pattern to work together, to have the resulting function. Even something as common as a sentence tends to exhibit this pattern, and algorithm-implementing software is a special case of that. Think about how often a single error can trigger failure.)
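A toy sketch shows why trend-following breaks down in such cases: where function appears only once a core cluster of parts is present and correctly arranged, the fitness signal stays flat at zero short of the full cluster, leaving no slope to climb. The part names and the five-part core below are purely illustrative, not a model of any particular biological system:

```python
# All-or-nothing functionality: the full core cluster, or nothing.
CORE = {"part_a", "part_b", "part_c", "part_d", "part_e"}   # illustrative

def functionality(parts):
    """Returns 1 only when every core part is present -- no partial credit,
    hence no trend line for an incremental search to follow."""
    return 1 if CORE <= set(parts) else 0

print(functionality(["part_a", "part_b"]))                   # 0
print(functionality(["part_a", "part_b", "part_c",
                     "part_d", "part_e"]))                   # 1
```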

The incrementalist claim, then, is by no means the sure thing that the usual confident, breezily assured assertions we hear ever so often would suggest. For, the fallacy of confident manner lurks.

Secondly, let us also note how the incrementalist objection actually implies a key admission or two.

For one, we can see that apparent design is a recognised fact of the world of life, i.e. as Dawkins acknowledges in the opening remarks of his The Blind Watchmaker, 1986, and as Proponentist has raised in the current Freethinker UD thread:

Biology is the study of complicated things that give the appearance of having been designed for a purpose.

Elsewhere, in River out of Eden (1995), as Proponentist also highlights, Dawkins adds:

The illusion of purpose is so powerful that biologists themselves use the assumption of good design as a working tool.

These two remarks underscore a point objectors to design thought are often loath to acknowledge: namely, that design scientist William Dembski is fundamentally right: significant increments in functionally specific complexity beyond a threshold, by blind chance and/or mechanical necessity, are so improbable as to be effectively operationally impossible on the gamut of our observed universe.

Similarly, as Proponentist goes on to ask:

How does Mr. Dawkins know that something gives the appearance of design? Can his statement be tested scientifically?

Obviously, if Mr. Dawkins is correct, then he is talking about “evidence that design can be observed in nature” . . . . You can either observe design (of some kind) or not. If you can observe it, then you already distinguish it from non-design.

This is already a key point: as a routine matter, we recognise that — on a wealth of experience and observation — complex, functionally specific arrangements of parts towards a goal, are best explained as intentionally and intelligently chosen, composed or directed. That is, as designed.

Darwin's original sketch of his Tree of Life icon of Evolution

But, the onward Darwinist idea is that every instance of claimed design in the world of life can be reduced to a process of incremental changes that gradually accumulate from some primitive original self-replicating organism (and beyond that, an original self-replicating molecule or molecular cluster), through the iconic Darwinian tree of life — already, a consciously ironic switcheroo on the Biblical Tree of Life in Genesis and Revelation.

So, already, through the battling cultural icons, we know that much more than simply science is at stake here.

So also, we know to be on special guard against questionable worldview assumptions such as those promoted by Lewontin and so many others.

Now, too, design objector Petrushka has thrown down a rhetorical gauntlet in the current UD Freethinker thread:

One can accept the inference that a complex system didn’t arise in one step by chance without saying anything specific about its history.

The argument is about the specific history, not whether 500 or whatever bits of code arose purely by chance . . . . The word “design,” whether apparent or otherwise means nothing. It’s a smoke screen. The issue is whether known mechanisms can account for the history.

Words like “smoke screen” imply an unfortunate accusation of deception, and put a fairly stiff burden of proof on those who use them. Which — on fair comment — has not been met, and cannot be soundly met, as the accusation is simply false.

Similarly “purely by chance” is a strawman caricature.

One, that ducks the observed fact that there are exactly two observed sources of highly contingent outcomes: chance [e.g. what would happen on tossing a tray of dice] and intelligent arrangement [e.g. arranging the same tray of dice in a specific pattern]. Mechanical necessity [e.g. a dropped heavy object reliably falls, accelerating at 9.8 m/s² near earth’s surface] is not a source of high contingency. So, in the combination of blind chance and mechanical necessity, the highly contingent outcomes would be coming from the chance component.

Nevertheless, we need to show that “design” is most definitely not a meaningless or utterly confusing term, generally or in the context of the world of life.

That’s why I replied:

Design is itself a known, empirically observed, causal mechanism. Its specific methods may vary, but designs are as familiar as the composition of the above clipped sentences of ASCII text: purposeful arrangement of parts, towards a goal, and typically manifesting a coherence in light of that purpose.

The arrangement of 151 ASCII 128-state characters above as clipped [from the first part of the cite from Petrushka] is one of 1.544*10^318 possibilities for that many ASCII characters.

The Planck Time Quantum State resources of the observed universe, across its thermodynamically credible lifespan (50 million times the time since the usual date for the big bang), could not take up as many as 1 in 10^150 of those possibilities. Translated into a haystack-sampling picture: millions of cosmi comparable to the observed universe could be lurking in a haystack that big, and yet a sample as large as the full PTQS resources of a single cosmos would overwhelmingly be likely to pick up only straw. (And, the fastest chemical interactions take about 10^30 PTQSs.)
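The arithmetic is easy to check with Python’s exact integers (the ~10^150 figure is the PTQS bound used in this series):

```python
possibilities = 128 ** 151          # configurations of 151 ASCII characters
print(len(str(possibilities)))      # 319 digits, i.e. ~1.544 * 10^318
print(str(possibilities)[:4])       # leading digits: 1544

# Fraction of the space reachable by ~10^150 Planck-time quantum states,
# scaled up by 10^170 so integer division shows the leading digits:
print(10**(150 + 170) // possibilities)   # 64 -> a fraction of ~6.4 * 10^-169
```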

It is indisputable that a coherent, contextually responsive sequence of ASCII characters in English — a definable zone of interest T, from which your case E above comes — is a tiny and unrepresentative sample of the space of possibilities for 151 ASCII characters, W.

We habitually and routinely know of just one cause that can credibly account for such a purposeful arrangement of ASCII characters in a string structure that fits into T: design. The other main known causal factors at this level — chance and/or necessity, without intelligent intervention — would predictably throw out only gibberish in creating strings of that length, even if you were to convert millions of cosmi the scope of our own observed one into monkeys and word processors, with forests, banana plantations etc. to support them.

In short, there is good reason to see that design is a true causal factor. One rooted in intelligence and purpose, that makes purposeful arrangements of parts, which are often recognisable from the resulting functional specificity in the field of possibilities, joined to the degree of complexity involved.

As a practical matter, 500 – 1,000 bits of information-carrying capacity is a good enough threshold for the relevant degree of complexity. Or, using the simplified chi metric at the lower end of that range:

Chi_500 = I*S – 500, in bits beyond the solar system threshold; where I is information measured in bits, and S is a dummy variable set to 1 when the configuration is functionally specific, and 0 otherwise.
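As a quick sketch of how the metric is applied, using the two worked examples given later in this discussion (73 ASCII characters of English text, versus 500 coins tossed at random):

```python
# Simplified chi metric: Chi_500 = I*S - 500.  I is information-carrying
# capacity in bits; S is 1 for a functionally specific configuration, else 0.
def chi_500(info_bits, specified):
    return info_bits * (1 if specified else 0) - 500

print(chi_500(73 * 7, True))   # 73 ASCII chars at 7 bits each: 511*1 - 500 = 11
print(chi_500(500, False))     # 500 random coins: 500*0 - 500 = -500
```

Only the first case crosses the threshold, so only it would warrant a design inference on this metric.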

So, when we see the manifestation of FSCO/I, we do have a known, adequate mechanism, and ONLY one known, adequate mechanism. Design.

That is why FSCO/I is so good as an empirically detectable sign of design, even when we do not otherwise know the causal history of origin.

{Added: this can be expressed through the explanatory filter, applied per aspect of a phenomenon or process, allowing individual aspects best explained by mechanical necessity, chance and intelligence to be separated out, step by step in our analysis:

The (per aspect) Design Inference Explanatory Filter}
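In outline, the per-aspect filter reduces to a short decision procedure. NB: this is a sketch only; the 500-bit solar-system threshold is the figure used above, and the input flags describing each aspect are illustrative:

```python
# Toy per-aspect explanatory filter: law, then chance, then design.
def best_explanation(contingent, info_bits, specified):
    if not contingent:
        return "mechanical necessity (law)"   # low contingency: natural regularity
    if specified and info_bits >= 500:
        return "design"                       # complex AND specified
    return "chance"                           # contingent, but below threshold

print(best_explanation(False, 0, False))    # e.g. a dropped object reliably falling
print(best_explanation(True, 500, False))   # e.g. 500 coins tossed at random
print(best_explanation(True, 511, True))    # e.g. 73 ASCII characters of English
```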

Do you really mean to demand of us that we believe that design by an intelligence with a purpose is not a known causal mechanism? If so, what then accounts for the PC you are using? The car you may drive, or the house or apartment etc. that you may live in?

Do you see how you have reduced your view to blatant, selectively hyperskeptical absurdity?

And, of course, the set of proteins and DNA for even the simplest living systems is well beyond the FSCI threshold: 100,000 – 1 million+ DNA bases is well beyond 1,000 bits of information-carrying capacity.
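The capacity figure follows directly from DNA’s four-state alphabet, at two bits per base of raw carrying capacity (prior to any functional measure):

```python
from math import log2

bits_per_base = log2(4)   # four bases -> 2 bits of capacity per base
for bases in (100_000, 1_000_000):
    print(f"{bases:,} bases -> {int(bases * bits_per_base):,} bits of capacity")
# 100,000 bases -> 200,000 bits; 1,000,000 bases -> 2,000,000 bits:
# both dwarf the 1,000-bit threshold discussed above.
```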

Yes, that points to design as the best explanation of living systems in light of the known cause of FSCO/I. What’s new about that or outside the range of views of qualified and even eminent scientists across time and today?

Similarly, the incrementalist thesis, that blind chance and mechanical necessity working through trial and error/success suffice, has some stiff challenges to meet:

. . . the usual cases of claimed observed incremental creation of novel info beyond the FSCI threshold, as a general rule boil down to:

(a) targeted movements within an island of function, where the implicit, designed-in information of a so-called fitness function of a well behaved type — trends help rather than lead to traps — is allowed to emerge step by step. (Genetic Algorithms are a classic of this.)

(b) The focus is placed on a small part of the process, much like how, if a monkey were indeed to type out a Shakespearean sonnet by random typing, there would now be a major search challenge to identify that this has happened, i.e. to find the successful case in the field of failed trials.

(c) We are discussing relatively minor adaptations of known functions in organisms already well beyond the FSCI threshold — hybridisation, or breaking down of function based on small mutations, etc. For instance, antibiotic resistance, from a Design Theory view, must be recognised in light of the prior question: how do we get to a functioning bacterium based on coded DNA in the first place? (Somehow, the circularity of evolutionary materialism leads ever so many to fail to see that the ability to adapt to niches and changes may well be part of a robust design!)

(d) We see a gross exaggeration of the degree and kind of change involved, e.g. copying of existing info is not creation of new FSCI. A small change in a regulatory component of the genome that shifts how a gene is expressed, is a small change, not a jump in FSCI. Insertion of a viral DNA segment is creation of a copy and transfer to a new context, not innovation of information. Etc.

(e) We see circularity, e.g. the viral DNA is assumed to be of chance origin.

And so forth.

In short, some big questions were silently being begged all along in the discussions and promotions of genetic algorithms as reasonable analogies for body plan level evolution, and in the assertions that blind chance variations plus culling out of the less reproductively successful can account for complex functional organisation and associated information as we see in cell based life.

Let us therefore ask a key question about the state of actual observed evidence: has the suggested gradual emergence of life from an organic chemical stew in some warm little pond or a deep-sea volcano vent or a comet core or a moon of Jupiter, etc, been empirically warranted?

Nope, as the following recent exchange between Orgel and Shapiro will directly confirm — after eighty years of serious trying to substantiate Darwin’s warm little pond suggestion, neither the metabolism first nor the Genes/RNA first approaches work or are even promising:

[Shapiro:] RNA’s building blocks, nucleotides, contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern . . . . [S]ome writers have presumed that all of life’s building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life . . . . To rescue the RNA-first concept from this otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . .
[Orgel:] If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . . It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? . . . Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help. [Emphases added.]

Of course, in the three or so years since (and despite occasional declarations to the contrary; whether in this blog or elsewhere . . . ), the case has simply not got any better. [If you doubt me, simply look for the Nobel Prize that has been awarded for the resolution of the OOL challenge in the past few years. To save time, let me give the answer: there simply is none.]

Bottom line: the proposed Darwinian Tree of Life has no tap-root.

Modern presentation of the Darwinian Tree of Life -- note the origin of life bubble at its root, which shows the pivotal importance of the root, the main trunk and branches

No roots, no shoots, and no branches.

[Cont’d. on  p. 2]

Comments
F/N: I have just now added illustrations and a video on the protein synthesis process (on p. 2), HT BA 77, UD News, UD Web master. KF.
kairosfocus
December 1, 2011, 11:40 PM PDT
F/N 3: EA's discussion on IC and FSCI vs Avida here in response to Dr Liddle is well worth a read.
kairosfocus
October 16, 2011, 05:18 AM PDT
F/N: I have added comments above, (a) here in response to Dr Bot against Venter as providing proof of concept of the engineering design of cell based life and its components through lab methods, and (b) here in response to Dr Liddle (and KH) on "where are the gaps" and the like. The new layout makes it hard to follow sub threads and spot comments, so we eagerly await the chronological view modification. I also must note that it is just a tad frustrating to see direct evidence of failure to do basic due diligence on reading a post before commenting to object, and on similarly failing to do due diligence on what has been going on in nanotechnology over these 20 and more years, and why it is that Venter's work is so significant. GEM of TKI
kairosfocus
October 16, 2011, 02:42 AM PDT
Dr Liddle (and KH): It is a little hard to follow sub discussions on the new format, so responses will occasionally lag comments. (JC tells us it will take a while to program the requested viewing options feature. Does someone out there know of a WP plug-in that will do the trick?) On the main matter, the trade secret of paleontology and Cambrian revolution excerpts on p. 2 of the original post should make it clear that there is a major problem of a want of relevant branching in the fossil record; the only actual record of the remote past of life beyond written report, however we may date it. Those are facts, facts based on over 250,000 fossil species, millions of samples in museums, and billions in known fossil beds. The plain record is that the first time we see complex body plans, dozens at phylum and sub phylum levels, there are no credible antecedents and on the usual timeline there is a very narrow window in which they appear. This suddenness was known to Darwin, but he anticipated that extensive investigation would correct the gaps then apparent. The extensive investigations have been carried forth over 150 years, and the net result is that we are back at the same point of sudden appearance, stasis, disappearance and/or continuation into the modern world, with gaps at root, trunk and branch level in the tree of life. What the fossils overwhelmingly tell us about -- headlines, museum exhibits and textbooks with breezily confident declarations notwithstanding -- is what everyone agrees on: twig level minor adaptations. In addition, whether or not you are willing to accept it, the proposed darwinian means of generating variations, chance, is sharply challenged to cross config spaces to reach isolated islands of function. This starts with first life, and continues with major body plan innovations that require many co-ordinated changes to all be in place at once. (Cf no 3 in the series on IC and the co-option problem in light of constraints C1 - 5, as well as the Bartlett paper now linked top of this page). It is highly significant that the standard Darwinist response and tendency is to emphasise NATURAL SELECTION, rather than chance variation as the claimed engine by which innovations of function can happen incrementally. But this is misleading, usually inadvertently so. For, the "selection" part is a matter of destructive culling out, not actual addition of information. The fittest surviving is based on the less fit NOT surviving, i.e. certain variants disappear across time, whether at once or in the Malthusian "struggle for existence" that Darwin highlighted. Subtraction is not addition. So, necessarily, it is the chance variation that darwinian mechanism is relying on as the candidate to create: protein etc codes, regulatory code, co-ordinated development mechanisms that trigger at proper stages from zygote on, and more, all in data structures, and in body plans that must unfold in a precise and specific sequence with fatal error as a highly likely outcome if the precise unfolding for body plan formation [cf. addition on p. 2 of OP] is randomly varied. Chance plus trial and error in short. But we already know that complex and functionally specific configurations are rare and isolated in spaces of all possible configurations, as a general rule. As, your friendly local junkyard will tell you. 
Thus, there is a definite search-space traversal challenge in a context where isolated islands of function sit in seas of non-function, and the resources of the solar system will be overwhelmed at 500 bits, and those of the observed cosmos at 1,000. So, there is a particular burden of proof on advocates of darwinian mechanisms to demonstrate that the tree of life, at its main nodes, is well demonstrated empirically. The first body plan, i.e. OO of cell based life, is a particularly important point, as the OP highlights: no root, no shoots, no branches and so no tree. As p. 2 of the OP shows, this confronts us with the need to originate a vNSR, which is massively irreducibly complex and riddled with FSCI well beyond the 500 - 1,000 bit threshold. As the mutual ruin of Orgel and Shapiro shows, now backed up by Freeman Dyson's recent summary on the "mystery," the only reasonable candidate on the table for OOL is design. And, despite Dr Bot's strictures, Venter did provide proof of concept, and indeed, it is now 20 years since the underlying basic technical means and possibilities have been demonstrated. Worse, this is all in an observed cosmos that shows all sorts of signs of being finely tuned and set up for C-chemistry, aqueous medium, cell-based life. That is, we have strong reason to infer to design of the cosmos for life, and strong reason to infer to the design of life from the outset. That sharply shifts the balance of epistemological plausibility on the origin of body plans, and in particular the human body plan and the human mind and conscience. Design -- despite much hostility and many attempts to lock it out a priori, by the likes of Lewontin, Sagan, the US NAS, the US NSTA, the NCSE etc etc etc, and despite the way that politically correct thought police tactics have been resorted to, to lock it out -- is a viable explanation, and on OO of cosmos and life, is arguably the best or only viable causal explanation. (Indeed, that is what best explains the a priori tactics being used to try to lock it out.) And, pardon directness: you plainly have not read p. 2 of the OP, BTW -- "I don’t see anything in the OP about “observed gaps”, apart from the well-known OOL gap." Finally, please explain to me what the Gould and Meyer excerpts on p. 2 are about, if not observed and major fossil record "gaps." (And, the OOL gap you seem to wish to glide over as if it were insignificant is pivotal, as it drastically shifts the balance of epistemic plausibility in favour of design explanations, especially when joined to the cosmological evidence on fine tuning of our observed cosmos.) So, pardon if you find the above exchanges frustrating, but from my perspective, they do show the reasons for the sort of concerns and issues and gaps in addressing evidence on the table that I have pointed to. GEM of TKI
kairosfocus
October 16, 2011, 02:23 AM PDT
F/N: It's hard to follow the sub-threads, so when I spot something . . . Above it is claimed in effect that Venter was only rearranging the parts of already existing cell based self-replicating forms, so this cannot be proof of concept for the origin of said cells by engineering methods. This re-statement should be enough to show the fatal defect in the objection. For, we see from Venter that the COMPOSITION of DNA etc can be manipulated by engineering means. As the just linked CSM article summarises:
After almost 15 years of work and $40 million, a team of scientists at the J. Craig Venter Institute says they have succeeded in creating the first living organism with a completely synthetic genome. This advance could be proof that genomes designed in a computer and assembled in a lab can function in a donor cell, eventually reproducing fully functional living creatures, that is, artificial life. As described today in the journal Science, the study scientists constructed the genome of the bacterium Mycoplasma mycoides from more than 1,000 sections of preassembled units of DNA. Researchers then transplanted the artificially assembled genome into a M. capricolum cell that had been emptied of its own genome. Once the DNA "booted up," the bacteria began to function and reproduce in the same manner as naturally occurring M. mycoides. "It's a culmination of a series of impressive steps," Ron Weiss, an associate professor of biological engineering at MIT who was not associated with the study, told LiveScience.com. "If you look over the last few years, at what they've been able to produce, it's definitely impressive. Being able to create genomes of this scale? That's impressive." To boot up, the DNA utilized elements of the M. capricolum recipient cells, according to study team member Carole Lartigue of the Venter Institute. The bacterial cells still contained certain "machinery" that let them carry out the process of expressing a gene, or taking the genetic code and using it to build proteins – called transcription. When the artificial genome entered the cell, the cellular machines that run DNA transcription recognized the DNA, and began doing their job, Lartigue said. "This cell's lineage is the computer, it's not any other genetic code," said Daniel Gibson, lead author of the Science paper, also of Venter Institute.
So, we know that nanotech manipulation and composition methods exist and can be used in a molecular nanotech lab. This should be grounds for seeing that the same general sort of methods extend to the original composition of said components, modules and organised integrated frameworks in a cell. Indeed, we did not strictly NEED Venter -- or his yeast cut-and-paste methods -- to see that manipulation of nanotech, atom level components by chemical and mechanical means is possible. It is over 20 years ago that the famous "IBM" made of 35 xenon atoms on a substrate by means of a scanning-tunnelling microscope was published and blazed to the world as a picture. (Cf discussion and image here and timeline of achievements here.) In sum, it has been publicly and generally known for over 20 years that atomic manipulation nanotechnology is possible, and indeed a field of research and applications has grown up across that time. What Venter has done is to directly show that this new field of science and technology is relevant to the world of life, starting with the creation and modification of individual cells. And -- whatever selectively hyperskeptical objectors may want to declare dismissively -- that is indeed empirical proof of concept for the use of similar technologies as a SUFFICIENT cause of the origin of life. GEM of TKI
kairosfocus
October 16, 2011, 01:40 AM PDT
Elizabeth,
Starting with a minimally functional population of self-replicators in an environment, replicating with variance, adaptation will tend to occur, as we see both in the field and the lab, and in simulations like AVIDA.
What are these minimally functional self-replicators, and where are their populations? What sort of adaptations have been observed, not counting loss-of-function mutations? Statements such as that cited give the impression that adaptation beyond meaningless phenotypic changes that come and go has been observed, perhaps more than once. In reality they are more like bigfoot, but only if scientists owned stock in bigfoot and were trying to drive it up.
ScottAndrews
October 15, 2011, 08:36 PM PDT
Re KH:
how does “design” explain anything at all other then it allows you to say “the explanation for it’s origin is that it was designed”. It was designed because it appears to have been designed. It was designed because it shares features with other things that we know to be designed. But that’s not an “explanation” at all, it’s just changing the label.
1 --> Start from a basic reality: we ourselves are designers, and we produce artifacts that often show characteristic empirical signs of design. And it is patently a good explanation of the cause of a house, a house-fire [--> a highly significant case], a car, a pencil, a computer, a Stonehenge, etc to say: it was designed, per reliable signs manifest from the object and/or its traces itself. (BTW, have you looked up TRIZ, the theory of inventive problem solving, as already pointed to? Try here as a start. This suffices to show that how-to is a world of relevant study. That X was designed on empirically tested reliable signs opens up a world of reverse engineering and onward forward engineering methods. Contrary to ever so many glib and/or barbed talking points, such is a science-starter, not a science stopper. Indeed, historically, the designed world view of science was pivotal to the Scientific revolution from 1543 - 1700+. Cf. discussion here.] 2 --> So, design is a known causal factor, i.e something that initiates and/or sustains something else that had a beginning, in existence. All of this is or should be basic, even common sensical, but when a dominant ideology has been imposed in a community in the teeth of what is common sensical, it becomes necessary to clear the air on even basic points. Even in a world in which we are literally surrounded by designed artifacts, and see their characteristic signs almost every time we turn around. (I recall, this was the same problem in the heyday of Marxism.) 3 --> So, as a preliminary step, in Dembski's summary of design as causal process:
. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi.)
4 --> It is also helpful to recall here the common definition of engineering that it [intelligently] uses the forces and materials of nature, through knowledge, skill, experience and imagination, to create desired and specified objects, processes, systems and networks, etc, economically, that are to serve the common good. (That last part is key to the ethical requisites of engineering praxis.) 5 --> We may directly observe engineering and design in action -- the stringing of chains of glyphs serving as digital symbols for meaningful sentences such as in blog comments will do for an example, and we have many other cases that are subject to record or are notorious, e.g. the remains of the works of classical civilisations. 6 --> A common feature of such is evidently purposeful arrangement of parts towards a plausible goal. Aqueducts convey water, computers process information based on symbols and physically implemented logical operators [typically, based on controlled switch circuits expressing NAND logic], and typed sentences communicate. 7 --> In some cases, chance and/or mechanical necessity -- two other known causal factors -- may plausibly give rise to phenomena that look like objects of design. Random production of texts is a case in point, i.e. it is possible to get short words and sentences or the like. 8 --> But, this soon runs into a barrier: want of search resources to plausibly get to sufficiently complex cases. As Wiki summarises on random text generation:
The [infinite monkeys] theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[21] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
9 --> This also shows that even when such cases occur, they have to be detected by a wider system, or they would be lost in a sea of gibberish, the TYPICAL product of random stringing of bits or ASCII characters. In short, we see here the point about the deep isolation in the space of possible configurations, of functional clusters. 10 --> In addition, we can see that spaces of order 10^50 have been successfully searched, but that underscores how the maximum reasonable Planck time, quantum state resources of the 10^57 atoms of our solar system are about 10^102, where even the fastest chemical reactions [ionic ones] take about 10^30 such P-times. Just 500 bits of binary digital places in a string structure, would have 3 * 10^150 possibilities, so the dedication of our solar system to search such a space, would be equivalent to drawing a 1- straw sized sample from a hay bale 3 1/2 light days across. A solar system could be lurking in there, but sampling theory would tell us that, overwhelmingly we would only detect a straw. 11 --> Extending to our observed cosmos -- what we are empirically warranted to discuss -- we have 10^80 or so atoms, and 1,000 bits of info carrying capacity would swamp the 10^150 PTQS resources, to 1 in 10^150. A one straw sized sample from such a haystack could have millions of universes the scope of our observed universe in it, and would still be only likely to pick up a straw. Reasonable sized, but small relative to population samples overwhelmingly tend to pick up the typical, not the atypical. 12 --> Just so, we now can see why one key sign of design is empirically reliable: complex specified information, especially in the form of functionally specific, complex information [and in particular, digitally coded information]. Once we pass a reasonable threshold of complexity, 500 - 1,000 bits, a blind chance and necessity search on the gamut of the observed solar system or cosmos, would overwhelmingly be likely to pick up gibberish or non-functional arrangements of components. (Using meshes of nodes and arcs and/or exploded views, we can convert a complex functionally organised structure into a wiring diagram, thence a structured set of yes-no questions, hence a specified number of bits adequate to give a functional specification. e.g. we could so reduce Mt Rushmore, noting that by contrast with say the former Old Man of the Mountain, the requirements of portraiture impose a tight specification that sharply constrains the number of acceptable configs.) 13 --> We also see here where the island of function in a sea of non-function imagery comes from: complexity and specificity sharply constrain the acceptable subset of the overall space of possibilities. When that complexity passes a threshold -- the solar system is our practical universe -- if a complex object is functionally specific or otherwise confined to a narrow zone of interest, we can be confident that an object with FSCI is designed, not just apparently designed. Statistical miracles of that order are operationally impossible. Indeed, that is the statistical foundation of the second law of thermodynamics. (Note Wiki's infinite monkeys discussion.) 14 --> This can be quantified, as was done here as a summary:
Chi_500 = I*S - 500, bits beyond the solar system threshold, where I is information measured in bits per the well-known expression I = –log2 p or the like, and S is a dummy variable identifying whether or not the circumstance is specific. 73 ASCII characters in English is specific; a string of 500 coins that are tossed at random is not.
15 --> This criterion may be tested in a great many instances, and it will be shown reliable. Indeed, the whole Internet, or just the above thread, is enough to show how well it works, and why. 16 --> In short, we have candidate causal explanations, and we have a criterion to objectively distinguish, reliably, when chance and/or necessity are not plausible relative to design. 17 --> Such a conclusion can then be reasonably extended to cases where we may only see the traces of the actual deep past cause, i.e. effects that have endured to the present. 18 --> When that is done, it plainly indicates that C-chemistry, cell based life shows strong signs of design, e.g. just from DNA, which starts out at 100,000 – 1 million bits and ranges upwards to billions of bits of information. 19 --> That is itself highly significant, as it potentially revolutionises the institutional status quo on origins science. And, that explains easily the sort of controversies and selectively hyperskeptical objections and strawman objections such as have been cited in the clip being responded to. 20 --> It should be noted to KH that inference to best, empirically and analytically anchored explanation is not a mere easily dismissed simplistic analogy. And, to set up and knock over such a strawman in the teeth of easily accessible evidence and reasoning to the contrary hardly commends the view being so advanced. (Cf. for instance, the ongoing UD ID Foundations series linked at the head of this post [link to be added as a line], and the IOSE summary page, here as well. Many other sources, including the weak argument correctives at the head of this and every UD page under the resources tab, etc, could and should have been properly consulted before tossing off the clipped argument.) GEM of TKI
kairosfocus
October 15, 2011, 05:30 PM PDT
Re Dr Liddle:
It is certainly true that we do not yet have a working model of how DNA-based life came to be, although there are plenty of theories, some of which are looking promising. But so far, not supported by persuasive empirical evidence. But then, nor is ID . . .
Actually, Venter's recent work provides proof of concept that shows that intelligent design of life is a sufficient cause of what we see in the living cell. Similarly, given the mutual ruin by Orgel and Shapiro as documented in the OP, blind chance and mechanical necessity, on genes first or metabolism first approaches, is definitely not "promising." (Given the political climate, any promising result would have been awarded a Nobel Prize and would certainly be blazed all across our headlines.) As UD News reminds us, Freeman Dyson has aptly summed up the dilemma for advocates of spontaneous abiogenesis in plausible pre-life contexts:
The origin of life is the deepest mystery in the whole of science. Many books and learned papers have been written about it, but it remains a mystery. There is an enormous gap between the simplest living cell and the most complicated naturally occurring mixture of nonliving chemicals. We have no idea when and how and where this gap was crossed. We only know that it was crossed somehow, either on Earth or on Mars or in some other place from which the ancestors of life on Earth might have come. - Freeman J. Dyson, A Many-Colored Glass: Reflections on the Place of Life in the Universe (Charlottesville, VA: University of Virginia Press, 2010), 104.
In short, what is known is that once there was no cell based life, and now there is, but there is no credible naturalistic mechanism for OOL. And, yet, thanks to Venter et al, we have proof of concept in hand that intelligent design of the cell is a doable project. Beyond that, the cell is riddled with phenomena and objects that manifest signs that, in our observation and on the infinite monkeys type analysis, reliably point to design. Phenomena like digitally coded, functionally specific info beyond the plausible reach of chance and/or necessity on the gamut of the observed cosmos, the presence of coded algorithms and data structures with executing machines, and the like, including a von Neumann self-replicating, code based mechanism. But, design of cell based life is so alien to the mindset of dominant scientific elites today that they will do almost anything rather than seriously discuss such a prospect. As the OP clip from Lewontin shows, and further links highlight, this sort of thinking dominates the US NAS, NSTA, etc. That -- usually implicit -- a priori commitment to materialism and so also to naturalistic explanation by the relevant elites is a part of why it is increasingly obvious that the real issue is not the status of evidence and reasoning on it, but, sadly, prior worldview level ideological commitment and, frankly, indoctrination, by the said elites and their promoters. When it comes to the scientific or epistemic warrant status of the design inference, the start point is that we ourselves are designers, and we have a world full of artifacts that manifest the signs of design. Once such signs show themselves empirically reliable in cases we know directly, we have every reason to trust their reliability until shown otherwise in cases we do not have the opportunity to observe directly. Indeed, this is the premise of many serious areas of praxis, in origins science and studies of the deep past, in forensics, and in day to day life. So strong is this that it is patently selective hyperskepticism -- a fallacy -- to impose a sudden dispute when the results of an inference to best explanation on traces in the present and known causal forces sufficient to cause said patterns do not lead where elites are willing to go. To infer that life is the product of design and manifests strong signs of being designed, after all, would be no more than the basic view that the co-founder of the modern theory of evolution, Wallace, took. For, he titled his book on the subject:
The World of Life: a manifestation of Creative Power, Directive Mind and Ultimate Purpose
On what sound grounds is it to be regarded as unscientific, or "giving up" on science, to use scientific methods to infer inductively by abduction that, per reliable sign, a known sufficient causal factor is responsible for an object, process or phenomenon that shows characteristic signs of said factor? GEM of TKI
kairosfocus
October 15, 2011, 04:07 PM PDT
Well, not in my view, Eugene :)
Elizabeth Liddle
October 15, 2011, 03:33 PM PDT
"It was designed because it shares features with other things that we know to be designed." Well, it is something already. Also, it depends what features we talk about. Some of them may be pretty convincing unless one wishes to remain prejudiced. To me, it has a lot more weight than simply asserting that complexity can emerge by itself and that everything came about by fluke.Eugene S
October 15, 2011
October
10
Oct
15
15
2011
02:39 PM
2
02
39
PM
PDT
14.1.1.2.34 "Please don't be so quick to assume that the logical fallacies are on the other side :)" One does not have to assume that, Elizabeth. It is, in fact, the case.
Eugene S
October 15, 2011, 02:31 PM PDT
Yes, always.
ScottAndrews
October 15, 2011, 01:38 PM PDT
OK, well, we'll leave it at that. It is certainly true that we do not yet have a working model of how DNA-based life came to be, although there are plenty of theories, some of which are looking promising. But so far, not supported by persuasive empirical evidence. But then, nor is ID :) In fact I'm not sure what it's supposed to be "supported" by, at all. But that seems a reasonable place to leave things - nice to talk to you! Cheers, Lizzie
Elizabeth Liddle
October 15, 2011, 01:31 PM PDT
Elizabeth,
True, I don’t know that it wasn’t, Scott. But can you provide me with one iota of evidence that it was?
I'm reasonable about the weight of the ID inference. But if it's anything, it's an iota. Nonetheless, to trace such a remarkable, self-replicating entity as a cell back to a manufacturing process which in turn traces back to what sure resembles symbolic codes - that bears a remarkable resemblance to the processes long employed by intelligent agents who have never even heard of DNA. I'm not even mentioning the functions within those cells, or all the other cool stuff you can make out of them. So let's call that two iotas. That makes it the most well-supported theory by two iotas.
ScottAndrews
October 15, 2011, 01:14 PM PDT
Oh, it probably was arbitrary, Scott. I am sure that a different set of tRNA molecules would have worked just as well - the important thing is that there is only one type per codon, which, having a reproductive advantage, would tend to evolve. The relevant part of my sentence, oddly, was the part about "agreed by the community of code-sharers". True, I don't know that it wasn't, Scott. But can you provide me with one iota of evidence that it was? It's not me who is "question-begging" here! You can't infer intelligent input by hypothesising intelligent input! Please don't be so quick to assume that the logical fallacies are on the other side :)
Elizabeth Liddle
October 15, 2011, 12:57 PM PDT
Elizabeth, But this is just one more.
Once you drop the requirement that the assignment of symbol to referent is an arbitrary one agreed by the community of code-sharers, which you have to do to include DNA
First you are assuming that the assignment of symbol was not arbitrary. How do you know this? Us ID folks are constantly reminded that neither OOL nor evolution searches for a specific target. But now you are saying the very opposite, that if it were designed then only these precise symbols could be used. The integrated circuits within my Pentium processor will only accept highly specified combinations of symbols as instructions. You could argue then that the symbols are not arbitrary, because only certain ones will initiate the right reaction. But in reality the symbols, the medium, and the processor were all designed to function together. That is precisely what makes their arrangement so intelligent. You are asserting that this could not be the case with DNA. Support it. If these symbols could not be arbitrary then what other elements of life would you like to identify which necessarily conform to the exact pattern we observe and could not have occurred any other way, and what impact does that have on the probability of life arising by chance?
agreed by the community of code-sharers
And this is blatant question-begging. DNA is not a symbol because it was not agreed to by a community of code-sharers. And you know this how? You have contradicted your own logic, made assumptions without supporting them, and begged the question. Blow - poof!
ScottAndrews
October 15, 2011, 12:40 PM PDT
I’ve lost count of how many arguments you’ve made attempting to distinguish DNA from symbolic code, and each disintegrates if you blow on it.
Well, not in my view, Scott. You seem to be missing my point every time! That's why I've tried presenting you with the consequences of your (in my view metaphorical) use of the word code. In order to define "code" or "symbol" in a way that includes DNA, you have to drop the very property that gives you the inference you want! Once you drop the requirement that the assignment of symbol to referent is an arbitrary one agreed by the community of code-sharers, which you have to do to include DNA, then you no longer have any case for saying that therefore the code must have been designed! I am using the words in the standard semiotic senses. If we drop the semiotics, you also lose your inference. Take your pick :)
Elizabeth Liddle
October 15, 2011, 12:12 PM PDT
Darn. Let me repost this part:
Moreover, the DNA molecule itself, or its sequence, without the cell it normally inhabits, does not represent the organism it belongs to, in any sense.
The code to Windows 7 tapped out in Morse code doesn't do me any good without a computer. Again, what distinction are you making?
ScottAndrews
October 15, 2011, 12:08 PM PDT
Elizabeth,
Without an actual DNA molecule, nothing will happen. Writing “CAG” on a piece of paper won’t give you a glutamine molecule, no matter how many molecules of ink you use.
Writing "horse" on a piece of paper doesn't give me a horse, either. What distinction are you making?
All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it’s fallacious.
The code to Windows 7 tapped out in Morse code doesn't do me any good without a computer. Again, what distinction are you making?
All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it’s fallacious.
It wouldn't be fallacious if every available definition of jackass was "a thing with long ears." And if someone picked a thing with long ears and started making up arbitrary, meaningless reasons why it wasn't a jackass, one would have to wonder what they have invested in it not being a jackass.
human codes involve the agreed assignations of symbols to referents by a group of code users, as well as the fact that those symbols are not themselves instrumental in rendering their meaning.
You're begging the question. (It was only a matter of time.) How do you know whether a group of code users didn't do just that?
the fact that those symbols are not themselves instrumental in rendering their meaning.
My Pentium interprets a certain set of symbols as a specific instruction. Those symbols initiate electronic reactions. One could make the exact same case that those symbols are instrumental in rendering their meaning. I've lost count of how many arguments you've made attempting to distinguish DNA from symbolic code, and each disintegrates if you blow on it. I haven't even pointed out anything you didn't already know. You're repeatedly making arguments that contradict your own knowledge. You do not appear to be reasoning on these things, applying what you already know. That is why I say it is irrational.
ScottAndrews
October 15, 2011, 12:04 PM PDT
He = a human being. Sorry that wasn't clear! (Though it's an old chestnut.) The point being that taking a generalisation about something that is true, then concluding that when you use that something as a metaphor the generalisation also applies, is fallacious. My favorite is the politician's one that goes: Something must be done. This is something. Therefore, this must be done. All too true, unfortunately.
Elizabeth Liddle
October 15, 2011 at 12:03 PM PDT
Just a comment on board mechanics. UD now has threaded comments, so it makes no sense to edit someone else's post in order to reply. I assume this is a habit left over from the previous board software.
Petrushka
October 15, 2011 at 11:52 AM PDT
All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it’s fallacious.
Shouldn't that be: All jackasses have long ears. He has long ears. Therefore, he is a jackass. Perhaps you have an inability to commit a fallacy.
Petrushka
October 15, 2011 at 11:33 AM PDT
Yes, you can represent codons symbolically. That doesn't make codons symbols though! And, when rendered as alphabetic letters, they are incapable of making proteins or RNA molecules. Moreover, the DNA molecule itself, or its sequence, without the cell it normally inhabits, does not represent the organism it belongs to, in any sense. Did you watch that wonderful Denis Noble lecture?

And you are completely missing my point about the molecules. Most symbols are made of molecules. Some aren't - some are patterns of energy, for instance auditory symbols like words. But that's beside the point - the point is that it isn't at the molecule level that the meaning is carried. This is not the case with DNA. Without an actual DNA molecule, nothing will happen. Writing "CAG" on a piece of paper won't give you a glutamine molecule, no matter how many molecules of ink you use.

But nonetheless, you can call it a "code" or a "symbol" if you want to. But in that case, don't go saying - see, it's a code! And we know that codes (in the normal usage) are made by minds! Therefore DNA was made by a mind! That's equivocation! It's tantamount to saying: All jackasses have long ears. He is a jackass. Therefore, he has long ears. In other words, it's fallacious.

Yes, the DNA sequence has something in common with human codes. It also has a great deal that is different, not least being the fact that human codes involve the agreed assignations of symbols to referents by a group of code users, as well as the fact that those symbols are not themselves instrumental in rendering their meaning.
Elizabeth Liddle
October 15, 2011 at 11:15 AM PDT
Elizabeth, I'll expand on this:
But again, those symbols are assigned by minds, then read off by minds
Do you know how many messages and signals encoded in symbols are being passed back and forth within your computer right now? Or how many are traded back and forth between your computer and various servers on the internet? There are components between your computer and those servers that are sending each other messages to help them send and receive your messages. What mind is reading these off?

Yes, the origin is a mind. That is why we infer that other such meaningful arrangements of symbols likely also are. But what mind is reading them? None. They are set in motion to communicate with one another. Factoring out the unknown origin, what is the logical inconsistency between such processes and what occurs when cells reproduce?

If the information in DNA were deliberately arranged and the processes for transcribing them designed, this would be entirely consistent with the known intelligent pattern of designing systems that use symbols for internal communication. (And, as previously stated, if they were not arranged and designed then they would be consistent with absolutely nothing.) Why would you even suggest that symbols must be processed by a mind?
ScottAndrews
October 15, 2011 at 11:13 AM PDT
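To picture the mindless message-passing described in the comment above, here is a toy Python sketch. The token names and the rule table are invented purely for illustration; no real network protocol is being implemented or quoted here.

    # Toy sketch of machine-to-machine messaging: each side maps an incoming
    # token to an outgoing one, mechanically. The tokens and rules are
    # invented; this is not any real protocol.
    RULES = {
        "HELLO": "HELLO-ACK",    # greeting begets acknowledgement
        "HELLO-ACK": "DATA",     # acknowledgement begets payload
        "DATA": "DONE",          # payload begets completion
    }

    def respond(message):
        """Produce the next token for an incoming one; no mind reads either."""
        return RULES.get(message)  # None ends the exchange

    message = "HELLO"
    while message is not None:
        print("sent:", message)
        message = respond(message)
    # Output: sent: HELLO / sent: HELLO-ACK / sent: DATA / sent: DONE

The exchange runs to completion with no interpreter in the loop, which is the narrow point being made; where the rule table itself came from is the question the thread is arguing about.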
Elizabeth,
Well, I’m just using it as usually used – some kind of representation, that can be in any medium
You make this distinction as if the contents of DNA cannot be in any other medium. They already are. A, G, C, and T are popular. When biologists map genes, how do they store the data? In more DNA molecules? They use a computer. They could write it on paper. They could use morse code if they wanted to.
That doesn’t seem to me to include molecules at all.
I don't know of any medium that doesn't include molecules. You're making an arbitrary determination about what can be a medium. Bytes, yes. Symbols on paper, yes. Hand-clapping modulated as morse code, yes. A specific facial expression, yes. A sequence of molecules, no. I'm sorry, but you're just making that up.

The point is that symbols and language are known implements of intelligence. The ability to use language is sometimes used as an indicator of intelligence. In contrast, there are no known instances of languages or symbols arising apart from intelligent purpose. I'll posit that it is unimaginable. Prove me wrong. Show that you or anyone else can even imagine it in any amount of detail without using the word "somehow."

The arrangements of DNA are clearly not random, as the results of their transcription are not random. That such a code of unknown origin was also purposefully designed is a valid inference. And it's the only conclusion with any connection whatsoever to observed reality. Given that, I'd guess in this order: 1) It was designed. 2) Its origin is an absolute mystery. 3) Yes is no and true is false. 4) It emerged naturally from something which had no use for it, because inanimate things have no use for anything. (The last two are neck and neck.)
ScottAndrews
October 15, 2011 at 10:42 AM PDT
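The medium-independence claim above ("They could use morse code if they wanted to") can be shown in a few lines. A minimal Python sketch, assuming only the standard International Morse patterns for the letters A, C, G, and T; the input sequence is an arbitrary example.

    # Sketch: the same sequence information carried in another medium.
    # The Morse patterns for A, C, G, T are the standard ones; the input
    # sequence is arbitrary.
    MORSE = {"A": ".-", "C": "-.-.", "G": "--.", "T": "-"}

    def to_morse(sequence):
        """Re-encode a DNA letter sequence as Morse, symbol for symbol."""
        return " ".join(MORSE[base] for base in sequence.upper())

    print(to_morse("GATTACA"))  # --. .- - - .- -.-. .-

The sequence information survives the change of medium intact, which is all this sketch claims; whether that settles the "symbol" question is left to the disputants.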
Well, I'm just using it as usually used - some kind of representation, that can be in any medium, that has a referent agreed by all the people who use it. That doesn't seem to me to include molecules at all. But as I said, let's nonetheless accept your usage: what point are you making?
Elizabeth Liddle
October 15, 2011 at 10:03 AM PDT
Elizabeth,
I think it's a huge stretch, because a molecule takes part in the process of "translation"
Then we're back to the demarcation problem (or, rather, you are). Define "symbols" in a manner which includes all known means of information processing but excludes this specific means of processing. The definition cannot include an assumption regarding whether it was designed.
ScottAndrews
October 15, 2011 at 09:50 AM PDT
OK (well, as I say, I think it's a huge stretch, because a molecule takes part in the process of "translation" - acts as a template, for instance, while a picture of a dog doesn't, which seems to me to be rather an important difference) - that's fine as far as it goes, but I meant, what part of your ID argument does your identification of codons as symbols form? Or are we just arguing nomenclature because we are both pedants :)?
Elizabeth Liddle
October 15, 2011 at 09:45 AM PDT
Elizabeth,
What exactly are you inferring from your identification of codons as symbols?
I am inferring that rather than storing miniature representations of finished products, the needed data has been altered and compressed to a form that better suits the purpose of both storage and transcription, but which no longer bears a resemblance to what it represents. That is the essence of language. I can say "dog," which is easy, instead of drawing a picture of a dog or producing an actual dog. I can also produce a fully detailed description of a dog in the form of its DNA, which again is easier to transport and transcribe than an actual dog, and does not even resemble one.

Compare that to a book about dogs and a book about ships. How do you tell which is which? By which one looks more like a book and which one smells like a dog? No, the books look more like each other. If you don't read the language, you cannot even distinguish them. The same could be said of spoken words.

You would have a point if the medium and elements for representing a dog were different than those for tulips. But the same medium and processes are used in both cases. Like it or not, that's a language. Whether we are writing about dogs or tulips we use the same letters and most of the same words. It's not the reactions that make them symbols. It's their consistent reuse to describe varying things.
ScottAndrews
October 15, 2011 at 08:17 AM PDT
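The "books look more like each other" point above lends itself to a small demonstration. A minimal Python sketch: two invented one-line descriptions are pushed through the same encoding (zlib compression plus base64, standing in for any encoding at all), and the outputs resemble each other rather than their subjects.

    # Sketch of the "books look like each other" point: two different texts,
    # run through the same encoding, come out in the same opaque alphabet.
    # The texts are invented; zlib and base64 merely stand in for "an encoding".
    import base64
    import zlib

    dog = b"A four-legged domestic animal that barks."
    ship = b"A large vessel that carries cargo over water."

    for text in (dog, ship):
        encoded = base64.b64encode(zlib.compress(text))
        # Neither output resembles its subject; both resemble each other.
        print(encoded)

Without the decoder (here, base64 plus zlib; in the comment's analogy, the reader's language), the two encoded streams are indistinguishable in kind, which is the narrow point the sketch illustrates.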