Uncommon Descent Serving The Intelligent Design Community

The Image of Pots and Kettles ….


I was just reading this fairly well-written article and came upon one of its last paragraphs.

It’s an interesting take by a, shall we say, “non-scientist”:

“These scientists argue that only ‘rational agents’ could have possessed the ability to design and organise such complex systems.

Whether or not they are right (and I don’t know), their scientific argument about the absence of evidence to support the claim that life spontaneously created itself is being stifled – on the totally perverse grounds that this argument does not conform to the rules of science which require evidence to support a theory.”

You have to like this logic: the scientific community doesn’t want to entertain ID, with its implicit argument that there is no evidence to support RM+NS, on the perverse grounds that ID is not a scientific theory because it lacks evidence to support it.

Yes, indeed, the “image of pots and kettles”!

Here’s the link.
Arrogance, dogma and why science – not faith – is the new enemy of reason

Comments
Hi Joseph: You are right that we should not beg questions by imposing methodological naturalism with the underlying philosophical materialism that lurks therein -- denials notwithstanding. However, all that requires is that we be open to the three major causal possibilities [which can interact], i.e. chance, necessity, agency. In the end, that is what ID asks for, then puts on board a reliable tool for identifying certain important cases of the last of these three mechanisms. Reliable? [In every case where the explanatory filter votes "design" and we have independent knowledge of the cause, it is accurate.] On the second point, it is worse for the chemistry-reductionist thesis than that, for dead organisms may have living tissues and cells in them – the whole living organism is plainly more than and different from the simple sum of the physical parts. Then, when it comes to the minds we need to think credibly about such matters . . . it seems evolutionary materialist thinkers have inescapably undercut their own ability to think. We are looking at self-referential inconsistency here. GEM of TKI
kairosfocus
August 22, 2007, 04:07 AM PDT
Kairosfocus- I know I haven't said it recently, but I have always maintained that if living organisms didn't arise from non-living matter via stochastic/blind watchmaker-type processes, then there would be no reason to infer those processes have sole dominion over any subsequent evolution. Also I should note that dead organisms have the SAME chemicals as their living counterparts. Yet they are still dead. That alone refutes Art2's premise about chemistry and living organisms.
Joseph
August 21, 2007, 08:43 AM PDT
BA and Joseph: It seems Art is incommunicado for a week or so, and that the above was in effect his last post, absent the thread keeping going for the intervening week. So, in effect, we are looking at wrap-up, methinks – unless someone else steps up to the plate. BTW, BA, thanks for the kind words. [I do quietly note my last formal qualification in Math is the third major in my u/grad degree, supplementing my “home” double physics major. Beyond that, I am an applied physicist, who also broadened to take in an MBA with a focus on strategic change.] A few quick thoughts: 1] Elephant Hurling, Literature Bluffing and Selective Hyper-Skepticism: These are of course colourful names, apparently originating in the apologetics and ID movements, for persuasive but misleading arguments often directed at them/us. Some have objected that adverting to the issues by those names is improper, I suppose by extending the “taint” of those movements. But if targets/victims of a certain tactic give it a convenient name, the substance stands/falls on the merits, not the name. So, to definitions:
ELEPHANT HURLING: giving a one-sided summary or declaration of “expert” or “credible” claimed “consensus” opinions on a matter in dispute, as if that settles the matter, without having to refer to the discussion and resolve the question on the merits of fact and logic. It persuades by the “credibility” of the authorities on the favoured side, and thus is an example of improper – because biased -- appeal to authority. This is of course to be distinguished from citing authorities on “your” side of a dispute in a context that is either engaging in the discussion or is balancing remarks made on the other side, so that onlookers may see for themselves well enough both sides of the issue. LIT BLUFFING: On being challenged on hurling, some artful debaters will proceed to do a literature reference “dump” that may sometimes be hard to track down or address in a live debate; i.e. they are claiming, through merely piling up numbers of cites, that the “weight”/“consensus” of “credible” scholarship is on their side. However, on tracking down the references, it soon turns out that the cites are irrelevant to the matter at stake; i.e. the cites may use terms that happen to show up in a search engine's results, or may brush at the issue tentatively, or may be speculative, or the like. In any case, the pile of cites is insufficient to settle the matter, and the matter should be addressed on the merits clearly and fairly to come to a well-warranted conclusion. SELECTIVE HYPER-SKEPTICISM: My own modest contribution here is to give a descriptive title to a common skeptical debate tactic long since addressed by the likes of a Simon Greenleaf and frequently used in theological, historical, statistical, philosophical and scientific contexts, e.g. Sagan's “extraordinary claims require extraordinary evidence.” No, they only require ADEQUATE and reasonable evidence!
The fallacy works by asserting, in effect, that if I can doubt your claim [as opposed to my own claim] relative to arbitrarily high standards of proof, I can dismiss it. The core issue is, of course, that by consistent application of that standard, the whole field of knowledge vanishes (including those knowledge claims that lurk under the assertions of such skepticism), poof; radical and universal skepticism is self-referentially absurd. But, if one can SELECTIVELY apply skepticism to ideas one is inclined to disbelieve, skepticism that one in fact does not apply to similarly supported ideas one accepts -- e.g. both are based on similarly warranted claimed matters of fact, or use similar scientific or statistical approaches -- one can pretend to be “rigorous” while begging the question and being inconsistent in one's handling of issues. This of course frequently backs up the other two fallacies just above. If we insist on adequate and consistent standards of warrant, especially by giving clearly parallel cases where the same approach and substantially the same conclusion are generally accepted, then that suffices to expose the selectiveness of the radical skepticism being used.
2] Joseph, 89: the same holds for Art2- IOW he is also going to have to bring chemistry and biology into the picture if he is going to assert that all of life’s diversity owes its collective common ancestry to some unknown population of single-celled organisms. Actually, this misses the core challenge; on evolutionary materialist views, one properly has to account for the ORIGIN of life relative to the plausible physical, geological and chemical factors present in the observed cosmos, and especially on earth at the relevant time – this is to get TO the claimed population of last universal common ancestral unicellular organisms. THEN, on examining the information systems, storage media and scale, and molecular nanomachines in the cell, one also has to credibly and empirically account for body-plan level biodiversity. In short, there are challenges on abiogenesis and on macroevolution. Cf my always linked for a balancing discussion – IMHCO, neither of these challenges has been adequately acknowledged, much less faced squarely and addressed properly on the merits, by the many evolutionary materialism advocates – especially on the origin of biologically relevant information. 3] Art2: what is the biological data which can account for the physiological and anatomical differences observed between chimps and humans? In Art's absence, are there any takers? [Remember, this is not even really a serious step towards answering the challenges just identified – we share a fundamentally similar body plan with the chimps. But there are certain issues to be addressed: Haldane's dilemma, Genetic Entropy, Behe's observed “edge” of evolution, the infamous “98% similarity” in genes – and I hear (kindly address) that the 2% difference is mostly in inconsequential stuff too, also that we have very similar genes to worms, fish and the like – is the banana our chimp holds in its mouth in the picture in on that surprising degree of overlap, too?
If NDT is the biological equivalent of atomic theory, surely it can give us a good account here.] GEM of TKI
kairosfocus
August 20, 2007, 11:38 PM PDT
Art: "My assertion that all of what we do know about cells reduces to chemistry is a spot-on, completely accurate, if very abbreviated statement."
If that is what we "know" about cells, then I would have to say that we don't "know" very much at all.
Art: "To close (at least for now – we’ll see where things stand at the end of the month), I think discussants here need to face the fact that, somewhere in any discussion of information, evolution, the OOL, and whatever, one is going to have to bring actual chemistry and biology into the picture."
And the same holds for Art2- IOW, he is also going to have to bring chemistry and biology into the picture if he is going to assert that all of life's diversity owes its collective common ancestry to some unknown population of single-celled organisms. Because as of this moment there isn't anything in chemistry or biology which demonstrates such transformations are even possible.
Art: "I’m seeing more than a reluctance, but even a disdain here for chemistry and biology."
Nice projection. So tell us, Art2: what is the biological data which can account for the physiological and anatomical differences observed between chimps and humans?
Joseph
August 20, 2007, 08:05 AM PDT
Art, you state: Bornagain77, I don’t mean to be rude, but your lengthy diatribe has been refuted in considerable detail. Much of what I have pointed to details the problems with your claims, and there is much more that I won’t cite that also does. You are free to ignore what I have mentioned, but you should know that your claims were long ago laid to rest. Thank you for so solidly refuting my claims, NOT! Or, as KF pointed out, hurling an elephant! This has to be the most lame rebuttal I have ever seen in my life. You address no specific evidence I cite, and worse yet you cite absolutely no references to back your claims of fraud. I truly respect kf's solid refutation of your "non-complexity issue" and respect his integrity as a scientist/mathematician, but you, sir, have lost any respect you may have had from me because of your shoddy technique of discerning the truth. Remember, Art, science follows the evidence wherever it leads, no matter if it is distasteful to our biases. You, sir, are guilty of favoring your biases over evidence!
bornagain77
August 20, 2007, 07:16 AM PDT
Hi Dave, I was about to go. By focussing on the information-carrying capacity of the digital strings involved, I am in effect asking: how many yes-no questions are required to specify what is to be done? 1] If there are no y/n elements – all is necessity – then we have no capacity to convey information; we are in effect forced to use all AAAAAAAAA's and cannot encode information at all. But, once we have at least two alternatives at any one step or element, and the capacity to chain, we can store a lot of information, including where to go in our encyclopaedia of the forces, effects, properties and materials of nature to -- purpose! -- get what we want done. DNA uses 4-state elements, and proteins use 20-state ones with very interesting chemical and physical properties that act like a super-Meccano set or super Swiss-army knife. 2] Now of course in the latter case, some of these properties of the links in the chain may in part constrain which elements can be where relative to other elements, and may leave certain chain sections relatively free – e.g. one hydrophobic amino acid may substitute for another and permit the same type of folding to happen; i.e. we see here don't-care elements and degeneracy, common enough in discrete-state control systems. These may partly constrain the utilisation of the ideally available config space, and so the in-praxis equivalent yes-no chain information content per symbol, but it does not materially alter the overall picture. Indeed, the idea that we are looking at and measuring "encyclopaedia"-indexing information – which BTW is by def'n information -- tells us that life systems are MORE complex than even these measures indicate, i.e. we have a lower-bound estimate – which is quite okay for our purposes! 3] So now we come to the issue: where did such code-bearing, algorithm-implementing chains come from?
4] As my always linked discusses, in principle it is logically and physically possible that any and every digital string we have seen is the product of lucky noise. But when the resulting config spaces exhaust reasonably available probabilistic resources, then we see that functionally specified complex information is best explained as the result of agent action. Just as we do not revert to chance to explain the strings in this blog thread. 5] Additionally, once we observe WD's UPB as a reasonable limit, 1 in 10^150, and give room for the sort of freeness and constraints above, we see that a string equivalent to 1,000 to 2,000 bits or more [yes/no steps] is beyond the credible reach of chance on the gamut of this observed cosmos. That is easy to pass in this blog's threads, and – sand kicked up to blind onlookers and cloud the issue notwithstanding -- it has long since been passed in the nanotech of life. 6] Then, finally, observe that in every case of FSCI for which we directly know the causal story, we see that such originates in agency. Thus, it is very reasonable indeed to infer that in the cases where we happen not to have seen the process, such is the most likely cause, to a degree of reliability that exceeds many things we routinely bet our life and limb on. GEM of TKI
kairosfocus
August 20, 2007, 06:20 AM PDT
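The per-element capacity figures in the comment above (4-state DNA bases, 20-state protein residues, N yes/no questions for a chain of N elements) can be checked with a short Python sketch; the function names here are ours, chosen for illustration, not anything from the thread:

```python
import math

# Information-carrying capacity per element, in bits (yes/no questions),
# for an element with k distinguishable states: log2(k).
def bits_per_element(k: int) -> float:
    return math.log2(k)

# A chain of n such elements has a raw capacity of n * log2(k) bits,
# i.e. a configuration space of k**n possible states.
def chain_capacity_bits(k: int, n: int) -> float:
    return n * bits_per_element(k)

dna_bits = bits_per_element(4)       # 2.0 bits per base (A, C, G, T)
protein_bits = bits_per_element(20)  # ~4.32 bits per residue

print(dna_bits)                      # 2.0
print(round(protein_bits, 3))        # 4.322
print(chain_capacity_bits(4, 250))   # 500.0 (250 DNA bases ~ 500 bits)
```

Note this is raw carrying capacity, the "lower-bound" sense the comment insists on; don't-care elements and degeneracy would reduce the effective per-symbol figure.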
kf: On measuring information, it seems there's a twist in figuring out the information content of a gene. The problem is that the gene specifies a protein, but the protein is a reference to further information entailed by the physical properties of the protein. So the gene is analogous to references into an encyclopaedia of physics. Wouldn't that mean that one has to include the information in the references instead of just the raw bit capacity of the media containing the references? On another topic, you were of course exactly right to tell Art that everything is physics, not chemistry. Unfortunately it appears it will fall on deaf ears. Art's convictions are both false and immutably held.
DaveScot
August 20, 2007, 05:26 AM PDT
2] Art, 82: how does one measure information in this sense? Not assert (which is what the assemblage of numbers and calculations seen in this thread and in the ID literature are), mind you, but experimentally measure? This is of course a dismissal attempt, without actually addressing the issue on the merits. Relative to measuring “information,” onlookers will note that I have stressed the term information-carrying capacity, which is directly measurable in bits once we see empirically the length of a digital chain and the number of states the elements in that chain may take. This is not a bare “assert[ion]”; it is an empirically anchored measurement and/or calculation that is commonly used in work with information and communication systems. (For instance, one can measure a volume by taking certain linear measurements and calculating through well-founded formulae, not just by pouring in a liquid and observing in a measuring cylinder how much it took to fill up. Indeed, the cylinder was designed using the same sort of formulae.) If you want to make an example of direct empirical “measurement” of information-carrying capacity and the scale of a config space, set up a 16-bit ripple-carry counter based on JK flip-flops and step it through all accessible states, counting the number of clock ticks and negative edges [for, say, a good old 7476 dual JK TTL chip] till it recycles to the initial reset state; you will immediately see the exact number calculated for a 16-bit system below. You can then reconfigure the counter as a 4-decade binary-coded-decimal counter and see how you have reduced the effective carrying capacity by specifying the zone of the config space that it can access. [I doubt that we would bother with such an exercise even at High School level these days!]
Similarly, the related deduction of the number of states in the resulting configuration space is a common, real-world measure – it is the reason why an old-fashioned 16-bit address space in the old 8-bit microprocessors was of maximum length 65,536, and why moving to a 32-bit address space (and internal bus widths) on the 68000 opened this up to 4,294,967,296, but using only 20 lines on the 8088 (16-bit internal data bus, 8 external!) left this address space at 1,048,576. From that hangs much of the story of the kludgish evolution of the PC over the 1980's, to find workarounds – including on address segmentation. [Then in the 1990's, when Motorola failed to come through with the 68050 in good time, Apple went PowerPC; I wish the common hardware reference platform had won the day then. Now that Intel has fully dominated the market, Apple has gone to the Pentium. A real pity.] Had Art looked at my always linked, Appendix 1, point 9, he would have seen also excerpts from Bradley's recent discussion of the case of Cytochrome C, showing the measurement or calculation of information in it relative to the observed frequency of occurrence of specific monomers, leading to the average information per residue, via i = −∑ pi log2 pi, yielding 4.139 bits per residue, or 455 bits of Shannon information, and a config space of 1.85 x 10^137. He then further adjusts as per Yockey on observing: “. . . . Some amino acid residues (sites along chain) allow several different amino acids to be used interchangeably in cytochrome-c without loss of function, reducing i from 4.19 to 2.82 and I (i x 110) from 475 to 310 (Yockey).” This yields “Wo / W1 = 1.85 x 10^137 / 2.1 x 10^93 = 8.8 x 10^44,” which is a non-log information metric. He concluded by citing two experimental studies that produced similar low probabilities for getting to a functional protein from a racemic prebiotic soup. This cumulates, from the multitude of required proteins etc., into the DNA.
And, we have not yet addressed the issue that all of this is in a functionally integrated system architecture irreducible to mere chemistry . . . 3] Art, 82: “electricity” and mechanical operations in living cells are all matters of chemistry. My assertion that all of what we do know about cells reduces to chemistry is a spot-on, completely accurate, if very abbreviated statement. First, as a physicist: Chemistry is a function of Physics [it is in effect an effect of that property of particles we call charge], and electricity is physics, not chemistry; indeed, with quantum effects and related magnetism [which traces to relativity, BTW], physics grounds the chemistry in fact. So, should we next argue, by your logic, that Chemistry is not “real,” it is physics only . . . and so on, with philosophy and psychology lurking behind physics, as the empirical sciences rely on those factors to function, and so on to infinity in an absurd regress . . .? More to the point, you are missing the key issue, just as materialist reductionism tends to miss the mind in the midst of its fascination with the meat and the conditioning. Namely, a cell is no more reducible to a cluster of chemical reactions than a PC is reducible to the electrical currents, emfs and resistances etc. in its components. It is the specific, functional organisation of carefully designed and assembled components that makes all the difference, and it is the production of that configuration that is characteristically a known artifact of mind. This I pointed out above, and this the ancients knew, in how they distinguished material cause from cause tracing to the active intent of agents, who USE the materials and forces of nature to achieve their ends. And, that is also what DS pointed out. 4] Dismissive aside to BA: Again, sadly, Art resorts to dismissal by elephant hurling, rather than addressing the merits. OOPS: WD's June 07 paper is here. GEM of TKI
kairosfocus
August 20, 2007, 04:06 AM PDT
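The Shannon formula quoted in the comment above, i = −∑ pi log2 pi, is easy to sketch in Python. Bradley's 4.139 bits per residue comes from the observed (non-uniform) amino acid frequencies, which the comment does not list, so the sketch below uses the uniform 20-state case as an illustrative assumption; that ceiling works out to log2(20) ≈ 4.32 bits:

```python
import math

def shannon_bits_per_symbol(freqs):
    """Average information per symbol: i = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in freqs if p > 0)

# Illustrative assumption: a uniform distribution over the 20 amino acids.
# Observed, non-uniform frequencies (as in Bradley's figure) give less.
uniform = [1 / 20] * 20
i_uniform = shannon_bits_per_symbol(uniform)

# Total for a 110-residue chain, as in the cytochrome-c example: i * 110.
print(round(i_uniform, 3))        # 4.322
print(round(i_uniform * 110, 1))  # 475.4
```

Any non-uniform frequency list passed to `shannon_bits_per_symbol` will come in below the uniform value, which is how 4.139 bits per residue falls under the 4.32-bit ceiling.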
Okay: Thank God, it was "only" a side-swipe for Jamaica. DV, later this morning I'll call my folks and see how they fared. Now, my family over in Cayman are under the gun, but it is an even more distant side-swipe that they most likely face. [BTW, the just linked has in it my set of rules of thumb on where hurricanes are likely to go in 1 – 3 days . . . of course, monitor weather and disaster officials and heed their counsel, noting that any probabilistic or likelihood estimate -- apart from 1 or 0 -- is by definition an index of ignorance to one extent or another, as modified by our leaning, on whatever grounds, towards occurrence or non-occurrence. This of course bridges into our discussion below too.] Now, on points of note: 1] BA, 83: Is it ok if I quote what you wrote [on: “foundational math for the protein specificity that gives a protein its inherent and obvious complexity”] in the future . . . Of course you can quote what I wrote, but it is a simple matter to do the math yourself and, IMHCO, far more effective. For instance, DNA uses 4-state elements in a chain that runs from about 500k – 1 mn up to 3 – 4 bn. For the first item, X, that is 4 states. For each of those, the second item, Y, can take up 4 values as well, so X-Y has 4 * 4 = 16 possible states. Chaining, for N elements, we have 4^N possible configurations. Using base-10 logs, log [4^N] = N log 4 = say ABCD.EFGH = ABCD + 0.EFGH. The easy way for big numbers is to subtract off the power of ten, ABCD, and report it as 10^ABCD, then multiply by antilog 0.EFGH, i.e. say LMNO. So 4^N ~ LMNO*10^ABCD. This defines the scale of the configuration space, and illustrates the power of scientific notation for compactly expressing large and small numbers.
Once we have a space equivalent to more than about 500 – 1,000 bits of information-carrying capacity [think of a 500-bit, 1,000-bit or 2,000-bit memory, say a one-bit-wide slice off a typical eight-bit or byte-wide RAM chip], we are dealing with 10^150 to 10^300 to 10^600 possible states. With 10^600 states, it would be hard to argue that any reasonable island or archipelago of functional configurations is accessible to a random-walk or exhaustive search that begins at any arbitrary initial point. Also, one DNA 4-state element encodes up to 2 bits of information, so chains of 250 to 500 to 1,000 DNA elements correspond more or less to 500 to 1,000 to 2,000 bits of information storage capacity. [WD discusses the effect of, among other things, the fractional occurrence of given states of the elements in the chain in his latest paper here, i.e. a confining to a specified zone in the config space.] Now, cells use a nanotechnology that implements step-by-step algorithms to create and use the macromolecules of life, embedding the blueprint as a part of the system, i.e. DNA. So, in aggregate, we are looking at vastly more possible configurations than even 10^600 or so. This propagates into proteins, especially enzymes, especially if we look at the required large cluster of such required for life. So if even 90% of the chain in these molecules is relatively unconfined as to specific residue, we are looking at a large number of 20-state elements in aggregate, and again that puts us rapidly beyond the reach of reasonable random-walk or exhaustive searches. Then, thirdly, we have to address the issue of chirality, which leads to a very similar result for the chains, which are L- or R- for proteins and nucleic acid polymers respectively. Just 2 to 3 or so typical 300-residue proteins gets us close to the upper limit for any reasonable random-walk search or search by exhaustion. You will note the attempted dismissal without addressing the issues on the merits. This I turn to in a moment . . .
kairosfocus
August 20, 2007, 03:47 AM PDT
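The log-and-antilog recipe in the comment above (N log 4 = ABCD.EFGH, report antilog 0.EFGH times 10^ABCD) can be written directly in Python. The function name `sci_from_logs` is ours, for illustration only:

```python
import math

def sci_from_logs(base: int, n: int):
    """Approximate base**n as (mantissa, exponent):
    n * log10(base) = ABCD.EFGH, so base**n ~ antilog(0.EFGH) * 10^ABCD."""
    x = n * math.log10(base)
    exponent = math.floor(x)
    mantissa = 10 ** (x - exponent)
    return mantissa, exponent

# A 250-element DNA chain (~500 bits of capacity): 4^250 configurations.
mant, exp = sci_from_logs(4, 250)
print(f"4^250 ~ {mant:.2f} * 10^{exp}")   # ~3.27 * 10^150
```

Python's exact big-integer arithmetic gives a cross-check: `4 ** 250` has 151 digits, so the exponent 150 is correct.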
kairosfocus, Hope and pray the hurricane does you no harm. I just want to thank you for laying out the foundational math for the protein specificity that gives a protein its inherent and obvious complexity. Is it ok if I quote what you wrote in the future if I need to defend this point against Darwinists' "just so" stories?
bornagain77
August 19, 2007, 10:23 AM PDT
If it's just chemistry, it should be pretty easy to make a cell from scratch. Right?
tribune7
August 19, 2007, 07:40 AM PDT
Art sez: 3. Every process in a cell that has been studied has been found to be a matter of chemistry. Not JUST chemistry. There is more at work than that. However, I do understand why you would want people to think that it is just chemistry. That is the only way to simplify living organisms. Too bad reality refutes that premise. There is also command and control of the chemical reactions. For example, DNA not only unzips to replicate, but different segments also unzip to form other molecules that are then directed to where they are needed. Also, the age of the Earth can only be determined once we figure out HOW the Earth was formed. And DNA is an information-rich molecule- that is, the DNA of living organisms. That will never change.
Joseph
August 19, 2007, 06:59 AM PDT
2] DS, 75: Electrical, chemical, and mechanical forces are all at play in cellular processes. Information storage, retrieval, modification, and translation is a process in a class by itself, with electrical, chemical, and mechanical structures and processes serving as media and mechanisms. Again, an astute comment. It is not the chemistry and physics of a motherboard or the CPU and memory chips in it that defines what it is and is about, but the information system architecture, which is here physically expressed and implemented as algorithms are executed based on coding. All of these are, in our certain knowledge of cases where we see the causal process in action, rooted in agency, not chance + necessity only, and on the same statistical thermodynamics grounds that we do not expect tornadoes in junkyards to throw out functioning 747 jetliners [cf my always linked, App 1, point 6, for why] – nb the new generation that looks like giving the latest Airbus a run for the money. So, relative to what we do know, when we see even more sophisticated cases, that invites the very reasonable inference that the cases are produced by similar but more expert agents. 3] there are no such things as “Fox protocells” or anyone else’s protocells. This appears to be urban legend concocted by armchair abiogenesis pundits. Apparently, this was a term used in the early discussions some 30 years ago, from TBO's discussion. The term protocell has plainly turned out to be premature. Cf excerpts above. 4] Art, 73: It’s telling that you equate “low information” with “accident”. Direct experimental determinations plainly show that enzymes are low-information. Not in aggregate, and not in the context in which they function; cf. above. When config spaces are relatively small compared to probabilistic resources, random walks can hit targets reasonably plausibly. That is why low-/high- information-carrying capacity [i.e.
implied in configuration spaces: one 2-state element or bit has 2 states, 2 have 4, but 100 have ~1.27*10^30] is a key issue. This I will examine below. 5] Fox protocells: No, I am not thinking of linear or other links from so-called Fox protocells to anything. I am pointing out that they are plainly and utterly irrelevant - since 25 – 30+ years ago -- and should not have been raised at all. The misnomer, protocells, that you use is telling in this regard. 6] You focus on experiments not yet done. I focus on what we know – and we know that direct experimental measurement has established that enzymes are low-information moieties. First, no experiment has, or could ever, establish that monochiral enzymes [whatever may obtain for folding regions etc.], which are the core of the functioning of the cell, are low-information structures, given what we already know about them. For, just simply chaining 300 or so L-monomers from a field of 80 to several hundreds of possible candidates goes well beyond the Dembski-type bound. Let's use the 40 from 80 available to TBO in the mid 80's as a reasonable estimate. (I won't bother on the achiral nature of glycine, as 39 or 40 is immaterial.) [½]^300 ~ 1 in 2.04*10^90, and we are definitely dealing with 100% L-type chains here. (As a yardstick for onlookers: There may be 10^80 atoms in our observed universe, so to get something that is of order ten billion times more rare in its config space is far harder than marking one atom in the universe and then picking it at random first shot; that is an example of a unique specification within a large config space in action.) Something that isolated in a config space of plausibly available racemic amino acids – just a random 300-length amino acid polymer with “all” L-monomers! - is itself a high-information constraint. We would have to do this hundreds of times over for a cell.
Then, getting to sequencing the amino acid chain for bio-function impresses a lot more information, directly and by implication of its role in the cell's nanotechnology. Cf my discussion on clumping and configuring the microjet's functional parts so that they work, in App 1, point 6 of my always linked. Nope: the idea that enzymes are “low-information” structures was never viable, save as a way of thinking made plausible within a questionable paradigm. 7] this means that DNA is also low-information. This, being premised on demonstrably false and/or highly questionable and misleading premises, is also plainly false and highly misleading. (It also fails the common-sense test, once we simply estimate the carrying capacity of the DNA chains that are observed.) Okay, I was busy the past few days, so it took time to catch back up. GEM of TKI
kairosfocus
August 19, 2007, 03:58 AM PDT
Hi Patrick, BA and Art: Maybe it would be helpful to refocus a bit on the article PaV was commenting on:
It was GK Chesterton who famously quipped that "when people stop believing in God, they don't believe in nothing - they believe in anything." So it has proved. But how did it happen? The big mistake is to see religion and reason as polar opposites. They are not. In fact, reason is intrinsic to the Judeo-Christian tradition. The Bible provides a picture of a rational Creator and an orderly universe - which, accordingly, provided the template for the exercise of reason and the development of science. Dawkins pours particular scorn on the Biblical miracles which don't correspond to scientific reality. [Snip here; Bible-believing Christians see God as creating an orderly world in which science can operate, but with the option to intervene beyond those usual patterns for good reasons of his own . . .] The heart of the Judeo-Christian tradition is the belief in the concept of truth, which gives rise to reason. But our postreligious age has proclaimed that there is no such thing as objective truth, only what is "true for me". That is because our society won't put up with anything which gets in the way of 'what I want'. How we feel about things has become all-important. So reason has been knocked off its perch by emotion, and thinking has been replaced by feelings. This has meant our society can no longer distinguish between truth and lies by using evidence and logic. And this collapse of objective truth has, in turn, come to undermine science itself which is playing a role for which it is not fitted.
Sobering thoughts, and well worth following up. They also explain why it is that we see so strong a clinging to straws floating in the alleged prebiotic soup, to shore up the worldview of evolutionary materialism, which is increasingly challenged to address four big bangs: origin of the fine tuned observed cosmos, origin of life within that same observed cosmos, origin of body plan level diversity as observed here on our planet, and origin of a credible, truth-apprehending mind [including conscience and morals]. Now on a few specifics: 1] Patrick, 76, to Art: I’m waiting for you to tell me exactly when and where someone considered to be part of the ID movement made the prediction that individual proteins would contain larger amounts of information than previously estimated. I have highlighted the key operative word. For, proteins do not exist in isolation, but in complex, integrated, algorithmic, code-based information systems that function based on genetic and epigenetic structures in the cell. When we pull those strands together we see a very different picture: --> DNA is code-based and serves as an information store, mediated through RNA, enzymes and ribosomes etc., through a step by step read process that assembles proteins, which are then folded, transported to the right location, and put to work. All of this is plainly information-intensive, including information not directly coded in the DNA itself but in the structures and functional architecture of existing cells. [DNA by itself is inert.] --> DNA strands go down to about 1 million base pairs for independent living organisms and to 500k or so for parasitic ones, with functional disintegration at 360k or so. That is, we are looking at minimal config spaces of order 4^360,000 ~ 3.95*10^216,741. Even if we say that only 10% of that is “truly” functional, we are still well above the reach of random walk searches in any generous prebiotic soup in oceans, ponds or hydrothermal vents or comets, etc., as 4^36,000 ~ 1.44*10^21,674.
--> Even on proteins, the problem is that we have not just one but hundreds in the simplest plausible cell, let's say 100, with a reasonable length of 300. Folding regions are often relatively insensitive to shifts in amino acids, so let's take 10% as the effective length, towards making a lower-bound estimate for their information content, for the sake of argument on Art's premises. --> We would then need to account for 10% of 100 * 300 = 3,000 20-state elements. 20^3,000 ~ 1.23*10^3,903, and these would be coded for in three-base codons, in the right relative places in the strand, i.e. 9,000 DNA base-pairs. 4^9,000 ~ 3.47*10^5,418. In each and every case the resulting configuration space is well beyond the reasonable upper bounds for random searches to be plausible on the scope of the observed cosmos, much less a planetary body within it. [And, I have not yet got into the selecting and sorting work needed to pick amino acids and nucleic acids of the right chirality out of the many other potential monomers in the environment. Cf. TBO's discussion. Not to mention the challenges of creating the relevant monomers under plausible prebiotic conditions, especially certain nucleic acids, as Shapiro so ably discussed in his recent Sci Am article.] --> In short, the points Art has been riding are a distraction at best. Proteinoids forming microspheres have little to do with creating the right information-rich polymers in the right configs to form functioning cells, and the information content of proteins in aggregate is well beyond the reach of the sort of random searches that would obtain under chance + necessity only prebiotic scenarios. . . .kairosfocus
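The configuration-space exponents quoted in the comment above can be checked with a few lines of arithmetic. A minimal sketch in Python (the genome lengths, the 10% functional fraction, and the 20-letter amino acid alphabet are taken from the comment as given, not independently verified):

```python
import math

def log10_configs(states: int, length: int) -> float:
    """Base-10 logarithm of the number of possible sequences of
    `length` elements, each of which can take `states` values."""
    return length * math.log10(states)

# 360,000-base DNA strand, 4 states per base: ~10^216,741
assert int(log10_configs(4, 360_000)) == 216_741
# 10% of that strand length: ~10^21,674
assert int(log10_configs(4, 36_000)) == 21_674
# 3,000 amino acid positions, 20 states each: ~10^3,903
assert int(log10_configs(20, 3_000)) == 3_903
# 9,000 coding DNA base pairs: ~10^5,418
assert int(log10_configs(4, 9_000)) == 5_418
```

Working in logarithms avoids constructing the astronomically large integers themselves; the integer parts match the exponents cited in the comment.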
August 19, 2007, 03:43 AM PDT
Art, What gives a protein its "complexity" is its specificity of requirement! This is a somewhat detailed response, but you will clearly see the point I am making in regards to protein specificity. (What evidence is found for the first life on earth?) We will look at the evidence for the first appearance of life on earth. As well we will also look at the chemical activity of the first life on earth. Once again, the presumption of naturalistic blind chance being the only reasonable cause must be dealt with. It is commonly presumed in many grade school textbooks that life slowly arose in a primordial ocean of pre-biotic soup. Yet, there is absolutely no hard evidence, such as chemical signatures in the geologic record, indicating that an ocean of this pre-biotic soup ever existed. The hard physical evidence scientists have discovered in the geologic record is stunning in its support of the anthropic hypothesis. The oldest sedimentary rocks on earth, known to science, originated underwater (and thus in relatively cool environs) 3.86 billion years ago. Those sediments, which are exposed at Isua in southwestern Greenland, also contain the earliest chemical evidence (fingerprint) of “photosynthetic” life [Nov. 7, 1996, Nature]. This evidence has been fought by naturalists, since it is totally contrary to their evolutionary theory. Yet, Danish scientists were able to bring forth another line of geological evidence to substantiate the primary line of geological evidence for photo-synthetic life in the earth’s earliest known sedimentary rocks (“Indications of Oxygenic Photosynthesis,” Earth and Planetary Science Letters 6907 (2003)). Thus we have two lines of hard conclusive evidence for photo-synthetic life in the oldest known sedimentary rocks ever found by scientists on earth! The simplest photosynthetic bacterial life on earth is exceedingly complex, too complex to happen by chance even if the primeval oceans had been full of pre-biotic soup.
Thus, naturalists try to suggest pan-spermia (the theory that pre-biotic amino acids, or life itself, came to earth from outer-space on comets) to account for this sudden appearance of life on earth. This theory has several problems. One problem is that astronomers, using spectral analysis, have not found any vast reservoirs of biological molecules anywhere they have looked in the universe. Another problem is, even if comets were nothing but pre-biotic amino acid snowballs, how are the amino acids going to molecularly survive the furnace-like temperatures generated when the comet crashes into the earth? If the pre-biotic molecules were already a life-form on the comet, how could this imagined life-form survive the extremely harsh environment of space for many millions of years, not to mention the fiery crash into the earth? Did this imagined super-cell wear a cape like superman? The first actual fossilized cells scientists have been able to recover in the fossil record are 3.5 billion year old photosynthetic cyano(blue-green)bacteria, from western Australia, which look amazingly similar to a particular type of cyano-bacteria that are still alive today. The smallest cyano-bacterium known to science has hundreds of millions of individual atomic molecules (not counting water molecules), divided into nearly a thousand different species of atomic molecules; and a genome (DNA sequence) of 1.8 million base pairs, with over a million individual complex protein molecules which are divided into hundreds of different kinds of proteins. The simplest of all bacteria known in science, which is able to live independent of a more complex host organism, is Candidatus Pelagibacter ubique, which has a DNA sequence of 1,308,759 base pairs. It also has over a million individual complex protein molecules which are divided into several hundred separate and distinct protein types.
The complexity found in the simplest bacterium known to science makes the complexity of any man-made machine look like child's play. As stated by Geneticist Michael Denton PhD, “Although the tiniest living things known to science, bacterial cells, are incredibly small (10^-12 grams), each is a veritable micro-miniaturized factory containing thousands of elegantly designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machine built by man and absolutely without parallel in the non-living world”. So, as you can see, there simply is no simple life on earth as naturalism had presumed - even the well known single celled amoeba has the complexity of the city of London and reproduces that complexity in only 20 minutes. Here are a couple of quotes for the complexity found in any biological system, including simple bacteria, by two experts in biology: "Most biological reactions are chain reactions. To interact in a chain, these precisely built molecules must fit together most precisely, as the cog wheels of a Swiss watch do. But if this is so, then how can such a system develop at all? For if any one of the specific cog wheels in these chains is changed, then the whole system must simply become inoperative. Saying it can be improved by random mutation of one link, is like saying you could improve a Swiss watch by dropping it and thus bending one of its wheels or axis. To get a better watch, all the wheels must be changed simultaneously to make a good fit again." Albert Szent-Györgyi von Nagyrapolt (Nobel prize for Medicine in 1937). "Drive in Living Matter to Perfect Itself," Synthesis I, Vol. 1, No. 1, p. 
18 (1977) “Each cell with genetic information, from bacteria to man, consists of artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction and a capacity not equaled in any of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours" Geneticist Michael Denton PhD. To give an idea how impossible “simple” life is for naturalistic blind chance, Sir Fred Hoyle calculated the chance of obtaining the required set of enzymes for just one of any of the numerous types of “simple” bacterial life found on the early earth to be one in 10^40,000 (that is a one with 40 thousand zeros to the right). He compared the random emergence of the simplest bacterium on earth to the likelihood “a tornado sweeping through a junkyard might assemble a Boeing 747 therein”. Sir Fred Hoyle also compared the chance of obtaining just one single functioning protein (out of the over one million protein molecules needed for that simplest cell), by chance combinations of amino acids, to a solar system packed full of blind men solving Rubik’s Cube simultaneously. The simplest bacteria ever found on earth is constructed with over a million protein molecules. Protein molecules are made from one dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins. These one dimensional sequences of amino acids fold into complex three-dimensional structures. The proteins vary in length of sequences of amino acids. The average sequence of a typical protein is about 300 to 400 amino acids long. Yet many crucial proteins are thousands of amino acids long. Proteins do their work on the atomic scale. 
Therefore, proteins must be able to identify and precisely manipulate and interrelate with the many differently, and specifically, shaped atoms, atomic molecules and protein molecules at the same time to accomplish the construction, metabolism, structure and maintenance of the cell. Proteins are required to have the precisely correct shape to accomplish their specific function or functions in the cell. More than a slight variation in the precisely correct shape of the protein molecule type will be fatal to the life of the cell. It turns out there is some tolerance for error in the sequence of L-amino acids that make up some of the less crucial protein molecule types. These errors can occur without adversely affecting the precisely required shape of the protein molecule type. This would seem to give some wiggle room to the naturalists, but as the following quote indicates this wiggle room is an illusion. "A common rebuttal is that not all amino acids in organic molecules must be strictly sequenced. One can destroy or randomly replace about 1 amino acid out of 100 without doing damage to the function or shape of the molecule. This is vital since life necessarily exists in a "sequence—disrupting" radiation environment. However, this is equivalent to writing a computer program that will tolerate the destruction of 1 statement of code out of 1001. In other words, this error-handling ability of organic molecules constitutes a far more unlikely occurrence than strictly sequenced molecules." Dr. Hugh Ross PhD. It is easily demonstrated mathematically that the entire universe does not even begin to come close to being old enough, nor large enough, to randomly generate just one small but precisely sequenced 100 amino acid protein (out of the over one million interdependent protein molecules of longer sequences that would be required to match the sequences of their particular protein types) in that very first living bacteria. 
If all combinations of the 20 L-amino acids that are used in constructing proteins are equally possible, then there are 20^100 ≈ 1.3 x 10^130 possible amino acid sequences in proteins being composed of 100 amino acids. This impossibility, of finding even one “required” specifically sequenced protein, would still be true even if amino acids had a tendency to chemically bond with each other, which they don’t despite over fifty years of experimentation trying to get amino acids to bond naturally (The odds of a single 100 amino acid protein overcoming the impossibilities of chemical bonding and forming spontaneously have been calculated at less than 1 in 10^125 (Meyer, Evidence for Design, pg. 75)). The staggering impossibility found for the universe ever generating a “required” specifically sequenced 100 amino acid protein by chance would still be true even if we allowed that the entire universe, all 10^80 sub-atomic particles of it, were nothing but groups of 100 freely bonding amino acids, and we then tried a trillion unique combinations per second for all those 100 amino acid groups for 100 billion years! Even after 100 billion years of trying a trillion unique combinations per second, we still would have made only one billion-trillionth of the entire total combinations possible for a 100 amino acid protein during that 100 billion years of trying! Even a child knows you cannot put any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place! The simplest forms of life ever found on earth are exceedingly far more complicated jigsaw puzzles than any of the puzzles man has ever made. Yet to believe a naturalistic theory we would have to believe that this tremendously complex puzzle of millions of precisely shaped, and placed, protein molecules “just happened” to overcome the impossible hurdles of chemical bonding and probability and put itself together into the sheer wonder of immense complexity that we find in the cell. 
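As a rough check on the arithmetic in the paragraph above, the sketch below (using the comment's own hypothetical assumptions of 10^80 particle-groups, each trying 10^12 unique sequences per second for 100 billion years) computes what fraction of the 20^100 sequence space such a search could cover; the exact prefactor depends on the timing assumptions, but the order of magnitude is robust:

```python
import math

# Number of distinct 100-residue chains built from 20 amino acids
log10_sequences = 100 * math.log10(20)       # ~130.1, i.e. ~1.3 x 10^130

# Hypothetical exhaustive search, on the comment's assumptions
seconds = 100e9 * 365.25 * 24 * 3600         # 100 billion years, in seconds
log10_tried = 80 + 12 + math.log10(seconds)  # 10^80 groups x 10^12/s x time

# Fraction of the sequence space that could be examined
log10_fraction = log10_tried - log10_sequences
assert log10_fraction < -19                  # under one part in 10^19
```

Everything is done in base-10 logarithms, so the result reads directly as an exponent: the searched fraction comes out near 10^-20, in the same range as the "billion-trillionth" figure quoted in the comment.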
Instead of us just looking at the probability of a single protein molecule occurring (a solar system full of blind men solving the Rubik’s Cube simultaneously), let’s also look at the complexity that goes into crafting the shape of just one protein molecule. Complexity will give us a better indication of whether a protein molecule is, indeed, the handiwork of an infinitely powerful Creator. In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, that is 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it is estimated it will take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape. In real life, the protein folds into its final shape in a fraction of a second! The computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. That is the complexity found for JUST ONE “simple” protein. It is estimated, on the total number of known life forms on earth, that there are some 50 billion different types of unique proteins today. It is very possible the domain of the protein world may hold many trillions more completely distinct and different types of proteins. The simplest bacterium known to man has millions of protein molecules divided into, at bare minimum, several hundred distinct protein types. These millions of precisely shaped protein molecules are interwoven into the final structure of the bacterium. 
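The "33 million times faster" figure above can be reproduced to order of magnitude from the numbers given in the paragraph: one simulated year of computing versus a real fold time of roughly one second. The fold time is not stated precisely ("a fraction of a second"), so the 1-second value below is an assumption for illustration:

```python
# One year of Blue Gene computing, in seconds
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7

# Assumed real-world fold time (hypothetical; "a fraction of a second")
assumed_fold_time = 1.0                  # seconds

# Required speedup for the simulation to keep pace with reality
speedup_needed = seconds_per_year / assumed_fold_time
assert 3.0e7 < speedup_needed < 3.5e7    # tens of millions, the order quoted
```

With a fold time somewhat under one second, the ratio lands near the "at least 33 million" figure in the comment.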
Numerous times specific proteins in a distinct protein type will have very specific modifications to a few of the amino acids, in their sequence, in order for them to more precisely accomplish their specific function or functions in the overall parent structure of their protein type. To think naturalists can account for such complexity by saying it “happened by chance” should be the very definition of “absurd” we find in dictionaries. Naturalists have absolutely no answers for how this complexity arose in the first living cell unless, of course, you can take their imagination as hard evidence. Yet the “real” evidence scientists have found overwhelmingly supports the anthropic hypothesis once again. It should be remembered that naturalism postulated a very simple "first cell". Yet the simplest cell scientists have been able to find, or to even realistically theorize about, is vastly more complex than any machine man has ever made through concerted effort !! What makes matters much worse for naturalists is that naturalists try to assert that proteins of one function can easily mutate into other proteins of completely different functions by pure chance. Yet once again the empirical evidence we now have betrays the naturalists. Individual proteins have been experimentally proven to quickly lose their function in the cell with random point mutations. What are the odds of any functional protein in a cell mutating into any other functional folded protein, of very questionable value, by pure chance? “From actual experimental results it can easily be calculated that the odds of finding a folded protein (by random point mutations to an existing protein) are about 1 in 10 to the 65 power (Sauer, MIT). To put this fantastic number in perspective imagine that someone hid a grain of sand, marked with a tiny 'X', somewhere in the Sahara Desert. 
After wandering blindfolded for several years in the desert you reach down, pick up a grain of sand, take off your blindfold, and find it has a tiny 'X'. Suspicious, you give the grain of sand to someone to hide again, again you wander blindfolded into the desert, bend down, and the grain you pick up again has an 'X'. A third time you repeat this action and a third time you find the marked grain. The odds of finding that marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure (from chance transmutation of an existing functional protein structure). Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed.” Michael J. Behe, The Weekly Standard, June 7, 1999, Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other “Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed – along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at the Boston University Center for Advanced Biotechnology and Biomedical Engineering) From 3.8 to 0.6 billion years ago photosynthetic bacteria, and to a lesser degree sulfate-reducing bacteria, dominated the geologic and fossil record (that’s over 80% of the entire time life has existed on earth). The geologic and fossil record also reveals that during this time a large portion of these very first bacterial life-forms lived in complex symbiotic (mutually beneficial) colonies called Stromatolites. 
Stromatolites are rock-like structures that the photo-synthetic bacteria built up over many years (much like coral reefs are slowly built up over many years by the tiny creatures called corals). Although Stromatolites are not nearly as widespread as they once were, they are still around today in a few sparse places like Shark Bay, Australia. Contrary to what naturalistic thought would expect, these very first photosynthetic bacteria scientists find in the geologic and fossil record are shown to have been preparing the earth for more advanced life to appear from the very start of their existence by reducing the greenhouse gases of earth’s early atmosphere and producing the necessary oxygen for higher life-forms to exist. Photosynthetic bacteria slowly built the oxygen up in the earth’s atmosphere by removing the carbon-dioxide (and other greenhouse gases) from the atmosphere; separated the carbon from the oxygen; then released the oxygen back into the atmosphere (and into the earth’s ocean & crust) while they retained the carbon. Interestingly, the gradual removal of greenhouse gases corresponds exactly to the gradual 15% increase of light and heat coming from the sun during that time (Ross; PhD. Astrophysics; Creation as Science 2006). This “lucky” correspondence of the slow increase of heat from the sun with the same perfectly timed slow removal of greenhouse gases from the earth’s atmosphere was absolutely necessary for the bacteria to continue to live to do their work of preparing the earth for more advanced life to appear. Bacteria obviously depended on the temperature of the earth to remain relatively stable during the billions of years they prepared the earth for higher life forms to appear. 
More interesting still, the byproducts of greenhouse gas removal by these early bacteria are limestone, marble, gypsum, phosphates, sand, and to a lesser extent, coal, oil and natural gas (note; though some coal, oil and natural gas are from this early era of bacterial life, most coal, oil and natural gas deposits originated on earth after the Cambrian explosion of higher life forms some 540 million years ago). These natural resources produced by these early photosynthetic bacteria are very useful to modern civilizations. Interestingly, while the photo-synthetic bacteria were reducing greenhouse gases and producing natural resources that would be of benefit to modern man, the sulfate-reducing bacteria were also producing their own natural resources that would be very useful to modern man. Sulfate-reducing bacteria helped prepare the earth for advanced life by “detoxifying” the primeval earth and oceans of “poisonous” levels of heavy metals while depositing them as relatively inert metal ore deposits (iron, zinc, magnesium, lead, etc.). To this day, sulfate-reducing bacteria maintain an essential minimal level of these metals in the ecosystem that are high enough so as to be available to the biological systems of the higher life forms that need them, yet low enough so as not to be poisonous to those very same higher life forms. Needless to say, the metal ores deposited by these sulfate-reducing bacteria in the early history of the earth’s geologic record are indispensable to man’s rise above the stone age to modern civilization. Yet even more evidence has been found tying other early types of bacterial life to the anthropic hypothesis. Many different types of bacteria in earth’s early history lived in complex symbiotic (mutually beneficial) relationships in what are called cryptogamic colonies on the earth’s primeval continents. 
These colonies “dramatically” transformed the “primeval land” into “nutrient filled soils” that were receptive for future advanced vegetation to appear. Naturalism has no answers for why all these different bacterial types and colonies found in the geologic and fossil record would start working in precise concert with each other preparing the earth for future life to appear. -// Since oxygen readily reacts and bonds with almost all of the solid elements making up the earth itself, it took photosynthetic bacteria over 3 billion years before the earth’s crust and mantle was saturated with enough oxygen to allow an excess of oxygen to be built up in the atmosphere. Once this was accomplished, higher life forms could finally be introduced on earth. Moreover, scientists find the rise in oxygen percentages in the geologic record to correspond exactly to the sudden appearance of large animals in the fossil record that depended on those particular percentages of oxygen. The geologic record shows a 10% oxygen level at the time of the Cambrian explosion of higher life-forms in the fossil record some 540 million years ago. The geologic record also shows a strange and very quick rise from the 17% oxygen level, of 50 million years ago, to a 23% oxygen level 40 million years ago (Falkowski 2005). This strange rise in oxygen levels corresponds exactly to the appearance of large mammals in the fossil record who depend on high oxygen levels. Interestingly, for the last 10 million years the oxygen percentage has been holding steady around 21%. 21% happens to be the exact percentage that is of maximum biological utility for humans to exist. If the oxygen level were only a few percentage points lower, large mammals would become severely hampered in their ability to metabolize energy; if only three to four percentage points higher, there would be uncontrollable outbreaks of fire across the land. 
Because of this basic chemical requirement of photosynthetic bacterial life establishing and helping maintain the proper oxygen levels for higher life forms on any earth-like planet, this gives us further reason to believe the earth is extremely unique in its ability to support intelligent life in this universe. All these preliminary studies of early life on earth fall right in line with the anthropic hypothesis and have no explanation from any naturalistic theory based on blind chance as to why the very first bacterial life found in the fossil record would suddenly, from the very start of their appearance on earth, start working in precise harmony with each other to prepare the earth for future life to appear. Nor can naturalism explain why, once the bacteria had helped prepare the earth for higher life forms, they continue to work in precise harmony with each other to help maintain the proper balanced conditions that are of primary benefit for the complex life that is above them. -// Though it is impossible to reconstruct the DNA of these earliest bacteria fossils, that scientists find in the fossil record, and compare them to their descendants of today, there are many ancient bacterium fossils recovered from salt crystals and amber crystals that have been compared to their living descendants of today. Some bacterium fossils, in salt crystals, dating back as far as 250 million years have had their DNA recovered, sequenced and compared to their offspring of today (Vreeland RH, 2000 Nature). Scientists accomplished this using a technique called polymerase chain reaction (PCR). To the disbelieving shock of many scientists, both ancient and modern bacteria were found to have almost exactly the same DNA sequence. Thus the most solid scientific evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level to the DNA of bacteria. 
According to the prevailing naturalistic evolutionary dogma, there "HAS" to be “significant mutational drift” to the DNA of bacteria within 250 million years, even though the morphology (shape) of the bacteria could have remained the same. In spite of their preconceived naturalistic bias, scientists find there is no detectable "drift" from ancient DNA according to the best evidence we have so far. I find it interesting that the naturalistic theory of evolution "expects" and even "demands" that there be a significant amount of drift from the DNA of ancient bacteria while the morphology is expected to remain exactly the same with its descendants. Alas for the naturalists once again, the hard evidence of ancient DNA has fallen in line with the anthropic hypothesis. Many times naturalists will offer “conclusive” proof for evolution by showing bacteria that have become resistant to a certain antibiotic such as penicillin. When penicillin was first discovered, all the gram positive cocci were susceptible to it. Now 40% of the bacteria Strep pneumo are resistant. Yet, the mutation to DNA that makes Strep pneumo resistant to penicillin results in the loss of a protein function for the bacteria (called, in the usual utilitarian manner, penicillin-binding-protein). A mutation occurred in the DNA leading to a bacterial protein that no longer interacts with the antibiotic and the bacteria survive. Although they survive well in this environment, it has come at a cost. The altered protein is less efficient in performing its normal function. In an environment without antibiotics, the non-mutant bacteria are more likely to survive because the mutant bacteria cannot compete as well. So as you can see, the bacteria did adapt, but it came at a loss of function in a protein of the bacteria, loss of genetic information in the DNA of the bacteria, and it also lessened the bacteria's overall fitness for survival. 
Scientifically, it is better to say that the bacteria devolved in accordance with the principle of genetic entropy, instead of evolved against this primary principle of how “poly-constrained information” will act in organisms (Sanford; Genetic Entropy 2005). As well, all other observed adaptations of bacteria to “new” environments have been proven to be the result of such degrading of preexisting molecular abilities. Sometimes a complex adaptation in bacteria is exhibited by naturalists (Hall, gene knockout experiments) that defies tremendous mathematical odds. Yet far from confirming evolution as they wish it would, the demonstration of a complex adaptation of a preexisting protein actually indicates another higher level of complexity in the genetic code of the bacteria that somehow found (calculated) how to adapt a preexisting protein with the very same ability as the protein that was knocked out to the new situation (Behe, evidence for design pg. 138). To make matters worse for the naturalists, the complex adaptation of the protein still obeys the principle of genetic entropy for the bacteria, since the adapted bacteria has less overall functionality than the original bacteria did. Thus, even the naturalists' supposed strongest proof for evolution in bacteria is found to be wanting, since it still has not violated the principle of genetic entropy. Even in the most famous cases of adaptations in humans, such as lactase persistence, the sickle cell/malaria adaptation (Behe, The Edge of Evolution 2007), and immune system responses, genetic entropy is still being obeyed when looked at on the level of overall functional genetic information. For naturalists to “conclusively prove” evolution they would have to clearly demonstrate a gain in genetic information. Naturalists have not done so, nor will they ever. 
The overall interrelated complexity of the integrated whole of a life-form simply will not allow meaningful information to arise in its DNA by chance alone.

“But in all the reading I’ve done in the life-sciences literature, I’ve never found a mutation that added information… All point mutations that have been studied on the molecular level turn out to reduce the genetic information and not increase it.” Lee Spetner (Ph.D. Physics, MIT)

“There is no known law of nature, no known process and no known sequence of events which can cause information to originate by itself in matter.” Werner Gitt, In the Beginning was Information, 1997, p. 106. (Dr. Gitt was a Director at the German Federal Institute of Physics and Technology.) His challenge to scientifically falsify this statement has remained unanswered since it was first published.

Naturalists also claim stunning proof for evolution because bacteria can quickly adapt to detoxify new man-made materials, such as nylon and polystyrene. Yet once again, when carefully looked at on the molecular level, the bacteria still have not demonstrated a gain in genetic information; i.e., though they adapt, they still degrade preexisting molecular abilities in order to do so (genetic entropy). Indeed, the adaptation is not nearly as novel as naturalists think, for the bacteria are still only detoxifying the earth of toxins, as they have been doing for billions of years. Even though naturalists claim this is something brand new that should be considered stunning proof for evolution, I’m not nearly as impressed with their stunning proof as they think I should be (Answers in Genesis, Nylon-Eating Bacteria, 2007)! This overriding truth of never being able to violate the entropy of poly-constrained information by natural means applies to the “non-living realm” of viruses, such as bird flu, as well (Ryan Lucas Kitner, Ph.D., 2006).
I would also like to point out that scientists have never changed any one type of bacteria into any other type of bacteria, despite years of exhaustive experimentation trying to do so. In fact, it is commonly known that the further scientists deviate any particular bacteria type from its original state, the more unfit for survival the manipulated population quickly becomes. As the esteemed French scientist Pierre-P. Grassé stated: “What is the use of their unceasing mutations, if they do not change? In sum, the mutations of bacteria and viruses are merely hereditary fluctuations around a median position; a swing to the right, a swing to the left, but no final evolutionary effect.” Needless to say, this limit to the variability of bacteria is extremely bad news for the naturalists.

Psalm 104:24 O Lord, how manifold are your works! In wisdom you have made them all. The earth is full of Your possessions. —bornagain77
August 18, 2007 at 01:48 PM PDT
Art,
Patrick, I’m happy to learn that the ID camp has abandoned the notion that proteins are information-rich moieties. Of course, this means that DNA is also low-information. I’ve been waiting for more than 10 years for IDists to catch up to this realization.
I'm waiting for you to tell me exactly when and where someone considered to be part of the ID movement made the prediction that individual proteins would contain larger amounts of information than previously estimated. The only similar prediction I remember over the years was relevant to "junk DNA", and we all know how that's turning out. Does anyone else know the prediction he is talking about? —Patrick
August 18, 2007 at 08:02 AM PDT
Art

1. "Some time in the past, 4+ billion years ago, the earth was a lifeless place. A bit more recently, 3+ billion years ago, life existed on earth. These two statements are unremarkable and true. And they say something pretty simple – abiogenesis, life from non-life, happened."

One can also say that 4 billion years ago the earth was a violent molten piece of rock and that 3.5 billion years ago life appeared. Neither is absolutely true; both are provisionally true, the former with a lot more confidence than the latter. There are only suggestions in the strata that life was around that long ago.

2. "Before there was life on earth, there was chemistry. Every scientist who studies these things would agree with this."

Before there were stars and galaxies there was chemistry, and quantum mechanics, and electricity, and mechanical forces too; so while the statement is true, I'm not sure what point it makes.

3. "Every process in a cell that has been studied has been found to be a matter of chemistry."

Not at all true. Electrical, chemical, and mechanical forces are all at play in cellular processes. Information storage, retrieval, modification, and translation is a process in a class by itself, with electrical, chemical, and mechanical structures and processes serving as media and mechanisms.

4. "There is still a gray area, a time between the epoch ..."

Point 4 is invalid, as it's based on the false premise in point 3.

By the way, there are no such things as "Fox protocells" or anyone else's protocells. This appears to be an urban legend concocted by armchair abiogenesis pundits. No one has fabricated a protocell, including Fox. —DaveScot
August 18, 2007 at 05:45 AM PDT
Hi Patrick: I didn't comment on that one because I thought the loud trumpeting of an elephant being hurled makes the point eloquently. (Let's hope this claim will not be backed up by a lit bluff!) The nanotech of the cell constitutes an information system well, well beyond 500 – 2,000 bits' worth of informational complexity, and the latter already corresponds to a configuration space of something like 10^600. So we can comfortably infer that it is beyond Dembski's UPB relative to finding islands of relevant functionality. In short, we are in effect back at the point underlying Hoyle's comment that the odds of getting to the 2,000 enzymes of life by chance are of order 1 in [10^20]^2,000 ~ 1 in 10^40,000. Art's confident assertion on failed predictions fails to pass the common-sense test. GEM of TKI —kairosfocus
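As a quick sanity check, the two exponents quoted in the comment above can be verified with a couple of lines of arithmetic (a sketch only — the underlying biological figures, 2,000 bits and 2,000 enzymes at 1-in-10^20 each, are simply as the commenter states them):

```python
import math

# 2,000 bits of configuration span 2^2000 possibilities;
# in powers of ten that is 10^(2000 * log10(2)) ~ 10^602.
bits_exponent = 2000 * math.log10(2)
print(round(bits_exponent))  # 602

# Hoyle's figure: 2,000 enzymes, each at odds of 1 in 10^20,
# compounds to (10^20)^2000 = 10^(20*2000) = 10^40000.
print(20 * 2000)  # 40000
```

So "something like 10^600" and "1 in 10^40,000" are both consistent with the stated inputs.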
August 18, 2007 at 02:04 AM PDT
Recall, now, one of the failed predictions of ID – that the information content of functional macromolecules would be high, enough so as to support some sort of design inference. As a matter of fact, direct experimental measurements as well as the success of bioinformatics tools for identifying function in newly-sequenced genes tell us that the informational content of proteins is inherently low.
Eh? I must have missed that one. Where did anyone in the ID movement make a specific prediction on the information content of proteins? —Patrick
August 17, 2007 at 08:52 AM PDT
Picking up: Power back on for now, and we are plainly fringish.

4] From the ARN quote: "any sizeable population of randomly-assembled chains of L-amino acids will likely have a large, diverse range of catalytic activities. As indicated by Nakashima (above), this will also hold for thermal proteins and their protocell products, entities that could readily (probably copiously) form in prebiotic conditions."

If you redefine and broaden the target zone enough [sounds familiar?], it becomes meaningless, of course – note how already, amino acids may have some catalytic properties. More to the point, Art needs to show, not just assert, how the relevant amino acids, much less proteins, much less cells based on the DNA-RNA-Enzyme-Ribosome etc. system, credibly and with sufficient probability originated in reasonable prebiotic environments. Otherwise this is yet another just-so story.

5] Latest ARN cites . . . Seem, unfortunately, to be more of the same: reiterating assertions rather than actually substantiating beyond the level of just-so stories with huge gaps, the last one being openly a tentative suggestion.

6] Sal, 63 (still worth a follow-up): "The ARN link seemed to be saying that protocells had been produced by Fox and that appears to be urban legend." Actually, in TMLO, it seems that the term "protocells" and the like were introduced among researchers who were impressed with the sort of list of parallels to cells in table 10-1:
Because of the many similar properties between microspheres and contemporary cells, microspheres were confidently called protocells, the link between the living and nonliving in evolution. Similar structures were given the names plasmogeny 44 (plasma of life) and Jeewarnu 45 (Sanskrit for “particles of life”) . . .
But the problem, as already highlighted, is that the actual processes bear little relationship to life processes at the cellular level. So, again, an overstated resemblance, highlighted prematurely. GEM of TKI —kairosfocus
August 17, 2007 at 05:05 AM PDT
All: First, thanks for the expressed concerns. On the weather situation in M'rat: seems Dean is cutting into the region just N of Barbados, so, so far, just wind here, maybe up to 40 – 50 mph in gusts. Rains have just now begun, with a bang. (Now my concern is that it may have done a number on the farmers in Dominica and St Lucia. But more so, projections put it very near Jamaica at Cat 4 on Sunday. Let's hope and pray it does an Ivan if so – ducks away from Jamaica by a mysterious swerve. And onward, let's hope it does not do a Katrina etc.)

On a few notes:

1] DS, 67: "There may be hope that evolutionary biologists will figure out that statistical probability isn't something you get to ignore when it doesn't agree with your dearly held convictions."

True, true. But let us also note that the issue is not really whether life started on earth or in a comet by the mechanisms discussed, but the relative likelihood of same! [In short, it is EVEN MORE IMPROBABLE, by 10^24:1 against, that life started on a comet than on earth. And that life started on earth through chance + necessity abiogenesis scenarios, on the geology at work and the atmosphere that is plausible, is improbable in the extreme.]

2] Art, 68: "It is not appropriate to assume that I am thinking of proteinoids as linear predecessors of proteins as we know them, or of Fox's living protocells as linear predecessors of life as we know it."

I am pointing out, by excerpting TMLO and remarking, that the comparisons between proteinoids and proteins, and between the properties of proteinoid microspheres and living cells, were arguably greatly exaggerated, circa 1984. I have but little reason to infer that today's situation has done much to revise that judgement. Indeed, being a little less rushed just now, here is a bit more from p. 174, on catalysis:
Fox et al state that “microparticles possess in large degree the rate enhancing activities of the polymers of which they are composed” 47 . . . If the protein by itself has a catalytic property, it seems very logical that the protein would retain that property when put into a micelle. The catalytic property is not due to any special property the microsphere possesses. The increase in reaction rate observed in microspheres is very small by comparison to the increase seen in true enzymes (where the rate increase factors are in the billions – 10^9). Furthermore, much of the rate increase is due to the amino acids themselves, not the proteinoid . . .
What empirical data do you have, Art, that overthrows that observation?

3] Re my: “enzymes have extreme functional specificity based on highly specific coded chaining, so ‘catalytic ability’ in the abstract is not exactly deeply relevant to the issue of forming life as we observe it” – “this is just plain wrong.”

I hear your assertion. Am I to understand that you hold that, in effect, any random, easily random-walk-accessed protein or proteinoid chain will function more or less “just as well” as the specific cluster of DNA-controlled enzymes we see in cells? (I.e., that bio-functionality is a matter of easily accessed, closely spaced stepping stones in the config spaces relevant to the matter at hand? That is how I seem to see your arguing at ARN, e.g. in attempted rebuttal to Meyer, Axe et al.) At least, that is what your remarks, which I now excerpt from March 2000, seem to suggest:
Recall, now, one of the failed predictions of ID – that the information content of functional macromolecules would be high, enough so as to support some sort of design inference. As a matter of fact, direct experimental measurements as well as the success of bioinformatics tools for identifying function in newly-sequenced genes tell us that the informational content of proteins is inherently low. From a practical perspective, this means that any sizeable population of randomly-assembled chains of L-amino acids will likely have a large, diverse range of catalytic activities. As indicated by Nakashima (above), this will also hold for thermal proteins and their protocell products, entities that could readily (probably copiously) form in prebiotic conditions. And, as indicated in Nakashima’s review, one such property would include the ability to synthesize oligonucleotides.
Do you mean that the precise DNA-controlled sequences that form enzymes are a matter of low-information accident, and that more or less any random pattern of monomers and/or a sloppy replication system would have worked just as well, or at least adequately? If my summary is anywhere near accurate to your view, kindly explain to me on this basis the origin, function, and preservation of the cellular process of DNA-reading, code- and codon-based protein chaining, and the enzymes it uses, on your thesis.

-> Power has dropped now, so I cut here and post, or at least try. GEM of TKI —kairosfocus
August 17, 2007 at 03:03 AM PDT
Hang in there, KF. —tribune7
August 16, 2007 at 08:02 PM PDT
kairosfocus, three things.

1. Sorry about messing up your name above. For some reason, I had "Karo Syrup" in my mind. Go figure.

2. Good luck with the upcoming weather.

3. Please take the time to read the various things I have pointed to. It is not appropriate to assume that I am thinking of proteinoids as linear predecessors of proteins as we know them, or of Fox's living protocells as linear predecessors of life as we know it. This is putting the cart way before the horse, and I am not trying to do this. If you can understand this, then you will better see the gist of my essay. As far as the statement
"enzymes have extreme functional specificity based on highly specific coded chaining, so “catalytic ability” in the abstract is not exactly deeply relevant to the issue of forming life as we observe it"
this is just plain wrong. The 6-page discussion on ISCID that I pointed to spells out the many ways that this is so. The same discussion provides a wet-bench, experimentally-supported contrast to the assertion (unsupported by direct experimentation) regarding "the algorithmic, stored-data controlled framework of life in actual cell-based life." Finally, I'll add one more ISCID discussion that brings into play some concepts that confound the clean engineering POV even more. As always, enjoy (and again, stay out of harm's way as much as is possible). http://www.iscid.org/ubb/ultimatebb.php?ubb=get_topic;f=6;t=000551#000000 —Art2
August 16, 2007 at 06:41 PM PDT
Patrick

"The researchers calculate the odds of life starting on Earth rather than inside a comet at one trillion trillion (10 to the power of 24) to one against."

That's better than the odds of falciparum coming up with any structure requiring more than 3 interdependent mutations in a trillion trillion chances. There may be hope that evolutionary biologists will figure out that statistical probability isn't something you get to ignore when it doesn't agree with your dearly held convictions. —DaveScot
August 16, 2007 at 06:02 PM PDT
PS: The excerpt from TMLO Ch 10 was from p. 174, my paperback edn. —kairosfocus
August 16, 2007 at 04:08 PM PDT
All: I have hatches to batten down, so to speak [trees beginning to moan and wave a bit more than usual at evening time now; but I doubt we'll see more than 50 mph here in M'rat]; and the projections for my family back in Jamaica are not so good at all -- Cat 4 on or about Sunday is no fun. Maybe, DV, this one will do an Ivan jump and miss . . . time for a bit of kneeology [as one of the weather experts comments]? So, pardon my being a bit summary, esp. with Art's comment.

On that, I think it a bit amusing that he could zero in on one point while missing the major pattern of gaps between what S. Fox did, as reported by the mid-80's, and the realities of proteins in cell-based life; as well as the extensions thereto in proteinoid microspheres, which TBO specifically discuss in Ch 10, BTW; cf. Table 10-1. If he glances back, he will also see that I was giving a quick response to Sal's remark in 60, so that was no red herring at all – note the non-proteinaceous bonding, the racemic forms, and much more in the excerpt, and the ref to Ch 10, sadly not online.

On the stress on catalytic activity etc. in his proteinoids, let's just say that enzymes have extreme functional specificity based on highly specific coded chaining, so “catalytic ability” in the abstract is not exactly deeply relevant to the issue of forming life as we observe it – science-fiction alternative possible worlds notwithstanding. (Any credible detailed pathway from microspheres to cells, thence life as we know it; esp. where codes, data storage, information and functionally specified complexity come from? Absent these we are just looking at so many imaginative just-so stories . . . )

On the various other protocell claims, the TBO remarks are in Ch 10, which is unfortunately not online, and I have little time to type out from the text. What I can first excerpt briefly and remark on is Nakashima's:
“(p)roteinoids or proteinoid microspheres have many activities. Esterolysis, decarboxylation, amination, deamination, and oxidoreduction are catabolic enzyme activities. The formation of ATP, peptides or oligonucleotides is synthetic enzyme activities.” [NB, cf. TMLO table 10-1 on this; nothing truly new there beyond what was reckoned with and addressed by TBO circa 1984, IMHCO, tellingly.]
Proteinoids, as TBO pointed out very relevantly [as excerpted above], are simply not proteins, starting with bonding patterns across monomers in the chains. TBO also observe, and I am excerpting desperately:
. . . microspheres are simply proteinoids attracted together (by physical forces) into a somewhat ordered spherical structure. Here, too, the spherical structure is due to the attraction of the hydrophilic parts of the proteinoids to water and of the hydrophobic parts to each other . . . catalytic activity of the microspheres is not due to any special structure the microsphere possesses . . . much of the rate increase seen in proteinoids is due to the amino acids themselves, not the proteinoids . . .
In short, neither the agglomeration of amino acids into proteinoids nor the further agglomeration into microspheres seems to add materially to the existing properties of the amino acids [and where do these credibly come from in an OOL scenario, relative to the geological, atmospheric and astronomical situations?], save, of course, for the effects of essentially random, predominantly non-protein-like chaining triggered by heat.

As to the wider pattern of activities listed in the first ARN link, Art glided over the algorithmic, stored-data-controlled framework of actual cell-based life. So he changed the subject from the real world to a sci-fi world. It is the real world that we need to explain.

Gotta go now; stole a few moments after checking Accuweather and WU . . . Back in touch maybe on the morrow, DV, depending. GEM of TKI —kairosfocus
August 16, 2007 at 04:02 PM PDT
Homochirality is perhaps the biggest single problem for OOL.
Agreed. As far as I know, the latest attempt has been studying circularly polarised ultraviolet light in space, and even that was only capable of producing an excess of 2.6% left-handed amino acids. Perhaps there have been more studies on that issue, and that's why they're saying stuff like this: http://www.sciencedaily.com/releases/2007/08/070814093819.htm
The researchers calculate the odds of life starting on Earth rather than inside a comet at one trillion trillion (10 to the power of 24) to one against. Professor Wickramasinghe said: "The findings of the comet missions, which surprised many, strengthen the argument for panspermia. We now have a mechanism for how it could have happened. All the necessary elements - clay, organic molecules and water - are there. The longer time scale and the greater mass of comets make it overwhelmingly more likely that life began in space than on earth."
Then there is “Spontaneous emergence of homochirality in noncatalytic systems”, November 2004, Proceedings of the National Academy of Sciences: http://www.pnas.org/cgi/conten.....1/48/16733

Their theoretical model describes a dynamic system of amino acids joining and disjoining with a free flow of energy and ingredients. In the best-case scenario, provided that all the ingredients are present in the right conditions, this system might produce about 70% of one hand in a few centuries (a value that stabilizes and does not rise higher). Even this does not form polypeptide chains, only an excess of one hand in the amino acids. They say that the formation of the first prebiotic peptides is not a trivial problem, as free amino acids are poorly reactive (peptide bonds tend not to form in water). To solve this part of the problem, they imagine alternating wetting and drying periods and the presence of N-carboxyanhydrides to activate the amino acids. The tests required fairly high concentrations of ingredients, and specific temperature and acidity. They couldn’t get any single-handed chains to result, but still feel their model is better than the usual direct autocatalytic reaction models, which they view as “dubious in a prebiotic environment.” Old thread on the subject: https://uncommondescent.com/management/putting-the-cart-before-the-horse/
Comments sometimes get eaten for reasons unknown.
To add to that, I'd suggest to everyone that if you're writing a long response, you copy it somewhere safe before hitting the "submit comment" button. It's maddening to write an article-length response just to have it zapped. —Patrick
August 16, 2007 at 12:47 PM PDT
art

Thanks. I found what I was looking for with Google Scholar shortly after I wrote the urban-legend question. The ARN link seemed to be saying that protocells had been produced by Fox, and that appears to be urban legend. Microspheres were produced that resembled the exterior of certain cells, but that's as far as it went. Spheres are a very common shape in nature, especially when boiling liquids are involved. The polypeptides formed were racemic.

Homochirality is perhaps the biggest single problem for OOL. As far as I know, the only way anyone has found to produce homochiral monomers is by forming them in a strong laser beam, where the polarity of the coherent light aligns the reactants. Natural lasers of high intensity and stability are known to occur (rarely) in some young solar systems, so it isn't out of the question as a source of homochiral monomers.

Francis Crick's opinion is still spot on today: "An honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle, so many are the conditions which would have had to have been satisfied to get it going." —DaveScot
August 16, 2007 at 06:31 AM PDT
Kairosfocus, if you had read the ARN post, you would know that most of the points you mention are irrelevant, red herrings of sorts. The exception is the claim that thermal proteinoids do not possess catalytic activities. In this, TBO are wrong, plain and simple. The discussions on the ARN board, as well as two of the three recent references I posted, are quite explicit in this regard.

The bit about catalytic activity is important and interesting. I have no idea why TBO would make such an obviously incorrect claim, but the fact that relatively low-information collections of polymers possess catalytic activities of various sorts (the list is interesting, even provocative) tells us something about the ID theorists' claims about CSI, etc. Indeed, one might see Fox's work as a presaging of sorts of the experiments that have shown that functional proteins are actually low-information entities.

A couple of ISCID threads elaborate more on the latter matter. The first is long and detailed, but it pretty nicely lays to rest this mistaken ID tenet (you must read all 6 pages, otherwise you will not get the points). The second is my own twist on a theme that pops up from time to time. Enjoy.

http://www.iscid.org/ubb/ultimatebb.php?ubb=get_topic;f=6;t=000145;p=1
http://www.iscid.org/ubb/ultimatebb.php?ubb=get_topic;f=6;t=000035#000000 —Art2
August 16, 2007 at 05:16 AM PDT
Art, Dave and Sal: The drubbing is online, in the three chapters of TBO's TMLO, esp. Ch 9, though Ch 10, available only on paper, is also relevant to the sort of scenarios that are so often touted but do not stand up to even first-level serious scrutiny from a less-than-credulous perspective. I excerpt:
Sidney Fox31 has pioneered the thermal synthesis of polypeptides, naming the products of his synthesis proteinoids. Beginning with either an aqueous solution of amino acids or dry ones, he heats his material at 200°C for 6-7 hours. [NOTE: Fox has modified this picture in recent years [i.e. to 1984] by developing "low temperature" syntheses, i.e., 90-120°C. See S. Fox, 1976. J Mol Evol 8, 301; and D. Rohlfing, 1976. Science 193, 68]. All initial solvent water, plus water produced during polymerization, is effectively eliminated through vaporization. This elimination of the water makes possible a small but significant yield of polypeptides, some with as many as 200 amino acid units. Heat is introduced into the system by conduction and convection and leaves in the form of steam. The reason for the success of the polypeptide formation is readily seen by examining again equations 8-15 and 8-16. Note that increasing the temperature would increase the product yield through increasing the value of exp(-ΔG/RT) [Cf. Chs 7-8 on this; my always-linked App 1 gives a short version]. But more importantly, eliminating the water makes the reaction irreversible, giving an enormous increase in yield over that observed under equilibrium conditions by the application of the law of mass action. Thermal syntheses of polypeptides fail, however, for at least four reasons. First, studies using nuclear magnetic resonance (NMR) have shown that thermal proteinoids "have scarce resemblance to natural peptidic material because beta, gamma, and epsilon peptide bonds largely predominate over alpha-peptide bonds."32 [NOTE: This quotation refers to peptide links involving the beta-carboxyl group of aspartic acid, the gamma-carboxyl group of glutamic acid, and the epsilon-amino group of lysine, which are never found in natural proteins. Natural proteins use alpha-peptide bonds exclusively.]
Second, thermal proteinoids are composed of approximately equal numbers of L- and D-amino acids in contrast to viable proteins with all L-amino acids. Third, there is no evidence that proteinoids differ significantly from a random sequence of amino acids, with little or no catalytic activity. [It is noted, however, that Fox has long disputed this.] Miller [of Miller-Urey!] and Orgel have made the following observation with regard to Fox's claim that proteinoids resemble proteins: The degree of nonrandomness in thermal polypeptides so far demonstrated is minute compared to nonrandomness of proteins. It is deceptive, then, to suggest that thermal polypeptides are similar to proteins in their nonrandomness.33 Fourth, the geological conditions indicated are too unreasonable to be taken seriously. As Folsome has commented, "The central question [concerning Fox's proteinoids] is where did all those pure, dry, concentrated, and optically active amino acids come from in the real, abiological world?"34 . . .
Maybe Art can enlighten us further on these issues and the recent developments he links, etc.? How do they overcome the four issues identified, and the other challenges to OOL scenarios and models? GEM of TKI —kairosfocus
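An aside on the exp(-ΔG/RT) factor in the TMLO excerpt above: the claim that raising the temperature raises the equilibrium yield (for an endergonic reaction like peptide-bond formation in water) can be sketched numerically. The ΔG value below is an assumed, purely illustrative figure, not one taken from the excerpt:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_factor(delta_g: float, temp_k: float) -> float:
    """exp(-dG/RT): a larger value means a higher equilibrium yield."""
    return math.exp(-delta_g / (R * temp_k))

# dG = +10 kJ/mol is an assumed figure for illustration only.
dG = 10_000.0  # J/mol
room_temp = equilibrium_factor(dG, 298.0)  # ~25 C
fox_temp = equilibrium_factor(dG, 473.0)   # ~200 C, Fox's thermal regime
print(fox_temp > room_temp)  # True: for dG > 0, raising T raises exp(-dG/RT)
```

Of course, as the excerpt itself notes, the larger effect comes from driving off the water and making the reaction irreversible, not from this equilibrium factor alone.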
August 16, 2007 at 01:05 AM PDT