
Elsevier publishes Granville Sewell’s latest on the Second Law


Elsevier has just published Granville Sewell’s “A Second Look at the Second Law” (Applied Mathematics Letters, June 2011):

ABSTRACT: It is commonly argued that the spectacular increase in order which has occurred on Earth does not violate the second law of thermodynamics because the Earth is an open system, and anything can happen in an open system as long as the entropy increases outside the system compensate the entropy decreases inside the system. However, if we define ‘‘X-entropy’’ to be the entropy associated with any diffusing component X (for example, X might be heat), and, since entropy measures disorder, ‘‘X-order’’ to be the negative of X-entropy, a closer look at the equations for entropy change shows that they not only say that the X-order cannot increase in a closed system, but that they also say that in an open system the X-order cannot increase faster than it is imported through the boundary. Thus the equations for entropy change do not support the illogical ‘‘compensation’’ idea; instead, they illustrate the tautology that ‘‘if an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable’’. Thus, unless we are willing to argue that the influx of solar energy into the Earth makes the appearance of spaceships, computers and the Internet not extremely improbable, we have to conclude that the second law has in fact been violated here.
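The "equations for entropy change" mentioned in the abstract follow a standard pattern, and it may help to see them. Here is a minimal sketch of the textbook argument (an editorial reconstruction, not a quotation from the paper), for the case where X is heat, with heat capacity normalized to 1. Let u(x, t) be the temperature in a region Omega, evolving by pure diffusion with conductivity K > 0, and define the thermal entropy S:

\[ u_t = \nabla \cdot (K \nabla u), \qquad S(t) = \int_\Omega \ln u \, dV. \]

Differentiating under the integral and integrating by parts gives

\[ \frac{dS}{dt} = \int_\Omega \frac{\nabla \cdot (K \nabla u)}{u} \, dV = \int_\Omega \frac{K |\nabla u|^2}{u^2} \, dV + \oint_{\partial\Omega} \frac{K}{u} \frac{\partial u}{\partial n} \, dA \ \ge\ \oint_{\partial\Omega} \frac{K}{u} \frac{\partial u}{\partial n} \, dA, \]

since the volume term is non-negative. The boundary integral is the entropy flux through the surface: if the region is insulated it vanishes and S cannot decrease (the closed-system law), while in an open region S can decrease no faster than entropy is exported through the boundary. Equivalently, "X-order" can increase no faster than it is imported, which is the inequality the abstract is describing.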

Comments
Ulrich Mohrhoff has an interesting discussion of Sewell's earlier paper in "Sewell on Darwinism and the Second Law," AntiMatters 1(2), 2007, pp. 61-70. DLH
OT kf; I just watched this video and found it interesting and thought provoking. Perhaps you will find it so as well. Evidence For Heaven (Pt 1) NDE http://www.youtube.com/watch?v=rR_a8ByUuBc bornagain77
Onlookers: Came back by. I see the proposition of a red herring autocatalytic reaction set, in a place where the observed life forms do not base replication of a cell on autocatalysis. That sort of irrelevant distractor is enough to tell us what is going on here. To get to the observed system we need to account for the origin of codes, that code information on the working molecules of life, machines to give effect to those codes, and the algorithms and programs to do that. Just picking a few typical proteins of length 300 or so AA's will rapidly show that the configuration spaces implied by such are well beyond the credible reach of our observed cosmos acting as a search engine, without intelligent direction. Life forms are well north of 100's of proteins. GEM of TKI kairosfocus
The excitement over the article's publication may be short-lived. There is a report that Applied Mathematics Letters will be removing the paper and rescinding the acceptance. Muramasa
kairosfocus, It is evidently very hard for you to admit that you do not know the phase space of self-replicating systems. You can define no probability measure, and therefore you can get nowhere with any permutation of FIASCO. Mathematical analysis and simulations indicate that an autocatalytic set may be much simpler than the RNA-based system constructed by Lincoln and Joyce (2010). A sensational result of the study by Lincoln and Joyce was that recombinant replicators emerged spontaneously, and came to dominate the population. That is, there was a qualitative change in the system that no one would have predicted. The process of reproduction became more complex, in the plain-language sense of the term. I defy you to show how recombination was front-loaded into the study. Noesis
F/N: maybe I should underscore: the functional specificity of digital code is amply shown by the amount of time and effort given over to debugging to get code right. For language more generally, spell checks and grammar checks are eloquent testimony. kairosfocus
Noesis: The self replication is based on code, uses algorithmic processing, and is coupled to the replication of a metabolising entity that has a replicating facility. Information required to fulfill the functionality is well beyond 100 k bits of storage. Can you show me a case of observed FSCI where the source is known and is chance plus necessity rather than intelligence? What is the empirically known source of algorithms, code and programs? Chance and necessity, or intelligence? Going beyond, can you show me that the functional configs are not on deeply isolated islands in a wider space where the overwhelming majority are non-functional? So, what is the predictable result of a random walk search on the gamut of our observed cosmos? In short, YOU, PRECISELY, ARE NOT GIVEN AN INITIAL REPLICATING ENTITY. Nor is this ex post facto painting of targets around wherever one happens to hit. We do observe function, and we do observe that it is digitally coded. We do observe that digitally coded prescriptive info is sensitive to changes, i.e. has the sort of pattern we have described as islands of function. Those become facts to be explained cogently. Similarly, given the usual pre-biotic scenarios, you have to move from some version or other of Darwin's warm little electrified pond, or a hot sea vent, or a comet or whatever, and plausibly -- on empirically verified evidence -- get to the sort of self-replicating metabolic entity described for cell based life. The problem, plainly, is, you have no credible bridge from that pond to those metabolising vNSR cells. And, you have no credible explanation for FSCI other than intelligence. GEM of TKI PS: Cf here, for some more details on the vNSR. kairosfocus
"not to mention" not "no to mention" Collin
Noesis, Evolution yes, but darwinism no. You are being equivocal with the word evolution. The main question remains "where does the information come from?" (no to mention, where does the mechanism the registers the information come from?) Collin
kairosfocus:
To plausibly find such an island of function in such a space is of course well beyond observability on the gamut of our cosmos, on chance plus necessity without intelligent direction.
Framing the function and the space a posteriori is invalid. As I wrote in my first post of the thread,
But given an initial self-replicator and a complex environment, the space of self-replicators that might arise in the course of evolution is unknowable. Thus there is no way to associate probabilities with forms of self-replicators that might arise in the future. It is bogus, then, to claim after the fact that the end-results of an evolutionary trajectory are improbable.
The fact is that you cannot begin to talk about probability without specifying -- yes, that is you doing the specification -- a discrete component of something you have already observed. Noesis
bornagain, I do not know how life originated on Earth. But I do know that it is logically fallacious to claim that the form of life we exemplify is the only form of life there can be. Even if life on earth is the work of a designer, that does not imply that design is required for life. There is no reason to believe that the RNA-based autocatalytic set constructed by Lincoln and Joyce (2010) is the simplest possible. It is a matter of logic that from variety, heredity, and fecundity comes evolution. The variegation of life is a matter of registering information, not creating it. Noesis
Kairosfocus, Thanks for answering my question. Also thanks for the Paley quote. I think that self-replication does not save the Darwinist because it only adds complexity and "poly-constrained complexity" (as BA would put it) to the system. There are merely more ways for the system to completely break down. Collin
BA: Re your excerpt:
two rRNAs with a total size of at least 1000 nucleotides – ~10 primitive adaptors of ~30 nucleotides each, in total, ~300 nucleotides – at least one RNA encoding a replicase, ~500 nucleotides (low bound) is required . . .
800 nucleotides at 4 states each implies 4^800 ~ 4.45 x 10^481 configs in the implied space. To plausibly find such an island of function in such a space is of course well beyond observability on the gamut of our cosmos, on chance plus necessity without intelligent direction. GEM of TKI kairosfocus
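A quick sanity check of the arithmetic above, as a short Python sketch (an editorial addition; only the standard library is used):

from math import log10

# log10(4^800) = 800 * log10(4), so recover mantissa and exponent
exponent = 800 * log10(4)            # ~481.65
mantissa = 10 ** (exponent % 1)      # ~4.45
print(f"4^800 ~ {mantissa:.2f} x 10^{int(exponent)}")

This prints 4^800 ~ 4.45 x 10^481, matching the figure quoted.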
PS: I also need to underscore that the point about the vNSR based self-replication is ADDITIONALITY, as emphasised by the much despised Paley in Ch 2 of his book (the chapter and extension of the watch discussion that somehow seems to be almost always silently passed over in typical dismissive discussions): ________________ >> Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself -- the thing is conceivable; that it contained within it a mechanism, a system of parts -- a mold, for instance, or a complex adjustment of lathes, baffles, and other tools -- evidently and separately calculated for this purpose . . . . The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done -- for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair -- the author of its contrivance, the cause of the relation of its parts to their use. >> ________________ What we have here through the vNSR facility is a system that does its own thing, AND is able, by having a coded representation, to replicate itself. Autocatalytic chemical reaction sets and computer cellular automata simply do not exhibit that key distinction. GEM of TKI kairosfocus
Noesis: Autocatalytic reaction sets, set up by chemists under highly specialised conditions, are utterly irrelevant to the observed metabolising, and code based self-replicating automata that we SEE in the living cell. To put the one forth as the root of the other without a very solid empirically -- observationally -- based explanation of how the one becomes the other, is to put forth a red herring led away to a strawman. Or, to use a metaphor tinged with the insightful analogies that are so often dismissed when it is convenient for materialist advocates to do so: a live donkey kicking the carcass of a dead lion. GEM of TKI kairosfocus
Noesis, In your imagination it seems you have this whole origin of life thing worked out, at least to the point that you feel the huge problems for acquiring even a single functional protein are no big deal. If you can just get that 'sure hunch' you have worked out in your imagination into a 'real world' solution, there is a One Million dollar prize awaiting you; The Origin Of Life Prize Excerpt: "The Origin-of-Life Prize" ® (hereafter called "the Prize") will be awarded for proposing a highly plausible natural-process mechanism for the spontaneous rise of genetic instructions in nature sufficient to give rise to life. The explanation must be consistent with empirical biochemical, kinetic, and thermodynamic concepts as further delineated herein, and be published in a well-respected, peer-reviewed science journal(s). http://www.us.net/life/index.htm Noesis, I also noticed that you 'took it for granted' that once you had self-replication of some sort then, ba da boom, ba da bing, darwinian evolution would be a snap for you,,, It seems you have not been informed of the fact that Darwinists have not even demonstrated an increase in functional information over and above what was already present in life; The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov's papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka's work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon "information." http://www.scitopics.com/The_GS_Principle_The_Genetic_Selection_Principle.html The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: "If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.",,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided.
The time has come to extend this null hypothesis into a formal scientific prediction: "No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone." http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on "wish-fulfillment," we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag ,, And yet Noesis, though no one has ever witnessed material, or Darwinian, processes generate any non-trivial functional information whatsoever, though the simplest life ever found on earth easily outclasses what our best computer programmers have wrought;,,,, Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information - David L. Abel and Jack T. Trevors - Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8 "No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms' genomes programmed?" http://www.biomedcentral.com/content/pdf/1742-4682-2-29.pdf ,,, you yourself, in your few short posts you have submitted thus far, have submitted more information than can reasonably be expected to be generated by the entire universe over the entire history of the universe,,, multiplied by Planck time for good measure!!!! Thus you yourself are presenting concrete evidence of a unique presently acting cause that is alone sufficient to explain the effect in question,, i.e. where did the information come from? Stephen C. Meyer - The Scientific Basis For the Intelligent Design Inference - video http://www.metacafe.com/watch/4104651 Further notes; Even the low end 'hypothetical' probability estimate given by an evolutionist, for life spontaneously arising, is fantastically improbable: General and Special Evidence for Intelligent Design in Biology: - The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of: - two rRNAs with a total size of at least 1000 nucleotides - ~10 primitive adaptors of ~30 nucleotides each, in total, ~300 nucleotides - at least one RNA encoding a replicase, ~500 nucleotides (low bound) is required.
In the above notation, n = 1800, resulting in E < 10^-1018. That is, the chance of life occurring by natural processes is 1 in 10 followed by 1018 zeros. (Koonin's intent was to show that short of postulating a multiverse of an infinite number of universes (Many Worlds), the chance of life occurring on earth is vanishingly small.) http://www.conservapedia.com/General_and_Special_Evidence_for_Intelligent_Design_in_Biology Evolutionist Koonin's estimate of 1 in 10 followed by 1018 zeros, for the probability of the simplest self-replicating molecule 'randomly occurring', is a fantastically small probability. The number 10^1018, if written out in its entirety, would be a 1 with one-thousand-eighteen zeros following to the right! The universe itself is estimated to have only 10^80 particles in it, i.e. a 1 with 80 zeros following to the right. This is clearly well beyond the 10^150 universal probability bound set by William Dembski and is thus clearly an irreducibly complex condition. Koonin, when faced by the sheer magnitude of his own numbers, makes a 'desperate', though very imaginative, appeal to the never before witnessed quantum mechanism of Many Worlds. Basically Koonin, in appealing to never before observed quantum mechanisms, clearly illustrates that the materialistic argument essentially appears to be like this: Premise One: No materialistic cause of specified complex information is known. Conclusion: Therefore, it must arise from some unknown materialistic cause. On the other hand, Stephen Meyer describes the intelligent design argument as follows: "Premise One: Despite a thorough search, no material causes have been discovered that demonstrate the power to produce large amounts of specified information. "Premise Two: Intelligent causes have demonstrated the power to produce large amounts of specified information. "Conclusion: Intelligent design constitutes the best, most causally adequate, explanation for the information in the cell." There remains one and only one type of cause that has shown itself able to create functional information like we find in cells, books and software programs -- intelligent design. We know this from our uniform experience and from the design filter -- a mathematically rigorous method of detecting design. Both yield the same answer. (William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, p. 90 (InterVarsity Press, 2010).) The Case Against a Darwinian Origin of Protein Folds - Douglas Axe - 2010 Excerpt Pg. 11: "Based on analysis of the genomes of 447 bacterial species, the projected number of different domain structures per species averages 991. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli, provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities, something the neo-Darwinian model falls short of by a very wide margin." http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1 bornagain77
kairosfocus, John von Neumann never heard of an autocatalytic set. Nature feels no compunction about realizing self-replicating systems simpler than he imagined. Noesis
F/N: In short, we can do planned, constructive work. But, at a price of energy degradation of the cosmos. What is being forbidden, up to fluctuations [Brownian motion is such a fluctuation], is the spontaneous emergence of order, by overwhelming improbability. That is why thermodynamicians first discussed the infinite monkeys theorem. kairosfocus
Dr Bot Nope. Intelligence allows us to arrange things in ways they would not spontaneously credibly do, due to the balance of statistical weights of clusters of micro-states. (That is the root of my remarks on the likely outcomes of random walks from arbitrary initial points in config spaces with isolated islands of function. E.g., if you have a tray of 1,000 coins, the utterly overwhelming cluster of outcomes will be very near to 50-50 heads and tails in no simply describable order. That is, to describe the outcomes, you would have to list them coin by coin, basically. HTHT . . . 1,000 times over or the like would be simply describable, as would be, say, a Twitter-like message in ASCII code.) The thermodynamic price is paid elsewhere, as an energy converter has to generate waste heat etc. In thermodynamics, you cannot win, you can only break even under one condition, and you cannot get out of the house to get that one condition. Energy is conserved, entropy net rises [or under ideal conditions is constant], and the perfect condition for that is a heat sink at 0 K; which cannot be attained by a finite number of refrigeration cycles. GEM of TKI kairosfocus
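To put rough numbers on the 1,000-coin illustration, the cluster sizes can be computed exactly with Python's arbitrary-precision integers (an editorial sketch; the 450-550 window and the 700-head threshold are arbitrary illustrative choices):

from math import comb

N = 1000
total = 2 ** N                       # all equally weighted outcomes

# The overwhelming cluster near 50-50 heads and tails:
near_half = sum(comb(N, k) for k in range(450, 551))
print(f"P(450 <= heads <= 550) = {near_half / total:.4f}")  # ~0.9986

# A simply describable extreme such as 700+ heads:
tail = sum(comb(N, k) for k in range(700, N + 1))
print(f"P(heads >= 700) = {tail / total:.1e}")              # ~9e-38

Outcomes near 50-50 swamp everything else, while a simply describable extreme like 700 heads is effectively unobservable, which is the statistical-weights point being made above.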
Noesis: Dr Sewell's analysis is prior to a metabolising, self-replicating system. He is answering the question of whether an open system explains the origin of functionally specific organised, information rich complexity. The answer, on thermodynamics, is no. Next, can you address how a von Neumann self replicating system such as is described here [one that also has a separate functional unit] spontaneously self-organises, creating a digital code, algorithms, storage, specific codes for a correct set of functional proteins and execution machines and all? An irreducibly complex system prior to cell based life and a condition for self replication, thus for the chance variation and natural selection usually credited for evolution? And BTW, can you kindly give us an example of your self replicating molecules observed forming under conditions that are plausible for a real world prebiotic environment? Then, can you explain the following cluster of excerpts on OOL from leading researchers: ______________ >> In Dawkins' own words: What Science has now achieved is an emancipation from that impulse to attribute these things to a creator... It was a supreme achievement of the human intellect to realize there is a better explanation ... that these things can come about by purely natural causes ... we understand essentially how life came into being.20 (from the Dawkins-Lennox debate) "We understand essentially how life came into being"?! – Who understands? Who is "we"? Is it Dr. Stuart Kauffman? "Anyone who tells you that he or she knows how life started ... is a fool or a knave." 21 Is it Dr. Robert Shapiro? "The weakest point is our lack of understanding of the origin of life. No evidence remains that we know of to explain the steps that started life here, billions of years ago." 22 Is it Dr. George Whitesides? "Most chemists believe as I do that life emerged spontaneously from mixtures of chemicals in the prebiotic earth. How? I have no idea... On the basis of all chemistry I know, it seems astonishingly improbable." Is it Dr. G. Cairns-Smith? "Is it any wonder that [many scientists] find the origin of life to be utterly perplexing?" 23 Is it Dr. Paul Davies? "Many investigators feel uneasy about stating in public that the origin of life is a mystery, even though behind closed doors they freely admit they are baffled ... the problem of how and where life began is one of the great outstanding mysteries of science." Is it Dr. Richard Dawkins? Here is how Dawkins responded to questions about the Origin of Life during an interview with Ben Stein in the film Expelled: No Intelligence Allowed: Stein: How did it start? Dawkins: Nobody knows how it started, we know the kind of event that it must have been, we know the sort of event that must have happened for the origin of life. Stein: What was that? Dawkins: It was the origin of the first self replicating molecule. Stein: How did that happen? Dawkins: I told you I don't know. Stein: So you have no idea how it started? Dawkins: No, No, NOR DOES ANYONE ELSE. 24 "Nobody understands the origin of life, if they say they do, they are probably trying to fool you." (Dr. Ken Nealson, microbiologist and co-chairman of the Committee on the Origin and Evolution of Life for the National Academy of Sciences) Nobody, including Professor Dawkins, has any idea "how life came into being!" >> ______________ Thanks GEM of TKI kairosfocus
Perhaps I'm misunderstanding this, but is the argument basically that, because we have intelligence (the ability to design and create objects with FCSI), we are able to violate the second law? If this is the case then it raises an important question - Are there any other laws of physics that we can be free of by virtue of our intelligence, or is the SLOT a special case? DrBot
Prof. Sewell ignores entirely the physics and, more importantly, the logic of self-replicating systems. There are good reasons to believe that autocatalytic sets arise fairly often in environments that are far from thermodynamic equilibrium. Argue that point, if you like. But given an initial self-replicator and a complex environment, the space of self-replicators that might arise in the course of evolution is unknowable. Thus there is no way to associate probabilities with forms of self-replicators that might arise in the future. It is bogus, then, to claim after the fact that the end-results of an evolutionary trajectory are improbable. Even if one had magical knowledge of a space of evolutionary trajectories (defined somehow or another) and a probability distribution on that space, the probability of any particular trajectory would be exceedingly small. This goes to the fundamental error in arguments from improbability. When all trajectories of a dynamical system are highly improbable, then it is invalid to declare some trajectories "effectively impossible." Noesis
Joseph, I also replied to Maya's attempted dismissal of the concept of specified complexity, by pointing here to no 4 in the ID founds series. GEM of TKI kairosfocus
PS: I forgot: ln is log base e, e being the base of natural logarithms, 2.718 . . . kairosfocus
Collin: Cell based life requires functionally specific, digitally coded information and associated implementing nanomachines organised around not a logic of order but a logic of function. So, we have three distinct concepts: randomness, order, organisation. Organised entities are specifiable on function, but to accommodate information, they are more aperiodic than orderly systems, which tend to have various types of symmetry or a simple repeating cellular structure. What Dr Sewell is addressing is an underlying issue. The basic architecture of material things is based on atoms, molecules and the like at micro level. Associated with that micro-level arrangement of mass and energy [which includes motion], we have a quantity that measures the degree of freedom within a given macroscopic -- our scale -- description. That quantity is entropy, and one way to measure it is given by Boltzmann: s = k * ln w Where s is entropy, k is in part a measure of the quantity of energy required/used per degree of freedom [k*T, with T the temperature above absolute zero, is more or less energy per molecule], and w is a count of the number of ways things may be arranged. Disorderly things may be arranged any old how, and so have a great many ways. Orderly and organised things may be arranged in only a relatively few ways and are low entropy. Orderly things are very simply describable, and tend to be very regular, so they are not very informational. Organised things are quite specific [e.g. the string of letters in this post are in a very specific though not regularly repeating order], and that specificity, especially if it is functionally constrained, is highly informational. DNA is like that, and so are the proteins coded from DNA. What happens is that the open systems objection to the inference to design is saying that -- thanks to energy and matter flow through -- and since organised arrangements are arrangements, they can plausibly be accounted for without reference to an organiser. That is, forces of chance and/or necessity are enough to account for this. Dr Sewell's rebuttal is that in effect relevant islands of function are deeply isolated in the space of possible configurations, so if one envisions moving to configs at random, it is utterly unlikely that one will find functionally specific ones. So, the organization is information rich and is not credibly accounted for on simple flows of energy, without intelligent direction. So, a moon landing space craft is not credibly accounted for on chance configs, and a computer is not accountable for on mere openness to energy and/or mass flows either. A Jumbo Jet points to a Boeing company, not a tornado in a junkyard in Seattle. That is why he speaks of how that which is very unlikely in a closed system is no more likely in an opened up one, except something else is happening that makes the thing not unlikely. The degree of unlikely at stake is typically of order 1 in 10^150 or more, much more. The atoms of our observed cosmos, across its lifespan, will go through about 10^150 minimum time states. The minimum time in question is about 10^20 times shorter than a strong force nuclear interaction. Hope that begins to make clear. GEM of TKI kairosfocus
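To make s = k * ln w concrete with the same coin picture (an editorial sketch; the 'coins' stand in for two-state microscopic degrees of freedom, and the numbers are illustrative only):

from math import log

k_B = 1.380649e-23                  # Boltzmann constant, J/K

W_all = 2 ** 1000                   # microstates of 1,000 two-state elements
W_one = 1                           # one exactly specified configuration

# Entropy gap between "any configuration" and "this configuration":
delta_S = k_B * (log(W_all) - log(W_one))   # k_B * 1000 * ln 2
print(f"delta S = {delta_S:.3e} J/K")       # ~9.57e-21 J/K

The specified configuration has far lower entropy because w collapses from 2^1000 to 1; that is the sense in which organised, specified arrangements sit at low statistical weight.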
Kairosfocus, Thanks for answering my question. If you don't mind, I've got a second one. Is Granville saying that in order to have life, you have to have enough order and that there is insufficient order here? Or is he saying that order cannot create information no matter how much of it you have? Or is he even addressing information at all? Collin
I let him know that you are interested. Joseph
PS: Joseph, you are discussing encoding of stereophonic sound, and blending in FM broadcasts, which are in turn different from phase modulation. Just a footnote lest the usual pick a point, strawmannise and attack game is played. kairosfocus
Joseph Thanks. I hope he takes time to look at my remarks in the ID founds no 2 and in the always linked note App 1 on thermodynamics issues. BTW, this last has in it a discussion on Brillouin's negentropy view of information, and Jaynes' remarks on the informational view of thermodynamics. NE has it the wrong way around. Let's see if he can discuss the Clausius case with us, and what happens when (as with a double acting steam engine) the device contains FSCI, directly or through being a complicated irreducibly complex entity. And, do notice my App 3 discussion on things like snowflakes and hurricanes, esp. how order and functional organisation are to be distinguished. Why not let's thrash it all out here in this thread, NE? (Can you post here?) GEM of TKI kairosfocus
kairosfocus - your response is up on my blog and I went to his blog and posted a link to your response. I can tell he got it because my blog is getting hits from his. Joseph
I think the confusion is that information rides on energy - think carrier wave, as in FM. FM stereo uses a 38 kHz subcarrier - see that "stereo" light? It is lit by a 19 kHz pilot tone, from which the receiver regenerates the 38 kHz subcarrier. The stereo difference information rides on that 38 kHz subcarrier, but the information is not the carrier. Joseph
F/N: I went further along and saw this from NE:
Of course energy is not information, but information is energy . . . . You require energy to build those brains that designed the computer. You see Joe, in the end, even us cannot be accounted for without energy. Take a deep look, see if you can find any step towards your computer that did not require energy. From humans to whatever you like . . . . look carefully at the "design inferences" of Behe and partners, nothing but ignorance. No designer anywhere to be seen so that their "inference" would have some justification.
1 --> Information is not energy, but informational arrangements of matter require energy. To take just one example, the energy [heat content or enthalpy, specifically] to build a chain of monomers would be basically the same for RH or LH molecules, or even racemic versions. But only one handedness will work in life. 2 --> Just the handedness alone is one bit per monomer, a specification that is going to exponentiate, up to 2^300 for a typical length protein molecule. 3 --> Same energy, different result. 4 --> That energy is required to build and operate brains does not account for the information content of said brains. Indeed, that brain A belongs to GEM who can design and build a computer, and brain B that belongs to my son who cannot as yet do that, is a result of knowledge, education and experience, not energy flows. It arguably would require a comparable energy input to have filled my head with literary theory or music, as with physics, electronics and related areas. 5 --> My computer requires an energy flow to operate, but that energy flow is not equivalent to the information that is used in it. 6 --> Your demand for independent demonstration of a designer in the known to be remote and unobserved past is an exercise in selective hyperskepticism. 7 --> On the uniformity principle used in origins science, we look at what is causally sufficient to produce a given effect in the present. Then, we look for its characteristic signs. On inference to best empirically anchored explanation, we then use the signs to infer to the most credible cause in the past. 8 --> NE knows or should know that FSCI and IC are indeed empirically reliable signs of intelligent cause in the present; indeed there are many examples and no counter-examples. Thus, we have good reason to view them as signatures of intelligent cause. 9 --> What NE has done is to inject the a priori evo mat assumption that an intelligent cause was IMPOSSIBLE, so the past MUST be explained on chance plus necessity regardless of the problems with empirical insufficiency of cause; so the bare logical possibility of what lucky noise can do will have to do. 10 --> Begging the question in service to an ideology, in short. No wonder such so often resort to projecting theocratic motives on those who differ with them. GEM of TKI kairosfocus
Joseph I have submitted a comment in response to NE at your blog, and it should be pending your moderation. I fully understand your need to moderate comments. GEM of TKI kairosfocus
F/N: It is worth taking a moment to note some points on NE's argument at Joseph's blog: _________________ >> If you first understand that there is a relationship between energy and information, a --> Q: And, what is that link? b --> A: It is that informational configs of symbols, or information-rich functional configs of components, are at islands of function in large config spaces, and so are maximally unlikely to be reached by random walks from arbitrary initial points. c --> In addition, if thermal agitation is relevant to a system, the simple dumping in of energy will heat the object up, exponentially increasing the number of microscopic ways that energy and mass can be arranged, i.e. increasing disorder and making the information-rich islands even less likely to be arrived at by random walks. you understand that energy flow can explain information content. c --> FALLACY. This is the open systems can explain organisation error just described. NE needs to read Wicken's 1979 remark on this carefully:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
c --> Where the number of accessible configs is beyond astronomically large [1,000 or more bits of info storage capacity], this means that the selection cannot credibly be explained on chance. Then you have to understand that "irreducible complexity" is but a display of ignorance about how a system could have arisen. d --> This is little more than a resort to Dawkins' sneering prejudice that if you disagree with his evolutionary materialism, you are ignorant, stupid, insane or wicked. e --> It then resorts to the substitution of logical possibility for empirical reasonableness in light of the space of possibilities and the different possible causes of configurations. f --> That is, there is a smuggled in assumption that intelligence is effectively impossible, so, regardless of the beyond astronomical remoteness of the odds that chance plus necessity originated the functional configs by good luck, that is what "must" have happened. g --> Going beyond this, if you see a flyable Jumbo Jet, you infer to Boeing corp, not a tornado in a junkyard in Seattle, as the number of flyable configs is so remote that the functional organisation is best explained on design, not undirected chance plus necessity without intelligence. h --> And, when it comes to the origin of irreducibly complex entities, one needs to account for the configs of the parts that allow them to match and work together, then for the wiring diagram that puts them in a functional order. I assure you, a working electronics circuit is a matter of very careful specification and matching of parts in a wiring arrangement that is intelligently designed. i --> NE is ducking the force of the factors C1 - 5 discussed here in the ID foundations series It might truly be "irreducibly complex" as it exists today, but that does not preclude its appearance in steps but then some part that allowed this to happen disappeared in the realms of time. j --> This is the resort to bare logical possibility without empirical data, and in defiance of the conditions C1 - 5 which means that IC systems will naturally be well beyond 1,000 bits of complex, functionally specified information. k --> Anyone can spout the equivalent of the claim that the jumbo jet on the tarmac could logically possibly have come about from a tornado in a junkyard. But, what is really needed is to show that, based on empirical evidence. Just like, it is logically possible for a perpetual motion machine of the second kind to work at least once. SHOW it, don't speculate on what just might be. l --> The proposed steps and happy coincidences of matching sub-components is a matter of blind faith that this is what must have happened, as a priori the possibility of a designer has been ruled out by imposing evolutionary materialism by the back door route of the so called methodological naturalism. It could also be n --> More speculation. that it just looks "irreducibly complex" but it is lack of knowledge that makes it appear so, o --> Start with a metabolising entity that then has to have a coded tape based self replicating entity to replicate itself, i.e. a von Neumann self-replicator. p --> Such a vNSR requires:
(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility; (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling: (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by (v) either: (1) a pre-existing reservoir of required parts and energy sources, or (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment. Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist.
q --> Kindly explain how such an entity can come about spontaneously in steps, since as well, not only do we need these things to work together to function, but we need this facility to have the possibility that the system can vary its code and perhaps accidentally bump into an improvement that could dominate the population, i.e. this is a precondition of evolving. (Odds of that leading to a new body plan, of course, are beyond astronomical.) rather than there being authentic "irreducible complexity." s --> Logical possibility and promissory note speculation conveniently substituted for empirical, factual observation. >> ____________________ Talking points based on imposing a priori materialism are easy to spout; cogent arguments backed up by empirical evidence and logical analysis, not so easy. kairosfocus
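As a purely illustrative toy (an editorial sketch: the instruction names are invented and nothing here models chemistry), the tape/reader/constructor distinction in the vNSR list quoted above can be made concrete in a few lines of Python. The key feature is that the blueprint is copied as data rather than re-derived, which is what lets the offspring replicate in turn:

# Toy von Neumann self-replicator: a coded tape (ii), a reader/constructor
# that interprets it (iii, iv), and tape copying so offspring replicate too.
TAPE = ["build:metabolism", "build:constructor", "copy:tape"]

def construct(tape):
    """Read the tape and build a new machine, including a copy of the tape."""
    machine = {"parts": [], "tape": None}
    for instruction in tape:
        op, arg = instruction.split(":")
        if op == "build":
            machine["parts"].append(arg)   # build a named functional part
        elif op == "copy":
            machine["tape"] = list(tape)   # the blueprint is copied as data
    return machine

child = construct(TAPE)
grandchild = construct(child["tape"])      # the copy can replicate in turn
print(child["parts"], grandchild["tape"] == TAPE)

An autocatalytic set, by contrast, carries no separate coded description of itself, which is the distinction being pressed above.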
Joseph and BA: Pardon my directness, but NE is spouting specious talking points, as opposed to raising sound arguments that cogently address the real issue at stake. Worse, he is simply not facing the force of what Dr Sewell is raising, in light of, say, what the Clausius example on entropy rise in heat exchanges vs the heat flow through an energy converting device is telling us. (Cf my two examples just following, noting the diagram in the first.) In addition to BA's very helpful video link, I suggest that he needs to address the points made by Wicken et al and cited here, and the associated issues discussed here in my always linked and the onward linked TMLO, 1984. He needs to ask himself what it means to inject raw, uncorrelated energy into a system, and the consequences of that, if the only way the system can address the energy is to increase the random thermal agitation of its molecules. Dr Sewell has a very apt remark in his article [which, recall, is peer reviewed] that I was again noticing yesterday: ____________________ >> The fact that thermal entropy cannot decrease in a closed system, but can decrease in an open system, was used to conclude that, in other applications, any entropy decrease in an open system is possible as long as it is compensated somehow by entropy increases outside this system, so that the total “entropy” (as though there were only one type) in the universe, or any other closed system containing the open system, still increases . . . . The second law of thermodynamics is all about probability; it uses probability at the microscopic level to predict macroscopic change.3 Carbon distributes itself more and more uniformly in an isolated solid because that is what the laws of probability predict when diffusion alone is operative. Thus the second law predicts that natural (unintelligent) causes will not do macroscopically describable things which are extremely improbable from the microscopic point of view. The reason natural forces can turn a computer or a spaceship into rubble and not vice versa is probability: of all the possible arrangements atoms could take, only a very small percentage could add, subtract, multiply and divide real numbers, or fly astronauts to the moon and back safely. Of course, we must be careful to define “extremely improbable” events to be events of probability less than some very small threshold: if we define events of probability less than 1% to be extremely improbable, then obviously natural causes can do extremely improbable things.4 But after we define a sufficiently low threshold, everyone seems to agree that “natural forces will rearrange atoms into digital computers” is a macroscopically describable event that is still extremely improbable from the microscopic point of view, and thus forbidden by the second law—at least if this happens in a closed system. But it is not true that the laws of probability only apply to closed systems: if a system is open, you just have to take into account what is crossing the boundary when deciding what is extremely improbable and what is not. What happens in a closed system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well.
_____________ F/N 4 If we repeat an experiment 2^k times, and define an event to be “simply describable” (macroscopically describable) if it can be described in m or fewer bits (so that there are 2^m or fewer such events), and “extremely improbable” when it has probability 1/2^n or less, then the probability that any extremely improbable, simply describable event will ever occur is less than 2^(k+m)/2^n. Thus we just have to make sure to choose n to be much larger than k + m. If we flip a billion fair coins, any outcome we get can be said to be extremely improbable, but we only have cause for astonishment if something extremely improbable and simply describable happens, such as “all heads,” or “every third coin is tails,” or “only every third coin is tails.” For practical purposes, almost anything that can be described without resorting to an atom-by-atom (or coin-by-coin) accounting can be considered “macroscopically” describable. [NB: This is of course closely related to what we have called functionally specific, i.e. the observed function gives us a basis for a description that is "simple," and since relatively very few configs in a space of possibilities will be functional, it is highly unlikely to be encountered on a random walk from arbitrary initial configs, on the gamut of the observed cosmos across its lifespan]
The “compensation” counter-argument was produced by people who generalized the model equation for closed systems, but forgot to generalize the equation for open systems. Both equations are only valid for our simple models, where it is assumed that only heat conduction or diffusion is going on; naturally in more complex situations, the laws of probability do not make such simple predictions. Nevertheless, in “Can ANYTHING Happen in an Open System?” [Sewell 2001], I generalized the equations for open systems to the following tautology, which is valid in all situations:
If an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.
The fact that order is disappearing in the next room does not make it any easier for computers to appear in our room—unless this order is disappearing into our room, and then only if it is a type of order that makes the appearance of computers not extremely improbable, for example, computers. Importing thermal order into an open system may make the temperature distribution less random, and importing carbon order may make the carbon distribution less random, but neither makes the formation of computers more probable. >> [Sewell, Granville, "A Second Look at the Second Law," Applied Mathematics Letters 24 (June 2011), pp. 1022-1025, http://dx.doi.org/10.1016/j.aml.2011.01.019; preprint, pp. 5-8.] _______________________ Those who spout the open systems talking point reveal their ignorance, or manipulativeness if they know better. RX: NE needs to read Sewell and other similar sources, then seriously address the point on the merits instead of the strawman tactic talking points. GEM of TKI PS: Onlookers, see why I insisted that we needed to address the topic of note 2 in the ID foundations series? We have to understand enough of what thermodynamics is about to answer to the notion that simple dumping of raw, uncorrelated energy into a system can account credibly for the origin of functionally specific complex organisation and associated information. Not so, but we have to pull back the thermodynamics veil a bit to see why. What this boils down to is yet another form of: since I assume an intelligence at the relevant point is impossible, then, since chance can logically possibly do it, that is how it "must" have been done. Nope, you will have to show why you are confident that such an intelligence is not possible at the time and place in question, and do so in the teeth of the evidence pointing to an intelligence as the best credible explanation of the origin of the cosmos. Indeed, at the root of the cosmos, if we bring on board the relevant logic of cause for the moment, we are looking at a necessary being that is capable of causing the origin of a cosmos fine-tuned for C-chemistry cell based life. kairosfocus
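Plugging illustrative numbers into footnote 4's bound may help (an editorial example; the values of k, m and n are arbitrary). With 2^k repetitions of the experiment, descriptions of at most m bits, and an improbability threshold of 1/2^n, the chance that any simply describable, extremely improbable event ever occurs is

\[ P < \frac{2^{k+m}}{2^{n}} = 2^{k+m-n} = 2^{500+100-1000} = 2^{-400} \approx 10^{-120}, \]

taking k = 500 (about 10^150 trials, the cosmos-scale figure used in this thread), m = 100 (descriptions up to 100 bits), and n = 1000. Even on those generous terms the bound is negligible.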
further note: This 'uniqueness', and higher dimensional dominance, of information is also now found to extend into molecular biology; Quantum Information In DNA & Protein Folding - video http://www.metacafe.com/watch/5936605/ The relevance of continuous variable entanglement in DNA – June 21, 2010 Abstract: We consider a chain of harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. The binding energies between entangled and classically correlated states are compared. We apply our model to DNA. By comparing our model with numerical simulations we conclude that entanglement may play a crucial role in explaining the stability of the DNA double helix. http://arxiv.org/abs/1006.4053v1 Quantum entanglement holds together life’s blueprint Excerpt: “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Does DNA Have Telepathic Properties?-A Galaxy Insight Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn't be able to. Explanation: None, at least not yet.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible. http://www.dailygalaxy.com/my_weblog/2009/04/does-dna-have-t.html 4-Dimensional Quarter Power Scaling In Biology - video http://www.metacafe.com/w/5964041/ further notes; This following experiment clearly shows information is not an 'emergent property' of any solid material basis as is dogmatically asserted by some materialists: Converting Quantum Bits: Physicists Transfer Information Between Matter and Light Excerpt: A team of physicists at the Georgia Institute of Technology has taken a significant step toward the development of quantum communications systems by successfully transferring quantum information from two different groups of atoms onto a single photon. http://gtresearchnews.gatech.edu/newsrelease/quantumtrans.htm The following articles show that even atoms (Ions) are subject to teleportation: Of note: An ion is an atom or molecule in which the total number of electrons is not equal to the total number of protons, giving it a net positive or negative electrical charge. Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap - 2009 Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre.,,, "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. 
But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts ,,,, Let All creation bring glory to God! All Of Creation - Mercyme http://www.youtube.com/watch?v=kkdniYsUrM8 bornagain77
Joseph, it looks like neg. entro. said this, 'still, the energy from the sun was several orders of magnitude more than necessary to account for such information.' Actually, contrary to his persistent denial (I've seen this guy before), pouring more energy into any 'ordered' system increases the disorder of that 'ordered' system more quickly; Evolution Vs. Thermodynamics - Open System Refutation - Thomas Kindell - video http://www.metacafe.com/watch/4143014/ As well, neg. entro. is right in some sense: there is a fairly direct relation between energy and information, but it is not the relation that neg. entro. wants. The relation is that if you displace the 'infinite information' of a photon, the photon will cease to be, because each and every photon in the universe is actually made of infinite specified information. Explaining Information Transfer in Quantum Teleportation: Armond Duwell †‡ University of Pittsburgh Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) --- Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport. http://www.cas.umt.edu/phil/faculty/duwell/DuwellPSA2K.pdf Single photons to soak up data: Excerpt: the orbital angular momentum of a photon can take on an infinite number of values. Since a photon can also exist in a superposition of these states, it could – in principle – be encoded with an infinite amount of information. http://physicsworld.com/cws/article/news/7201 How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. --- As predicted, the original photon no longer existed once the replica was made. http://science.howstuffworks.com/teleportation1.htm Quantum Teleportation - IBM Research Page Excerpt: "it would destroy the original (photon) in the process,," http://www.research.ibm.com/quantuminfo/teleportation/ Unconditional Quantum Teleportation - abstract Excerpt: This is the first realization of unconditional quantum teleportation where every state entering the device is actually teleported,, http://www.sciencemag.org/cgi/content/abstract/282/5389/706 Moreover, transcendent information is now shown to be an independent, and unique, higher dimensional entity that is completely separate from, and dominant over, matter and energy; The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145 The falsification for local realism (materialism) was recently greatly strengthened: Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html bornagain77
Hold the press! I have it on my blog, posted by someone who goes by "Negative Entropy", that energy flow can explain information content. He is also on Dr. Hunter's blog telling me that information is energy. So that is it, problem solved... Not. Joseph
Collin:

1: Chance-based disorder: rfheijgeg73gerbhjs (notice the randomness; to describe the string you have to quote it in full)

2: Order: ddddddddddddddddddddddddd (cf. how a crystal is built up by stacking the unit cell over and over again; to describe it: punch 'd' over and over again)

3: Information-rich organisation: this text is an example of functionally specific, complex organisation that is meaningful and informational (aperiodic, but non-random, as specified to function)

GEM of TKI kairosfocus
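A quick way to see the distinction kairosfocus is drawing is with a general-purpose compressor: a highly ordered string compresses dramatically, a random string barely compresses at all, and functional English text falls in between. The sketch below is a minimal illustration of my own (not from the comment above), using Python's standard zlib module; note that compressibility only separates order from randomness and is not, by itself, a measure of function.

import random
import string
import zlib

random.seed(0)  # fixed seed so the run is reproducible

# 3: information-rich organisation -- ordinary functional English prose
organisation = ("this text is an example of functionally specific, complex "
                "organisation that is meaningful and informational; it is "
                "aperiodic, but non-random, since it is specified to function")

n = len(organisation)  # same length for all three strings, so the ratios are comparable

# 1: chance-based disorder -- a random string of letters and digits (cf. "rfheijgeg73gerbhjs")
disorder = "".join(random.choice(string.ascii_lowercase + string.digits) for _ in range(n))

# 2: order -- one symbol repeated, like a crystal stacking a single unit cell
order = "d" * n

for label, s in [("disorder", disorder), ("order", order), ("organisation", organisation)]:
    raw = s.encode("utf-8")
    packed = zlib.compress(raw, 9)
    print(f"{label:12s} {len(raw)} bytes -> {len(packed)} bytes (ratio {len(packed) / len(raw):.2f})")

On a typical run the ordered string shrinks to a small fraction of its original size, the random string stays close to full length, and the prose lands in between (the gap between prose and random is modest at such short lengths).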
Granville, I'm trying to fully understand exactly what the difference is between order and information. Is it possible for there to be an unlimited supply of order but not enough information for life? Collin
To clarify: that last phrase should read 'but for the open systems of computers like "Watson" in particular.' bornagain77
JGuy, I caught that fact, but thought the examples in molecular biology would be relevant as well, since those examples can easily be argued to exceed what man has accomplished in concerted engineering. But seeing 'Watson's' recent victory on Jeopardy this last week, and the rampant speculation about computers becoming 'conscious' (strong AI), Dr. Sewell's comparison to what human engineers have accomplished, against the strict limits presented by thermodynamics, is far more appropriate for clearly illustrating the limits of any 'open system' in general, but for computers like 'Watson' in particular. I.e., for showing that ALL design must be implemented 'top down', not 'bottom up'. bornagain77
The preprint version of the article is here. Granville Sewell
bornagain77: The mol-bio analogs are noteworthy features! :) Not sure if you thought otherwise, but I think when Sewell wrote, "[...] makes the appearance of spaceships, computers and the Internet not extremely improbable [...]", he literally means spaceships, computers and the Internet. I.e., he is apparently granting a strictly materialist presumption, under which Darwinian evolution would have to explain the existence of spaceships etc., since spaceships are a result of human engineering, and humans a result of some blind material 'process' under that presumption. JGuy
To those who would think that spaceships, computers and the Internet do not have analogs in molecular biology, I point out that there are 'surpassing analogs' in molecular biology:

The Virus (bacteriophage) - Assembly Of A Molecular "Lunar Landing" Machine - video http://www.metacafe.com/watch/4023122

The Virus - A Molecular Lunar Landing Machine - video http://www.metacafe.com/watch/4205494

The first thought I had when I saw the bacteriophage virus is that it looks similar to the lunar lander of the Apollo program. The comparison is not without merit, considering some of the relative distances to be traveled, and that the virus must somehow possess as-yet-unelucidated orientation, guidance, docking, unloading, and loading mechanisms.

"Human DNA is like a computer program but far, far more advanced than any software we've ever created." - Bill Gates, The Road Ahead, 1996, p. 188

Bill Gates, in recognizing the superiority found in genetic coding compared to the best computer coding we now have, has funded research into this area:

Welcome to CoSBi - (Computational and Systems Biology) Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas. http://www.cosbi.eu/index.php/component/content/article/171

3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell - Oct. 2009 Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip -- while avoiding the knots and tangles that might interfere with the cell's ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication. http://www.sciencedaily.com/releases/2009/10/091008142957.htm

Nanoelectronic Transistor Combined With Biological Machine Could Lead To Better Electronics - Aug. 2009 Excerpt: While modern communication devices rely on electric fields and currents to carry the flow of information, biological systems are much more complex. They use an arsenal of membrane receptors, channels and pumps to control signal transduction that is unmatched by even the most powerful computers. http://www.sciencedaily.com/releases/2009/08/090811091834.htm

Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html

Cells Are Like Robust Computational Systems - June 2009 Excerpt: Gene regulatory networks in cell nuclei are similar to cloud computing networks, such as Google or Yahoo!, researchers report today in the online journal Molecular Systems Biology. The similarity is that each system keeps working despite the failure of individual components, whether they are master genes or computer processors. "We now have reason to think of cells as robust computational devices, employing redundancy in the same way that enables large computing systems, such as Amazon, to keep operating despite the fact that servers routinely fail." http://www.sciencedaily.com/releases/2009/06/090616103205.htm bornagain77
