Uncommon Descent Serving The Intelligent Design Community

Andy McIntosh’s Peer-Reviewed ID Paper–Note the Editor’s Note!


Professor Andy McIntosh, an ID proponent in the UK, has a peer-reviewed paper on the thermodynamic barriers to Darwinian evolution:

A. C. McIntosh, “Information and Entropy—Top-Down or Bottom-Up Development in Living Systems?” International Journal of Design & Nature and Ecodynamics 4(4) (2009): 351-385

The Editor appends the following note:

Editor’s Note: This paper presents a different paradigm than the traditional view. It is, in the view of the Journal, an exploratory paper that does not give a complete justification for the alternative view. The reader should not assume that the Journal or the reviewers agree with the conclusions of the paper.  It is a valuable contribution that challenges the conventional vision that systems can design and organise themselves.  The Journal hopes that the paper will promote the exchange of ideas in this important topic.  Comments are invited in the form of ‘Letters to the Editor’.

Here is the abstract: 

Abstract: This paper deals with the fundamental and challenging question of the ultimate origin of genetic information from a thermodynamic perspective. The theory of evolution postulates that random mutations and natural selection can increase genetic information over successive generations. It is often argued from an evolutionary perspective that this does not violate the second law of thermodynamics because it is proposed that the entropy of a non-isolated system could reduce due to energy input from an outside source, especially the sun when considering the earth as a biotic system. By this it is proposed that a particular system can become organised at the expense of an increase in entropy elsewhere. However, whilst this argument works for structures such as snowflakes that are formed by natural forces, it does not work for genetic information because the information system is composed of machinery which requires precise and non-spontaneous raised free energy levels – and crystals like snowflakes have zero free energy as the phase transition occurs. The functional machinery of biological systems such as DNA, RNA and proteins requires that precise, non-spontaneous raised free energies be formed in the molecular bonds which are maintained in a far from equilibrium state. Furthermore, biological structures contain coded instructions which, as is shown in this paper, are not defined by the matter and energy of the molecules carrying this information. Thus, the specified complexity cannot be created by natural forces even in conditions far from equilibrium. The genetic information needed to code for complex structures like proteins actually requires information which organises the natural forces surrounding it and not the other way around – the information is crucially not defined by the material on which it sits. The information system locally requires the free energies of the molecular machinery to be raised in order for the information to be stored. 
Consequently, the fundamental laws of thermodynamics show that entropy reduction which can occur naturally in non-isolated systems is not a sufficient argument to explain the origin of either biological machinery or genetic information that is inextricably intertwined with it. This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.

Comments
Petrushka, by the way, I accept common descent too.
Fine, then we could be looking for common ground. I can accept that there are vast unknown areas in biology. My inclination is to look for naturalistic explanations. I do this for two reasons: one, the search for naturalistic explanations has a long history of being productive, and two, the alternative leads to the rather sterile conclusion that some unknown agency did some unspecified thing(s) at unspecified time(s) and place(s) using unspecified methods for unspecified reasons. There are some really difficult problems in science. Gravity is one. Galileo found an equation that described gravity as acceleration, but it took a couple hundred years for Newton to apply this to all objects, including suns and planets. Then it took another couple hundred years for Einstein to iron out some inconsistencies. And we are still left with an incomplete theory. At no point in the history of gravity, since Galileo, has there been any progress made by imputing demiurges to explain inconsistencies in data. But even Newton was tempted in this direction, so intelligence and competence are no barrier to this kind of thinking. It has a powerful hold on the human imagination. I have no illusions that I am going to change anyone's mind. My motive is to sharpen my own knowledge and abilities. I don't know if I will continue much on this thread. It depends on whether I find areas that I think need clarifying. I am completely unconvinced by the argument from probability. Such an argument would require a complete history of changes in genomes, in complete detail: the kind of detail you would need to argue that lotto winners were somehow rigging the game. (It's been done, both the rigging and the catching.) Simply arguing that the present condition is improbable carries no weight. Neither you nor I have the detailed history that would expose an instance of tampering with genomes.
If you wish to argue, as Michael Denton does, that existence in the form of physical constants is rigged to produce life, I have no interest in arguing against that. Whatever the history of existence, it does seem to produce life.
Petrushka
June 7, 2010 at 10:31 AM PDT
Petrushka, there are examples of completely unique genes and proteins in humans (at least 50 to 100) ... Can you even give an example of mutations to existing genes producing morphogenesis of a new species?
I'm not sure how the number 50 to 100 affects my characterization of "most." It's my understanding that there are something like nine million unique genes in human intestinal bacteria. I would stand by my assertion that most genes have originated in microbes. But looking at articles on human-specific genes, I find that they may very well be modifications of inherited genetic material. http://news.wustl.edu/news/Pages/11349.aspx I suppose if you don't accept common descent, you can always hope that a complete gene fell out of the sky, like Athena from the head of Zeus. But where will your argument go if the new genes turn out to be just slight modifications of inherited sequences?
Petrushka
June 7, 2010 at 10:04 AM PDT
Upright BiPed: Thank you for saving me the time :) Petrushka, by the way, I accept common descent too.
gpuccio
June 7, 2010 at 10:00 AM PDT
The rebuttals from the materialists on this thread are growing increasingly weak. Actually, they started off weak and have gone downhill. Petrushka tells GP that he/she thinks ID proponents assume all protein specification (every "bit") is critical. He/she suggests it is an assumption ID proponents have been making for "some time", implying this to be a fatal flaw in their thinking and (apparently) through his/her long-time study of the ID argument he/she has come to this thoughtful conclusion about ID proponents. In turn, GP has been abundantly clear that this is not the case. He goes into much discussion of the protein domains. He reminds Petrushka that his/her suggestion would mean that every single amino acid would have to be ultraconserved, and challenges Petrushka to present a single ID proponent who makes such a claim. In the end, he asks Petrushka if he/she really believes that molecular scientists such as Behe, Abel, Axe, and Durston are not aware of this basic understanding of biology. So what is Petrushka's response? As always, it's to slip through your prior words without acknowledging their falsity, then change the subject. Petrushka hilariously retorts "I note that Behe accepts common descent". He/she then goes on to quote Durston saying exactly the opposite of his/her original claim. Geeez.
Upright BiPed
June 7, 2010 at 09:25 AM PDT
Petrushka, there are examples of completely unique genes and proteins in humans (at least 50 to 100). Thus can you give an example of just one protein/gene originating by natural means instead of so cavalierly pushing it back to a former age of pre-Cambrian miracles? Can you even give an example of mutations to existing genes producing morphogenesis of a new species? Can you produce any concrete empirical evidence whatsoever, besides your blind faith, that all this staggering complexity, which dwarfs our puny understanding in molecular biology, originated by purely material processes? Since I've been debating this for a few years and thus see no answer forthcoming from you or any other materialists, could you please tell me the answer to an easier question? Can you please tell me how the universe originated by material processes when no material, time, or space processes existed before the creation of the universe? Testify To Love - Avalon http://www.youtube.com/watch?v=P5TpPCEcI84
bornagain77
June 7, 2010 at 09:15 AM PDT
I will repeat what I said on another thread. Most of the genes that code for proteins originated before the Cambrian, or at least they exist in single-celled organisms. We may never be able to reconstruct that history. But I'm not sure what you are asking. Are you commenting on the shape? I've picked up rocks in Virginia in the shape of a cross. Actually, I've picked up handfuls of them. If you believe that everything that has happened did so inevitably, then you can compute some astronomical odds. For example, what are the odds of all your ancestors meeting at exactly the right time and place, enabling them to produce you?
Petrushka
June 7, 2010 at 08:38 AM PDT
Petrushka, could you please describe in detail the gradual origination of the following protein? (or any other biological protein for that matter?) The Laminins - authors Peter Ekblom and Rupert Timpl: "laminins hold cells and tissues together." "Electron microscopy reveals a cross-like shape for all laminins investigated so far." http://www.truthorfiction.com/rumors/l/laminin.htm Laminin Protein Molecule - diagram http://www.soulharvest.net/resources/laminin+banner+2.png Laminin Molecule - Electron Microscope Photograph http://www.survivorbiblestudy.com/_RefFiles/Laminin%20slide.jpg Laminin Protein Molecule - Louie Giglio - a very cool video http://www.youtube.com/watch?v=F0-NPPIeeRk Laminin is made up of 3712 amino acids; 20^3712 is about 10^4829. To put this in terms similar to what ID theorist William Dembski would use, this protein molecule complex of 3712 amino acids is well beyond the reach of the 10^150 probabilistic resource available to the universe. In fact Petrushka, though the cross shape is merely suggestive, and not conclusive as I readily admit, I did not realize just how strong the evidence actually was for the suggestiveness until a molecular biologist tried to assert that pagan symbols could also easily be found. His primary example to refute the cross-shaped laminin? http://en.wikipedia.org/wiki/File:Sucrose_porin_1a0s.png https://uncommondescent.com/intelligent-design/uncommon-descent-contest-question-7-foul-anonymous-darwinist-blogger-exposed-why-so-foul/#comment-325128 Now that was quite a stretch for him to make that association, was it not? But why did he feel compelled to make such a flimsy rebuttal of a merely suggestive piece of evidence unless the case for Darwinism is non-existent in the empirical realm for the formation of proteins? If Darwinism had any evidence at all of proteins originating by natural means then surely this molecular biologist would not have stooped to such a level.
All Of Creation - Mercyme http://www.youtube.com/watch?v=kkdniYsUrM8
bornagain77
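The sequence-space arithmetic in the comment above can be checked directly. A minimal sketch, assuming Python; it only computes the size of the unconstrained space 20^L for a chain of L residues, and says nothing about what fraction of that space is functional:

```python
import math

def log10_sequence_space(length: int, alphabet_size: int = 20) -> float:
    """log10 of the number of possible sequences of the given length
    over an alphabet of amino acids (20 by default)."""
    return length * math.log10(alphabet_size)

# Laminin's stated length of 3712 amino acids gives 20^3712 ~ 10^4829,
# not 10^26822 as sometimes quoted.
exponent = log10_sequence_space(3712)
print(f"20^3712 is about 10^{exponent:.0f}")
```

The same function reproduces the familiar figure for a 100-residue chain: 20^100 is about 10^130.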
June 7, 2010 at 08:07 AM PDT
Do you really believe that people like Behe, Axe, Abel and Durston, who have seriously been researching this problem for years, and have published about it, are not aware of this simple fact?
I note that Behe accepts common descent.
And you have been shown lots of evidence and arguments that conflict with the idea of a protein being a gradient rather than a step function.
Well let's see what Durston says:
In principle, some proteins may change from a non-functional state to a functional state gradually as their sequences change. Furthermore, iso-enzymes in some cases may catalyze the same reaction, but have different sequences. Also, certain enzymes may demonstrate variations in both their sequence and their function. Finally, a single mutation in a functional sequence can sometimes render the sequence non-functional relative to the original function.
Petrushka
June 7, 2010 at 07:17 AM PDT
gpuccio @ 62,
And there is no problem of “intended use”: just what the protein can do, and indeed does, in the cellular context.
If the protein results in a change in the body plan that prevents it from reproducing at the same rate as a competitor, that protein has hurt its host's chance of survival. The host body plan can go extinct because of that protein. As in any engineering, we have feedback.
Toronto
June 7, 2010 at 06:09 AM PDT
Petrushka (#65): Apparently in the ID version of biology you can compute probabilities using hypothetical data. In science, you always build models to explain known data. Models can use assumptions, possibly reasonable assumptions, where data are not yet fully known. And models must be evaluated quantitatively to establish if they are consistent with their assumptions, and if they are internally consistent. Neo-darwinian evolution is the only model which seems not to care about that. I know that sounds snide, but based on the papers you asked me to read, ID proponents have for some time been unjustifiably calculating probabilities on the assumption that every bit in the genome specifying a protein was critical. It just sounds false. ID proponents have never done that. It is well known that in protein sequences individual sites have different relevance. Otherwise, every single amino acid should be ultraconserved. That is the basics of the biochemistry of proteins. That's why ID proponents have never argued that the target of the search is a single structure. The target is a functional space, called the target space. Do you really believe that people like Behe, Axe, Abel and Durston, who have seriously been researching this problem for years, and have published about it, are not aware of this simple fact? You can also read any post of mine on this blog in the last few years, and I challenge you to find one where I make such an assumption. I will refrain from returning the favor. I don't know what the probabilities are. That's certainly true. And, like many darwinists, you don't seem to care. I've seen nothing, however, that conflicts with the usefulness of a protein being a gradient rather than a step function. I can't deny that "a protein being a gradient rather than a step function" would certainly be useful, at least for darwinists. The problem is that it is not true.
And you have been shown lots of evidence and arguments that conflict with the idea of a protein being a gradient rather than a step function. But you can always choose not to look at them, or just not to accept them. You could even discuss them. Your privilege.
gpuccio
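The distinction drawn above between a single target structure and a functional target space can be put numerically: what matters is the ratio of functional sequences to the whole space, not the probability of one exact sequence. A toy sketch, assuming Python; the choice of a 100-residue chain with 40 unconstrained sites is purely illustrative, not a claim about any real protein:

```python
import math

ALPHABET_SIZE = 20  # the standard amino acids

def log10_p_exact(length: int) -> float:
    """log10 probability of drawing one exact sequence at random."""
    return -length * math.log10(ALPHABET_SIZE)

def log10_p_target_space(length: int, free_sites: int) -> float:
    """Toy target space: free_sites accept any residue, the rest are
    fixed.  Only the constrained sites contribute to the improbability."""
    return -(length - free_sites) * math.log10(ALPHABET_SIZE)

print(f"one exact 100-mer:  10^{log10_p_exact(100):.0f}")
print(f"40 free sites:      10^{log10_p_target_space(100, 40):.0f}")
```

Allowing 40 sites to vary raises the hit probability by over fifty orders of magnitude, which is the sense in which a target space differs from a single target.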
June 7, 2010 at 05:39 AM PDT
Petrushka (#64): Regarding superfamilies and difficult transitions: has any biologist proposed that such transitions have taken place? How do you suppose that new superfamilies and new folds emerged? By special creation? :) According to the paper about the evolution of protein domains which I have many times quoted, about half of protein domains must have been present in LUCA. You will say: but that is OOL, we are not debating OOL at this moment. That does not solve the problem, but it's OK for me: one thing at a time. But the other half, more than 1000 superfamilies, emerged after, many of them, hundreds of them, in metazoa. Do you want to explain them, or do we just take them for granted? Are biologists still scientists, interested in explaining what we observe? So, if transitions did not happen, we are back to special creation. If that is your favourite hypothesis, just state it. It isn't mine. It seems to me that the papers argue that the transition from random sequence to functional sequence can take place seamlessly. Absolutely not. The Weiss paper just confirms what we already knew: that superfamilies are distant islands of functionality. But to be sure of that, no paper is necessary. It is enough to take random pairs of proteins from different superfamilies, input them into BLAST, and verify the percent of homology. I have done that many times. You can do it yourself. No significant homology is found. That's exactly why superfamilies are superfamilies. And two proteins from different superfamilies are about as distant as it is possible to be, at the primary structure level. The only different view could be that there is something common to all functional proteins, which makes them in some way part of a generic island of sequences. But the Weiss paper excludes exactly that. Perhaps even the random sequence has functionality. Absolutely not. That is well known. Most random amino acid sequences do not even start to fold.
In a living environment they would be only dangerous corpses. And of the sequences which fold, only a tiny part fold well. And of those which fold, only a few have specific and useful functions. To be functional, a protein must not only fold very efficiently, but also fold in a way which has some possible biological use, and have an active site which has some possible biological use, and be integrated in the environment where it originated, and be correctly regulated, and so on. Targets, again. Almost any string of alphabetical letters will contain functional substrings: letters, letter pairs, even triplets found in words. The papers you linked argue that something like that is true of proteins. Where? There is no evidence of that. Show me a substring of single protein domains which has selectable function. Anyway, the Axe paper debates that point very seriously. The functional unit of proteins remains the domain. And the average domain length is about 130 AAs.
gpuccio
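The BLAST exercise described above can be mimicked in miniature. A toy sketch, assuming Python; real BLAST does local alignment with substitution matrices and E-values, while this just counts site-by-site identity between two random equal-length "domains", where roughly 5% identity (1 in 20) is expected by chance alone:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes, 20 residues

def percent_identity(a: str, b: str) -> float:
    """Naive site-by-site identity for two equal-length sequences."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

random.seed(0)
# Two unrelated random 130-residue chains (the average domain length
# cited above); chance identity hovers around 5%.
s1 = "".join(random.choice(AMINO_ACIDS) for _ in range(130))
s2 = "".join(random.choice(AMINO_ACIDS) for _ in range(130))
print(f"identity between unrelated sequences: {percent_identity(s1, s2):.1f}%")
```

The ~5% chance baseline is why alignment tools need statistics (E-values) to distinguish real homology from noise; raw identity near that baseline is what "no significant homology" means here.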
June 7, 2010 at 05:23 AM PDT
Petrushka, you state: "I've seen nothing, however, that conflicts with the usefulness of a protein being a gradient rather than a step function." And the evidence says:

The Case Against a Darwinian Origin of Protein Folds - Douglas Axe, Jay Richards - audio http://intelligentdesign.podomatic.com/player/web/2010-05-03T11_09_03-07_00

Minimal Complexity Relegates Life Origin Models To Fanciful Speculation - Nov. 2009 Excerpt: Based on the structural requirements of enzyme activity Axe emphatically argued against a global-ascent model of the function landscape in which incremental improvements of an arbitrary starting sequence "lead to a globally optimal final sequence with reasonably high probability". For a protein made from scratch in a prebiotic soup, the odds of finding such globally optimal solutions are infinitesimally small - somewhere between 1 in 10^140 and 1 in 10^164 for a 150 amino acid long sequence if we factor in the probabilities of forming peptide bonds and of incorporating only left-handed amino acids. http://www.arn.org/blogs/index.php/2/2009/11/10/minimal_complexity_relegates_life_origin

The Case Against a Darwinian Origin of Protein Folds - Douglas Axe - 2010 Excerpt Pg. 11: "Based on analysis of the genomes of 447 bacterial species, the projected number of different domain structures per species averages 991. Comparing this to the number of pathways by which metabolic processes are carried out, which is around 263 for E. coli, provides a rough figure of three or four new domain folds being needed, on average, for every new metabolic pathway. In order to accomplish this successfully, an evolutionary search would need to be capable of locating sequences that amount to anything from one in 10^159 to one in 10^308 possibilities, something the neo-Darwinian model falls short of by a very wide margin." http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1

Evolution vs. Functional Proteins (Mount Improbable) - Doug Axe - Video http://www.metacafe.com/watch/4018222

Dollo's law, the symmetry of time, and the edge of evolution - Michael Behe - Oct 2009 Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution. A time-symmetric Dollo's law turns the notion of "pre-adaptation" on its head. The law instead predicts something like "pre-sequestration", where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses. http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html

Severe Limits to Darwinian Evolution - Michael Behe - Oct. 2009 Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins, not just the ones he worked on. Thanks to Thornton's impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed. http://www.evolutionnews.org/2009/10/severe_limits_to_darwinian_evo.html#more

Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video http://www.metacafe.com/watch/3995236

"a very rough but conservative result is that if all the sequences that define a particular (protein) structure or fold-set were gathered into an area 1 square meter in area, the next island would be tens of millions of light years away." - Kirk Durston

Stephen Meyer - Functional Proteins And Information For Body Plans - video http://www.metacafe.com/watch/4050681

The best evidence evolutionists have for gradual ascent of proteins? A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells Excerpt: "Recent advances in de novo protein evolution have made it possible to create synthetic proteins from unbiased libraries that fold into stable tertiary structures with predefined functions. However, it is not known whether such proteins will be functional when expressed inside living cells or how a host organism would respond to an encounter with a non-biological protein. Here, we examine the physiology and morphology of Escherichia coli cells engineered to express a synthetic ATP-binding protein evolved entirely from non-biological origins. We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division." http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0007385

Thus evolutionists have not shown the "ascent" of even one functional protein.
bornagain77
June 7, 2010 at 04:05 AM PDT
This is really funny. First of all, why do you compare a methodology regarding biology to measurements in the hard sciences? There are big differences, as you should know.
Apparently in the ID version of biology you can compute probabilities using hypothetical data. I know that sounds snide, but based on the papers you asked me to read, ID proponents have for some time been unjustifiably calculating probabilities on the assumption that every bit in the genome specifying a protein was critical. I will refrain from returning the favor. I don't know what the probabilities are. I've seen nothing, however, that conflicts with the usefulness of a protein being a gradient rather than a step function.
Petrushka
June 6, 2010 at 08:34 PM PDT
Regarding superfamilies and difficult transitions: has any biologist proposed that such transitions have taken place? It seems to me that the papers argue that the transition from random sequence to functional sequence can take place seamlessly. Perhaps even the random sequence has functionality. Almost any string of alphabetical letters will contain functional substrings: letters, letter pairs, even triplets found in words. The papers you linked argue that something like that is true of proteins.
Petrushka
June 6, 2010 at 04:41 PM PDT
Petrushka (#61): Well, you are back to "no arguments". It means simply that there is no consensus to use this methodology. In contrast to the consensus for measurements of Ohms, Amps and such in electrical engineering, or measurements of entropy in physics. This is really funny. First of all, why do you compare a methodology regarding biology to measurements in the hard sciences? There are big differences, as you should know. But the really funny thing is that you are asking for a "consensus" about a methodology proposed in an ID-friendly paper! Funny indeed. As for who has missed the authors' intended meaning, I will only note that these papers do not mention seas of non-functionality, nor do they suggest any problems for traditional stepwise evolution. If you find anything like that in the papers, feel free to quote it. First of all, what interests me in a paper are its facts and conclusions, not the political opinion of the authors. I am free to quote the facts and methodology of a paper even if the authors come to different conclusions from mine. That said, fortunately that's not the case here. Durston has come here to UD in the past exactly to explain the relevance to ID of his methodology (and had to go away before completing his work for "unknown reasons"). There was also a video posted here where he explained what his work meant. And if you want to read words like "seas of non-functionality" or some equivalent concept, please go to the site of the new peer-reviewed journal "bio-complexity" and read the review by Axe about the problem of the emergence of protein domains. You will find there practically all that I have said, and more. The title? "The Case Against a Darwinian Origin of Protein Folds". And guess who posted a comment about the Axe paper? David L. Abel, in person. And the comment? "Excellent paper."
gpuccio
June 6, 2010 at 04:40 PM PDT
Toronto (#52): Applying a complexity measurement to a pattern from simply a mathematical point of view, is almost impossible without knowing a lot about the intended use of that pattern. What's your problem? Perhaps you have not read my definition (point 1): 1) The specification has to be functional. In other words, the information is specified because it conveys the instructions for a specific function, one which can be recognized and defined and objectively measured as present or absent, if necessary using a quantitative threshold. That's a lot of context, and not "simply a mathematical point of view". What do you mean about the "intended use of that pattern"? The first requirement to speak of dFSCI is that a conscious observer recognize a function and explicitly define it. For proteins, it is necessary that the protein is recognized as having a function in a context (which, as I have already said, can easily be found in protein databases in the field "function"). An enzyme, for instance, is a biochemical catalyst of a specific, measurable biochemical reaction in the context of the cell. Again, what is your problem? That's why Petrushka says @ 48, [snip..]that the value or fitness of any allele can change over time.[..snip] [..snip]Any theory of biological information must take this into account. There is no fixed target.[..snip] In other words, the FS component of your dFSCI has changed. Again, what has that to do with the biochemical function of a protein? That does not change. Moreover, my definition of dFSCI is given for an explicit function, not for "any possible function". I will not go into this discussion now, it's too late, but I paste here a brief response I gave to Petrushka on this subject on the Ayala thread: Petrushka: "If one has a specific target, such as the works of Shakespeare, odds are that random variation and selection will not produce it" My response: "Nor will they produce the proteome.
Again, the argument that 'evolution can take any direction' is a false argument. Once you have a basic structure for life (DNA, proteins, cells, etc.), directions are extremely narrow. You need proteins which fold and have active sites, and those active sites must do something useful in the context where they are supposed to arise, and must interact with what already exists, and the new proteins must be regulated in the correct way, and so on. Targets, targets everywhere!" Finally, your two sequences. These are old, useless tricks. I have stated explicitly that one can discuss the presence of dFSCI only if a function is recognized. A function can be in a string, but the observer can fail to recognize it. That will be a false negative. We have always stated explicitly that the search for dFSCI has potentially a lot of false negatives. There are two main causes: the specification can be present, but the complexity can be low ("simple designs"); or the specification is there but is not recognized ("unrecognized specification"). And so? If you give me an AA sequence of, say, 150 AAs, and ask me if it is functionally specified, I have no idea how to answer you. But if I know that the sequence corresponds to the primary structure of a known functional protein, then I can recognize the specification, define it explicitly (a protein which in the lab, in the right context, can do such and such) and even fix a quantitative test to assess the function. So, again, what is your problem? I thought I had already said all that in my previous posts. So, why do you come here with your tricks? But, to answer you just the same, what I see in your two sequences is only the following: a) two short binary sequences (total complexity 12 bits each), so no discussion about possible dFSCI here. b) The second sequence is obviously potentially compressible, so it would never qualify as dFSCI according to my definition even if it were longer.
c) I recognize nothing in the first sequence, but that could just be my ignorance of mathematics. Now you could say: but it's the first figures of pi, or something else... And so? I just don't know. Again, and so? What has all that to do with "patterns" and "intended use"? A protein sequence has no special pattern: it is pseudo-random. This is one of the points I have made ad nauseam in my previous posts. That's why I could never recognize it just by looking at it. And there is no problem of "intended use": just what the protein can do, and indeed does, in the cellular context.
gpuccio
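The compressibility criterion invoked above can be illustrated with a crude proxy. A sketch, assuming Python; zlib compression stands in for Kolmogorov complexity (which is uncomputable), so the ratios are only indicative: a repetitive string compresses to a small fraction of its size, while a random string barely compresses at all.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; values well below 1
    mean the data is highly compressible (ordered), values near 1 mean
    it is essentially incompressible (random-looking)."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(1)
repetitive = b"01" * 500                                   # ordered pattern
noise = bytes(random.getrandbits(8) for _ in range(1000))  # random bytes

print(f"repetitive: {compression_ratio(repetitive):.2f}")  # well below 1
print(f"random:     {compression_ratio(noise):.2f}")       # close to 1
```

By this proxy, a pseudo-random protein-like sequence would sit near the incompressible end, which is the sense in which a functional protein sequence has "no special pattern".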
June 6, 2010 at 04:24 PM PDT
What does that mean? In empirical science, everything is a proposal, and nothing is a finished product, whatever that may mean...
It means simply that there is no consensus to use this methodology. In contrast to the consensus for measurements of Ohms, Amps and such in electrical engineering, or measurements of entropy in physics. As for who has missed the authors' intended meaning, I will only note that these papers do not mention seas of non-functionality, nor do they suggest any problems for traditional stepwise evolution. If you find anything like that in the papers, feel free to quote it.
Petrushka
June 6, 2010 at 4:17 PM PDT
warehuff: I suppose you refer to what is at present post 46. The link for the Abel and Trevors paper is the following: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/ and here is the link to the Durston paper: http://www.tbiomed.com/content/4/1/47 I apologize, the problem is that I had pasted the post from another thread. I hope now the links work.gpuccio
June 6, 2010 at 3:52 PM PDT
Petrushka (#52 – 56): That's better. Now you have made specific statements, and we can discuss. I will try to be as clear as possible, but the subject requires some patient attention. 1) Random sequences (what Durston calls "the null state"). A truly random sequence of AAs, of length n, has a maximum value of Shannon's H of 4.32 bits per site. This is easily calculated as follows. Let's pretend that n = 100. The total complexity of the string will be 20^100, that is about 10^130, that is about 2^432. So, the total complexity of the string is 432 bits, and the complexity per site is 4.32 bits. That value, as you certainly know, is often called information, but is in reality a measure of uncertainty, and has nothing to do with meaning or function. Shannon's theory is not a theory of meaning. It is perfectly normal that the maximum value of H is obtained in purely random strings, where the uncertainty is maximum. Another way to say that is that in a purely random string compressibility is extremely low and Kolmogorov complexity is similar to total complexity. But purely random sequences convey no meaning or function. 2) The paper by Weiss et al. In this paper, the authors try in various ways to evaluate the uncertainty (H) in a specific set of proteins, and also to evaluate the compressibility of the same set. They find that the value of H per site in their set is 4.19 bits per site, only slightly lower than the maximum value, and that compressibility in the same set is very low. But the key point to understanding the meaning of their findings is to see what the set of proteins which they evaluate is. We can find that in the paper: "As data sets, we use a set of protein sequences with one protein of each superfamily. This superfamily set was introduced by White (1994)." In other words, they are taking one protein from each superfamily, and they are evaluating H in the whole set. At the same time, they are evaluating whether the whole set is compressible. 
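The per-site maximum of 4.32 bits quoted above follows directly from the 20-letter amino-acid alphabet. A minimal Python sketch of the arithmetic (variable names are mine, purely illustrative):

```python
import math

# Maximum Shannon uncertainty per site for a uniformly random
# amino-acid sequence: log2 of the 20-letter alphabet.
bits_per_site = math.log2(20)

# Total complexity of a 100-residue random string, as in the comment:
# 20^100 is about 2^432, i.e. about 432 bits.
n = 100
total_bits = n * bits_per_site

print(round(bits_per_site, 2))  # 4.32
print(round(total_bits))        # 432
```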
So, what do their findings mean? As their data set is composed of proteins from different superfamilies, it can be considered as a sample of the whole genome. The low compressibility and the minimal reduction in uncertainty mean two important things: a) The protein sequences are pseudo-random: there is no intrinsic feature in them which allows recognition from true random sequences, except for the function of the protein itself. In other words, functional protein sequences cannot be generated algorithmically, and are not significantly compressible. b) Proteins from different superfamilies are totally unrelated one from the other: there is no recurrent similarity between them which can significantly reduce the uncertainty. The H per site in a transversal sample of the proteome is almost as big as the maximum H of true random sequences. That is very important, because it is evidence that protein superfamilies are isolated islands of functionality in the ocean of possible sequences. 3) The paper by Durston et al. What is the difference then in the procedure used by Durston? It's simple. Durston applies the calculation of uncertainty (H) to sets of homologous proteins in different species. That's a completely different scenario. Here each set is formed by sequences in different species which have the same folding and the same function, but may differ in primary sequence in some measure. Let's take, for instance, from Table 1 of the paper, the case of ribosomal S12, a rather small protein of 121 AAs. The authors have evaluated 603 sequences of that protein from different species. What they do is the following: a) They calculate the complexity of the null state (a purely random sequence of 121 AAs), which will be: 20^121, that is 2^522. Total complexity: 522 bits; H per site 4.32. b) They calculate (and this is the truly smart idea) H for the whole set of 603 sequences, assigning values for each site which depend critically on how much the site is conserved in the set. 
The two extreme situations are: a site ultraconserved in all sequences corresponds to H = 0 bits (no uncertainty); a site which varies in a completely random way in the set corresponds to the maximum H (4.32 bits). Obviously, each amino acid site can have any intermediate value. c) For each site, they calculate a – b. This is the reduction of uncertainty for that site due to the fact that sequences are part of a functionally constrained subset. This value, expressed in Fits (functional bits), is a measure of how much that site is "constrained" for that functional sequence: an ultraconserved site will have the maximum Fit value (4.32 – 0 = 4.32 Fits), and therefore the maximum quantity of specified information. A site where amino acids can vary randomly will have the minimum Fit value (4.32 – 4.32 = 0), and will contribute nothing to the total specified information. d) Finally, the Fit values of each site are summed, and that gives the total Fit value for that family of proteins. The average Fit value per site is also calculated. For instance, for the protein family ribosomal S12, while the total complexity of the random state was 522 bits, the Fit value is 359 bits (522 – 163), and the average Fit value per site is 3.0 bits. Another way of expressing that is that the size of the search space is 2^522 (10^157); the size of the target (functional) space is 2^163 (10^49); and the ratio of the two (the specified complexity) is 2^359 (10^108). In other words, the probability of finding a functional sequence of this group by a single random event is of the order of 10^-108. All that can be found in the paper, but what does it mean? It means that the island of functionality for this specific function is one part in 10^108 of the search space. Quite a remarkable result. Before you object again that proteins do not form altogether randomly, I will refer again to the result in 2): each protein superfamily is a separate island of functionality. 
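The per-site procedure described in steps a) through d) can be sketched in a few lines of Python. This is a toy illustration of the idea (sum over sites of H_null minus the observed column uncertainty), not Durston's actual code; function names and the three-sequence alignment are mine:

```python
import math
from collections import Counter

def site_entropy(column):
    """Observed Shannon uncertainty H (bits) of one alignment column."""
    counts = Counter(column)
    n = len(column)
    # The log2(n/c) form keeps the result a clean 0.0 for conserved sites.
    return sum((c / n) * math.log2(n / c) for c in counts.values())

def functional_bits(alignment, alphabet_size=20):
    """Total Fits: sum over sites of (H_null - H_observed),
    where H_null = log2(alphabet size) is the random 'null state' value."""
    h_null = math.log2(alphabet_size)
    return sum(h_null - site_entropy(col) for col in zip(*alignment))

# Toy alignment of three homologous sequences (illustrative, not real data):
# sites 1 and 2 are fully conserved, site 3 varies freely among 3 residues.
print(round(functional_bits(["MKV", "MKL", "MKI"]), 2))  # 11.38 Fits
```

Conserved columns contribute the full 4.32 Fits each; the variable column contributes 4.32 minus log2(3), which is how conservation across species translates into functional information in this scheme.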
To pass from one to another by random variation, one has to traverse the ocean of non-functional sequences. And anyway, if you read the Durston paper carefully, you will see that the method can also be applied to the calculation of the change in functional specified complexity with sequence variations, and can therefore be applied quantitatively to specified transition scenarios, for instance from one function to another, if and when proposed by those who believe in them (the darwinists). So, I think you have completely missed the point of all this reasoning. I hope these further notes may help. And by the way, what do you mean by that phrase about the Durston paper? "It seems to be a proposal rather than a finished product." What does that mean? In empirical science, everything is a proposal, and nothing is a finished product, whatever that may mean. Unless you are one of those people who believe that theories can become facts…gpuccio
June 6, 2010 at 3:41 PM PDT
gpuccio, neither of the links in message 48 works. They both have ellipses inserted into them.warehuff
June 6, 2010 at 1:29 PM PDT
From the references you provided, and which, I assume, you endorse as accurate, I gather the following: 1. Protein sequences appear to be lightly edited random sequences. 2. Protein function can change gradually from non-functional through varying degrees of functionality. (It can also be dramatically affected, for better or for worse, by a single point mutation.) 3. Measures of information must include fitness. There are many possible ways of defining fitness, but the only one relevant to evolution is reproductive fitness. Changes in proteins that affect things like hair color are relevant only if they increase or decrease reproductive success.Petrushka
June 6, 2010 at 9:32 AM PDT
I'm not sure the Durston paper means what you think it means. It seems to be a proposal rather than a finished product. Furthermore, its focus is on the evolution of proteins. It doesn't suggest that such evolution is impossible:
In principle, some proteins may change from a non-functional state to a functional state gradually as their sequences change. ... the FSC of a biosequence can be measured as it changes with time, as shown in Figure 1. When measuring evolutionary change in terms of FSC, it is necessary to account for a change in function due to insertions, deletions, substitutions and shuffling.
http://www.tbiomed.com/content/4/1/47Petrushka
June 6, 2010 at 9:19 AM PDT
It might seem paradoxical that a functional sequence would be less complex than a random one, but the information theory you endorse seems to assign a maximum quantity to random strings.Petrushka
June 6, 2010 at 8:11 AM PDT
It seems to me that the study you linked pretty much obliterates the notion that every bit of a protein-producing gene needs to be specified. Maybe one percent, or maybe less, since the article says non-randomness is not required for functionality. I suspect this has some consequences for any computation of probability.Petrushka
June 6, 2010 at 8:05 AM PDT
You linked me to this:
Even though proteins are the results of long-time and specific evolution, little regularities in the sequences can be detected by the means of information theory. Our rather rough estimate that proteins are about 1% less complex than random sequences stresses this point. Studies on protein structure prediction have shown that non-randomness in the sequence is not required for protein to be functional: Ptitsyn & Volken-stein (1986) suggest that proteins are slightly edited random polymers...
So according to the paper you linked and asked me to read, functional proteins have, on average, one percent less information than non-functional proteins. Furthermore, "little deviation from randomness in the sequence is needed for a protein to recognize a specific receptor surface." So I am going to ask again, of what use is a theory of biological information or biological entropy that does not take into account the effects of change on viability and reproductive success?Petrushka
June 6, 2010 at 7:54 AM PDT
gpuccio @ 47,
Complexity is a classical measure of information in bits.
Applying a complexity measurement to a pattern from a purely mathematical point of view is almost impossible without knowing a lot about the intended use of that pattern. That's why Petrushka says @ 48,
[snip..]that the value or fitness of any allele can change over time.[..snip] [..snip]Any theory of biological information must take this into account. there is no fixed target.[..snip]
In other words, the FS component of your dFSCI has changed. As an example of trying to compute your dFSCI without an environment, which of the following has more dFSCI? 1) 010011010010 OR 2) 111111111111Toronto
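One measurable difference between Toronto's two strings is their per-symbol Shannon entropy, which also tracks the compressibility point raised elsewhere in the thread. A minimal sketch (the function name is mine, illustrative only):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy in bits per symbol of a string."""
    counts = Counter(s)
    n = len(s)
    # log2(n/c) form avoids a negative zero for single-symbol strings.
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(round(shannon_entropy("010011010010"), 2))  # 0.98 bits per symbol
print(shannon_entropy("111111111111"))            # 0.0 - fully compressible
```

Entropy alone says nothing about function, which is exactly the point both sides circle around: the second string carries near-zero uncertainty, yet neither string is long enough for any dFSCI-style assessment.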
June 6, 2010 at 5:51 AM PDT
Petrushka (#49): Again, read Durston's paper. He has measured the functional information in about twenty protein families. But please, read it! And, just a bit of information for you: we do know the sequences of a lot of proteins in the proteome. So, a lot of calculations can be made, if darwinists will ever care to offer some real scenario, even if only hypothetical, of molecular evolution. Now, please, add some other short post which has nothing to do with the tons of detailed arguments which I have offered. I wish you the best.gpuccio
June 6, 2010 at 5:45 AM PDT
Petrushka (#48): You are really beyond any hope. Is that your idea of a discussion? Why has a unit of measure not been established? You go on saying that, and I go on saying that the unit is functional bits, or fits. Bits, exactly as in the measurement of Shannon's H. Do you understand that? Functional, because you measure the information in functional strings. Is that clear? If not, please say why. And please, read the Durston paper, where the fits are rigorously defined. And what has the value of fitness of alleles to do with the measurement of the FSCI of a protein? I have already answered that, but I believe it's useless. The function of a protein is a biochemical property; it has nothing to do with fitness, or with fitness landscapes. The enzymatic activity of a protein can be easily measured in a lab, in absolute units. It requires no assumption about fitness landscapes, allele changes or anything else. Just the measurement of an objective biochemical property. And you can find that property listed in all protein databases for many known proteins, in the field "function". Is that clear? If not, please state why. I have already answered your "no fixed target" argument in great detail on another thread, without any comment from you. I will not do it again here. I have never stated that "digital" means "binary". That was Seversky's statement, if I am not wrong. I have only said that "digital" means "coded in numbers", and that the quaternary alphabet of DNA can be easily translated into binary, if for any reason we want to do that.gpuccio
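The quaternary-to-binary translation gpuccio mentions is a two-bits-per-base encoding. A minimal sketch, where the specific base-to-bits assignment is arbitrary (any one-to-one mapping of the four bases works):

```python
# Illustrative 2-bit encoding of the four DNA bases; the particular
# assignment below is an arbitrary choice, not a standard.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_binary(seq):
    """Translate a DNA string into a binary string, 2 bits per base."""
    return "".join(BASE_TO_BITS[base] for base in seq.upper())

print(dna_to_binary("GATTACA"))  # 10001111000100
```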
June 6, 2010 at 5:24 AM PDT
The problem continues to be that ID proponents base fundamental conclusions on data which doesn't exist. Until you can point to a specific genome and say it contains x bits of information, and that the genome of another individual contains x+y or x-y bits of information, the claims regarding creation of information are empty. I can't think of any instance in science where conclusions about quantities preceded the measurement of those quantities.Petrushka
June 6, 2010 at 1:31 AM PDT
Lots of words, but no example of a measurement. I disagree that an objective unit of measurement has been established. You haven't even mentioned the most obvious complicating fact: that the value or fitness of any allele can change over time. It can change because the selecting environment changes, or it can change because of interactions with other changing alleles. Any theory of biological information must take this into account. There is no fixed target. The "correct" spelling of words shifts. The correct spelling is determined by whether an individual reproduces. I respectfully disagree with the assertion that "digital" implies binary. We opted for binary coding in computers because it was easy to implement in relays, vacuum tubes and transistors.Petrushka
June 6, 2010 at 1:06 AM PDT